DOE PAGES® Journal Article

Title: Anharmonic properties in Mg2X (X = C, Si, Ge, Sn, Pb) from first-principles calculations
Authors: Chernatynskiy, Aleksandr [1]; Phillpot, Simon R. [1]
[1] Univ. of Florida, Gainesville, FL (United States)
Sponsoring Org.: Energy Frontier Research Centers (EFRC) (United States). Center for Materials Science of Nuclear Fuel (CMSNF); CMSNF partners with Idaho National Laboratory (lead), Colorado School of Mines, University of Florida, Oak Ridge National Laboratory, Purdue University, and University of Wisconsin at Madison
Contract Number: AC07-05ID14517
Journal: Physical Review. B, Condensed Matter and Materials Physics, Vol. 92, Issue 6 (2015); ISSN 1098-0121
DOI: https://doi.org/10.1103/PhysRevB.92.064303

Abstract: Thermal conductivity reduction is one of the potential routes to improving the performance of thermoelectric materials, yet a detailed understanding of the thermal transport of many promising materials is still missing. In this paper we employ electronic-structure calculations at the level of density functional theory to elucidate the thermal transport properties of the Mg2X (X = C, Si, Ge, Sn, Pb) family of compounds, which includes Mg2Si, a material already identified as a potential thermoelectric. All of these materials crystallize in the same antifluorite structure. Systematic trends in the anharmonic properties of these materials are presented and examined. The calculations indicate that the reduction in the group velocity is the main driver of the thermal conductivity trend in these materials, as the phonon lifetimes in these compounds are very similar. We also examine the limits of applicability of perturbation theory for studying the effect of point defects on thermal transport and find that it agrees well with experiment over a wide range of scattering parameter values. The thermal conductivity of the recently synthesized Mg2C is computed and predicted to be 34 W/mK at 300°C.
Advances in Discrete Differential Geometry, pp 133–149

Approximation of Conformal Mappings Using Conformally Equivalent Triangular Lattices

Ulrike Bücking

Two triangle meshes are conformally equivalent if their edge lengths are related by scale factors associated to the vertices. Such a pair can be considered as preimage and image of a discrete conformal map. In this article we study the approximation of a given smooth conformal map f by such discrete conformal maps \(f^\varepsilon \) defined on triangular lattices. In particular, let T be an infinite triangulation of the plane with congruent strictly acute triangles. We scale this triangular lattice by \(\varepsilon >0\) and approximate a compact subset of the domain of f with a portion of it. For \(\varepsilon \) small enough we prove that there exists a conformally equivalent triangle mesh whose scale factors are given by \(\log |f'|\) on the boundary. Furthermore, we show that the corresponding discrete conformal (piecewise linear) maps \(f^\varepsilon \) converge to f uniformly in \(C^1\) with error of order \(\varepsilon \).

Keywords: Dirichlet problem · Circle pattern · Triangular lattice · Boundary vertex

1 Introduction

Holomorphic functions build the basis and heart of the rich theory of complex analysis. Holomorphic functions with nowhere vanishing derivative, also called conformal maps, have the property that they preserve angles. Thus they may be characterized by the fact that they are infinitesimal scale-rotations. In the discrete theory, the idea of characterizing conformal maps as local scale-rotations may be translated into different concepts. Here we consider the discretization coming from a metric viewpoint: infinitesimally, lengths are scaled by a factor, i.e. by \(|f'(z)|\) for a conformal function f on \(D\subset \mathbb C\).
More generally, on a smooth manifold two Riemannian metrics g and \(\tilde{g}\) are conformally equivalent if \(\tilde{g}=\text {e}^{2u}g\) for some smooth function u. In this discrete setting, the smooth complex domain (or manifold) is replaced by a triangulation of a connected subset of the plane \(\mathbb C\) (or a triangulated piecewise Euclidean manifold).

1.1 Convergence for Discrete Conformal PL-Maps on Triangular Lattices

In this article we focus on the case where the triangulation is (part of) a triangular lattice. In particular, let T be a lattice triangulation of the whole complex plane \(\mathbb C\) with congruent triangles, see Fig. 1a. The sets of vertices and edges of T are denoted by V and E respectively. Edges will often be written as \(e=[v_i,v_j]\in E\), where \(v_i,v_j\in V\) are its incident vertices. For triangular faces we use the notation \(\varDelta [v_i,v_j,v_k]\), enumerating the incident vertices with respect to the orientation (counterclockwise) of \(\mathbb C\).

Fig. 1 Lattice triangulation of the plane with congruent triangles: (a) example of a triangular lattice; (b) acute-angled triangle.

On a subcomplex of T we now define a discrete conformal mapping. The main idea is to change the lengths of the edges of the triangulation according to scale factors at the vertices. The new triangles are then 'glued together' to result in a piecewise linear map, see Fig. 2 for an illustration. More precisely, we have

Definition 1.1 A discrete conformal PL-mapping g is a continuous and orientation-preserving map of a subcomplex \(T_S\) of a triangular lattice T to \(\mathbb C\) which is locally a homeomorphism in a neighborhood of each interior point and whose restriction to every triangle is a linear map onto the corresponding image triangle; that is, the mapping is piecewise linear.
Furthermore, there exists a function \(u:V_S\rightarrow \mathbb {R}\) on the vertices, called associated scale factors, such that for all edges \(e=[v,w]\in E_S\) there holds $$\begin{aligned} |g(v)-g(w)|=|v-w|\text {e}^{(u(v)+u(w))/2}, \end{aligned}$$ where |a| denotes the modulus of \(a\in \mathbb C\).

Note that Eq. (1) expresses a linear relation for the logarithmic edge lengths, that is, $$\begin{aligned} 2\log |g(v)-g(w)| =2\log |v-w| +u(v)+u(w). \end{aligned}$$

Fig. 2 Example of a discrete conformal PL-map g.

In fact, the definition of a discrete conformal PL-map relies on the notion of discrete conformal triangle meshes. These have been studied by Luo, Gu, Sun, Wu, Guo [8, 9, 14], Bobenko, Pinkall, and Springborn [1], and others. As a possible application, discrete conformal PL-maps can be used for discrete uniformization. The simplest case is a discrete Riemann mapping theorem, i.e. the problem of finding a discrete conformal mapping of a simply connected domain onto the unit disc. Similarly, we may consider a related Dirichlet problem: given some function \(u_\partial \) on the boundary of a subcomplex \(T_S\), find a discrete conformal PL-map whose associated scale factors agree on the boundary with \(u_\partial \). For such a Dirichlet problem (with assumptions on \(u_\partial \) and \(T_S\)) we will prove existence as part of our convergence theorem.

In this article we present a first answer to the following problem: given a smooth conformal map, find a sequence of discrete conformal PL-maps which approximate the given map. We study this problem on triangular lattices T with acute angles and always assume for simplicity that the origin is a vertex. Denote by \(\varepsilon T\) the lattice T scaled by \(\varepsilon >0\). Using the values of \(\log |f'|\), we obtain a discrete conformal PL-map \(f^\varepsilon \) on a subcomplex of \(\varepsilon T\) from a boundary value problem for the associated scale factors.
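As a small numerical sketch (with made-up vertex positions and scale factors, not taken from the paper), the edge-length relation (1) and its logarithmic form (2) can be evaluated directly:

```python
import math

def conformal_edge_length(v, w, u):
    """Relation (1): new length |v - w| * e^{(u(v) + u(w))/2} of the edge [v, w]."""
    return abs(v - w) * math.exp((u[v] + u[w]) / 2.0)

# Hypothetical edge with scale factors u(v) = 0 and u(w) = 2 log 2.
v, w = 0 + 0j, 1 + 0j
u = {v: 0.0, w: 2.0 * math.log(2.0)}

new_len = conformal_edge_length(v, w, u)  # e^{(0 + 2 log 2)/2} = 2, so the edge doubles

# Logarithmic form (2): 2 log(new length) = 2 log|v - w| + u(v) + u(w).
lhs = 2.0 * math.log(new_len)
rhs = 2.0 * math.log(abs(v - w)) + u[v] + u[w]
print(new_len, abs(lhs - rhs))
```

Note that the relation is symmetric in the two endpoints, consistent with u being attached to vertices rather than to oriented edges.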
More precisely, we prove the following approximation result.

Theorem 1.2 Let \(f:D\rightarrow \mathbb C\) be a conformal map (i.e. holomorphic with \(f'\not =0\)). Let \(K\subset D\) be a compact set which is the closure of its simply connected interior int(K), and assume that \(0\in int(K)\). Let T be a triangular lattice with strictly acute angles. For each \(\varepsilon >0\) let \(T^\varepsilon _K\) be a subcomplex of \(\varepsilon T\) whose support is contained in K and is homeomorphic to a closed disc. We further assume that 0 is an interior vertex of \(T^\varepsilon _K\). Let \(e_0=[0,{\hat{v}}_0]\in E^\varepsilon _K\) be one of its incident edges.

Then if \(\varepsilon >0\) is small enough (depending on K, f, and T) there exists a unique discrete conformal PL-map \(f^\varepsilon \) on \(T^\varepsilon _K\) which satisfies the following two conditions:

(a) The associated scale factors \(u^\varepsilon :V^\varepsilon _K\rightarrow \mathbb {R}\) satisfy $$\begin{aligned} u^\varepsilon (v)=\log |f'(v)|\qquad \text {for all boundary vertices } v \text { of } V^\varepsilon _K. \end{aligned}$$

(b) The discrete conformal PL-map is normalized according to \(f^\varepsilon (0)=f(0)\) and \(\arg (f^\varepsilon ({\hat{v}}_0)-f^\varepsilon (0))= \arg ({\hat{v}}_0)+ \arg (f'(\frac{{\hat{v}}_0}{2})) \pmod {2\pi }\).

Furthermore, the following estimates for \(u^\varepsilon \) and \(f^\varepsilon \) hold for all vertices \(v\in V^\varepsilon _K\) and all points x in the support of \(T^\varepsilon _K\) respectively, with constants \(C_1,C_2,C_3\) depending only on K, f, and T, but not on v or x:

(i) The scale factors \(u^\varepsilon \) approximate \(\log |f'|\) uniformly with error of order \(\varepsilon ^2\): $$\begin{aligned} \left| u^\varepsilon (v)-\log |f'(v)|\right| \leqslant C_1\varepsilon ^2. \end{aligned}$$

(ii) The discrete conformal PL-mappings \(f^\varepsilon \) converge to f for \(\varepsilon \rightarrow 0\) uniformly with error of order \(\varepsilon \): $$\begin{aligned} \left| f^\varepsilon (x)-f(x)\right| \leqslant C_2\varepsilon . \end{aligned}$$

(iii) The derivatives of \(f^\varepsilon \) (in the interior of the triangles) converge to \(f'\) uniformly for \(\varepsilon \rightarrow 0\) with error of order \(\varepsilon \): $$\begin{aligned} \left| \partial _z f^\varepsilon (x)-f'(x)\right| \leqslant C_3\varepsilon \qquad \text {and} \qquad \left| \partial _{\bar{z}} f^\varepsilon (x)\right| \leqslant C_3\varepsilon \end{aligned}$$ for all points x in the interior of a triangle \(\varDelta \) of \(T^\varepsilon _K\). Here \(\partial _z\) and \(\partial _{\bar{z}}\) denote the Wirtinger derivatives applied to the linear maps \(f^\varepsilon |_\varDelta \).

Note that the subcomplexes \(T^\varepsilon _K\) may be chosen such that they approximate the compact set K. Further notice that estimate (i) implies that \(u^\varepsilon \) converges to \(\log |f'|\) in \(C^1\) with error of order \(\varepsilon \), in the sense that also $$\begin{aligned} \left| \frac{u^\varepsilon (v)-u^\varepsilon (w)}{\varepsilon } - \text {Re}\left( \frac{f''((v+w)/2)}{f'((v+w)/2)}\right) \right| \leqslant {\tilde{C}}\varepsilon \end{aligned}$$ holds on edges [v, w] uniformly for some constant \(\tilde{C}\).

The proof of Theorem 1.2 is given in Sect. 4. The arguments are based on estimates derived in Sect. 3. The problem of actually computing the scale factors u for given boundary values \(u_\partial \), such that u gives rise to a discrete conformal PL-map (in case it exists), can be solved using a variational principle, see [1, 20]. Our proof relies on investigations of the corresponding convex functional, see Theorem 2.2 in Sect. 2.
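The \(C^1\)-type difference-quotient estimate above can be probed numerically. The following is only a sketch with a made-up map \(f(z)=z^3\), and with \(u^\varepsilon \) replaced by the exact boundary data \(\log |f'|\) rather than the solution of the discrete boundary value problem: along a short horizontal edge, the difference quotient of \(\log |f'|\) is close to \(\text {Re}(f''/f')\) at the edge midpoint.

```python
import math

def u(z):
    """u = log|f'(z)| for the sample map f(z) = z^3, whose derivative is 3 z^2."""
    return math.log(abs(3.0 * z * z))

eps = 1e-3
w = 1.0 + 1.0j      # one endpoint of the edge
v = w + eps         # other endpoint: a horizontal edge of length eps
m = (v + w) / 2.0   # edge midpoint

quotient = (u(v) - u(w)) / eps
target = (2.0 / m).real  # f''/f' = 6z / (3 z^2) = 2/z for f(z) = z^3

print(quotient, target)  # nearly equal for small eps
```

Evaluating at the midpoint is what makes the agreement so close here; a one-sided evaluation of \(f''/f'\) would only give an error of order \(\varepsilon \).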
Remark 1.3 The convergence result of Theorem 1.2 also remains true if linear interpolation is replaced with one of the piecewise projective interpolation schemes described in [1, 3], i.e., circumcircle preserving, angle bisector preserving and, generally, exponent-t-center preserving for all \(t\in \mathbb {R}\). The proof is the same with only small adaptations. This is due to the fact that the image of the vertices is the same for all these interpolation schemes, and these image points converge uniformly to the corresponding image points under f with error of order \(\varepsilon \). The estimates for the derivatives similarly follow from Theorem 1.2(i).

1.2 Other Convergence Results for Discrete Conformal Maps

Smooth conformal maps can be characterized in various ways, which leads to different notions of discrete conformality. Convergence issues have already been studied for some of these discrete analogs. We only give a very short overview and cite some results from a growing literature. In particular, linear definitions can be derived as discrete versions of the Cauchy-Riemann equations and have a long and still developing history. Connections of such discrete mappings to smooth conformal functions have been studied for example in [2, 6, 7, 13, 16, 19, 22].

The idea of characterizing conformal maps as local scale-rotations has led to the consideration of circle packings, more precisely to investigations of circle packings with the same (given) combinatorics of the tangency graph. Thurston [21] first conjectured the convergence of circle packings to the Riemann map, which was then proven in [10, 11, 17]. The theory of circle patterns generalizes the case of circle packings. Also, there is a link to integrable structures via isoradial circle patterns. The approximation of conformal maps using circle patterns has been studied in [4, 5, 12, 15, 18]. The approach taken in this article constructs discrete conformal maps from given boundary values.
Our approximation results and some ideas of the proof are therefore similar to those in [4, 5, 18] for circle patterns, which also rely on boundary value problems.

2 Some Characterizations of Associated Scale Factors of Discrete Conformal PL-Maps

Consider a subcomplex \(T_S\) of a triangular lattice T and an arbitrary function \(u:V_S\rightarrow \mathbb {R}\). Assign new lengths to the edges according to (1) by $$\begin{aligned} \tilde{l}([v,w])=|v-w|\text {e}^{(u(v)+u(w))/2}. \end{aligned}$$ In order to obtain new triangles with these lengths (and ultimately a discrete conformal PL-map), the triangle inequalities need to hold for the edge lengths \(\tilde{l}\) on each triangle. If we assume this, we can embed the new triangles (respecting orientation) and immerse sequences of triangles with edge lengths given by \(\tilde{l}\) as in (4).

In order to obtain a discrete conformal PL-map, in particular a local homeomorphism, the interior angles of the triangles need to sum up to \(2\pi \) at each interior vertex. The angle at a vertex of a triangle with given side lengths can be calculated: with the notation of Fig. 1b we have the half-angle formula $$\begin{aligned} \tan \left( \frac{\alpha }{2}\right) = \sqrt{\frac{(-b+a+c)(-c+a+b)}{(b+c-a)(a+b+c)}} = \sqrt{\frac{1-(\frac{b}{a}-\frac{c}{a})^2}{(\frac{b}{a}+\frac{c}{a})^2-1}}. \end{aligned}$$ The last expression emphasizes the fact that the angle does not depend on the scaling of the triangle. Careful considerations of this angle function, depending on the (scaled) side lengths of the triangle, form the basis for our proof.
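As a quick numerical sanity check (a sketch with arbitrary side lengths, not part of the paper), the half-angle formula (5) can be compared with the angle obtained from the law of cosines, and its scale invariance can be observed directly:

```python
import math

def angle_half_angle(a, b, c):
    """Interior angle opposite the side of length a, via the half-angle formula (5)."""
    t = math.sqrt(((-b + a + c) * (-c + a + b)) / ((b + c - a) * (a + b + c)))
    return 2.0 * math.atan(t)

def angle_law_of_cosines(a, b, c):
    """The same angle from a^2 = b^2 + c^2 - 2 b c cos(alpha)."""
    return math.acos((b * b + c * c - a * a) / (2.0 * b * c))

# Arbitrary valid triangle; scaling all sides leaves the angle unchanged.
alpha1 = angle_half_angle(3.0, 4.0, 5.0)
alpha2 = angle_law_of_cosines(3.0, 4.0, 5.0)
alpha3 = angle_half_angle(6.0, 8.0, 10.0)  # scaled copy of the same triangle
print(alpha1, alpha2, alpha3)
```

The agreement of alpha1 and alpha3 reflects exactly the point made after (5): the angle depends only on the ratios b/a and c/a.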
In particular, we define the function $$\begin{aligned} \theta (x,y):=2\arctan \sqrt{\frac{1-(\text {e}^{-x/2} -\text {e}^{-y/2})^2}{(\text {e}^{-x/2} +\text {e}^{-y/2})^2-1}}, \qquad (6) \end{aligned}$$ so (5) can be written as $$\alpha =\theta (x,y)\qquad \text {with}\quad \frac{b}{a}=\text {e}^{-x/2}\ \text { and }\ \frac{c}{a}=\text {e}^{-y/2}.$$ Summing up, we have the following characterization of scale factors associated to discrete conformal PL-maps. Proposition 2.1 Let \(T_S\) be a subcomplex of a triangular lattice T and \(u:V_S\rightarrow \mathbb {R}\) a function satisfying the following two conditions. (i) For every triangle \(\varDelta [v_1,v_2,v_3]\) of \(T_S\) the triangle inequalities for \(\tilde{l}\) defined by (4) hold, in particular $$\begin{aligned} |v_i-v_j|\text {e}^{(u(v_i)+u(v_j))/2}< |v_i-v_k|\text {e}^{(u(v_i)+u(v_k))/2} +|v_j-v_k|\text {e}^{(u(v_j)+u(v_k))/2} \qquad (7) \end{aligned}$$ for all permutations (i, j, k) of (1, 2, 3). (ii) For every interior vertex \(v_0\) with neighbors \(v_1,v_2,\dots ,v_k,v_{k+1}=v_1\) in cyclic order we have $$\begin{aligned} \sum _{j=1}^k \theta (\lambda (v_0,v_j,v_{j+1})+ u(v_{j+1})-u(v_0), \lambda (v_0,v_{j+1},v_j) +u(v_j)-u(v_0))=2\pi , \qquad (8) \end{aligned}$$ where \(\lambda (v_a,v_b,v_c)= 2\log (|v_b-v_c|/|v_a-v_b|)\) for a triangle \(\varDelta [v_a,v_b,v_c]\). Then there is a discrete conformal PL-map (unique up to post-composition with Euclidean motions) such that its associated scale factors are the given function \(u:V_S\rightarrow \mathbb {R}\). Conversely, given a discrete conformal PL-map on a subcomplex \(T_S\) of a triangular lattice T, its associated scale factors \(u:V_S\rightarrow \mathbb {R}\) satisfy conditions (i) and (ii). In order to obtain discrete conformal PL-maps from a given smooth conformal map we will consider a Dirichlet problem for the associated scale factors.
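As a quick sanity check, the angle function from (6) can be implemented directly; it returns \(\pi /3\) for \(x=y=0\) (the equilateral case) and agrees with the law-of-cosines angle of the triangle with sides 1, \(\text {e}^{-x/2}\), \(\text {e}^{-y/2}\). A minimal sketch (function and variable names are ours):

```python
import math

def theta(x, y):
    # The angle function (6): interior angle opposite side a in a triangle
    # with side ratios b/a = exp(-x/2) and c/a = exp(-y/2).
    p, q = math.exp(-x / 2), math.exp(-y / 2)
    return 2.0 * math.atan(math.sqrt((1.0 - (p - q) ** 2) / ((p + q) ** 2 - 1.0)))

# Cross-check against the law of cosines for a = 1, b = e^{-x/2}, c = e^{-y/2}:
x, y = 0.3, -0.2
b, c = math.exp(-x / 2), math.exp(-y / 2)
alpha = math.acos((b * b + c * c - 1.0) / (2 * b * c))
```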
Therefore we will apply a theorem from [1] which characterizes the scale factors u for given boundary values using a variational principle for a functional E defined in [1, Sect. 4]. Note that we will not need the exact expression for E but only the formula for its partial derivatives. In fact, the vanishing of these derivatives is equivalent to the necessary condition (8) for the scale factors to correspond to a discrete conformal PL-map. Theorem 2.2 ([1]) Let \(T_S\) be a subcomplex of a triangular lattice and let \(u_\partial :V_\partial \rightarrow \mathbb {R}\) be a function on the boundary vertices \(V_\partial \) of \(T_S\). Then the solution \(\tilde{u}\) (if it exists) of Eq. (8) at all interior vertices with \({\tilde{u}}|_{V_\partial }=u_\partial \) is the unique argmin of a locally strictly convex functional \(E(u)=E_{T_S}(u)\) which is defined for functions \(u:V\rightarrow \mathbb {R}\) satisfying the inequalities (7). The partial derivative of E with respect to \(u_i=u(v_i)\) at an interior vertex \(v_i\in V_{int}\) with k neighbors \(v_{i_1},v_{i_2},\dots ,v_{i_k},v_{i_{k+1}}=v_{i_1}\) in cyclic order is $$\begin{aligned} \frac{\partial E}{\partial u_i}(u) = 2\pi - \sum _{j=1}^k \theta (2\log \left( \frac{l_{i_{j+1},i_j}}{l_{i,i_{j+1}}}\right) + u_{i_j}- u_i, 2\log \left( \frac{l_{i_{j+1},i_j}}{l_{i,i_{j}}}\right) + u_{i_{j+1}} -u_i), \end{aligned}$$ where \(l_{j,k}=|v_j-v_k|\). By Proposition 2.1 such a solution \(\tilde{u}\) then gives the scale factors associated to a discrete conformal PL-map. The functional E can be extended to a convex continuously differentiable function on \(\mathbb {R}^V\), see [1] for details. 3 Taylor Expansions We now examine the effect of taking \(u=\log |f'|\) as 'scale factors', i.e. for each triangle we multiply the length \(|v-w|\) of an edge [v, w] by the geometric mean \(\sqrt{|f'(v)f'(w)|}\) of \(|f'|\) at the vertices.
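The partial-derivative formula above is straightforward to evaluate numerically. In the sketch below (our own helper names), the derivative vanishes at \(u\equiv 0\) on a regular hexagonal star, as it must since the identity is discretely conformal; raising u at the centre alone makes the derivative positive, consistent with the local strict convexity of E.

```python
import cmath, math

def theta(x, y):
    # Angle function (6).
    p, q = math.exp(-x / 2), math.exp(-y / 2)
    return 2.0 * math.atan(math.sqrt((1.0 - (p - q) ** 2) / ((p + q) ** 2 - 1.0)))

def dE_du0(v0, ring, u0, u_ring):
    # Partial derivative of E at an interior vertex v0 with neighbours `ring`
    # (cyclic order) and scale factors u0, u_ring, following the formula above.
    k = len(ring)
    s = 0.0
    for j in range(k):
        vj, vj1 = ring[j], ring[(j + 1) % k]
        uj, uj1 = u_ring[j], u_ring[(j + 1) % k]
        l_opp = abs(vj1 - vj)               # edge opposite v0
        s += theta(2 * math.log(l_opp / abs(v0 - vj1)) + uj - u0,
                   2 * math.log(l_opp / abs(v0 - vj)) + uj1 - u0)
    return 2 * math.pi - s

v0 = 0j
ring = [cmath.exp(1j * j * math.pi / 3) for j in range(6)]  # regular hexagonal star
grad_flat = dE_du0(v0, ring, 0.0, [0.0] * 6)   # u == 0: angles sum to 2*pi
grad_up = dE_du0(v0, ring, 0.1, [0.0] * 6)     # raise u at the centre only
```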
The proof of Theorem 1.2 is based on the idea that \(u=\log |f'|\) almost satisfies the conditions for being the associated scale factors of a discrete conformal PL-map, that is conditions (i) and (ii) of Proposition 2.1, and therefore is close to the exact solution \(u^\varepsilon \). To be precise, suppose that \(\varepsilon T\) is the equilateral triangulation of the plane. Assume without loss of generality that the edge lengths equal \(\frac{\sqrt{3}}{2}\varepsilon >0\) and the edges are parallel to \(\text {e}^{ij\pi /3}\) for \(j=0,1,\dots , 5\). Let the conformal function f, the compact set K, and the subcomplexes \(T^\varepsilon _K\) (with vertices \(V^\varepsilon _K\) and edges \(E^\varepsilon _K\)) be given as in Theorem 1.2. Let \(v_0\in V^\varepsilon _{K, \text {int}}\) be an interior vertex. Here and below \(V^\varepsilon _{K,\text {int}}\) denotes the set of interior vertices having six neighbors in \(V^\varepsilon _K\). Denote the neighbors of \(v_0\) by \(v_j= v_0+\varepsilon \frac{\sqrt{3}\text {e}^{ij\frac{\pi }{3}}}{2}\) and consider the triangle \(\varDelta _j= \varDelta [v_0,v_j,v_{j+1}]\) for some \(j\in \{0,1,\dots , 5\}\). Taking \(u=\log |f'|\), we obtain edge lengths of a new triangle \({{\tilde{\varDelta }}}_j\), i.e. lengths satisfying (7), if \(\varepsilon \) is small enough. Then the angle in \({\tilde{\varDelta }}_j\) at the image vertex of \(v_0\) is given by $$ \theta (\log |f'(v_0+\varepsilon \frac{\sqrt{3}\text {e}^{ij\frac{\pi }{3}}}{2})| -\log |f'(v_0)|,\, \log |f'(v_0+\varepsilon \frac{\sqrt{3}\text {e}^{i(j+1)\frac{\pi }{3}}}{2})| -\log |f'(v_0)|) $$ according to (6).
Summing up these angles—that is inserting \(\log |f'|\) into (8) instead of u at an interior vertex \(v_0\in V^\varepsilon _{K, \text {int}}\)—we obtain the function $$\begin{aligned}&\mathcal{S}_{v_0}(\varepsilon )= \\&\sum _{j=0}^5 \theta (\log |f'(v_0+\varepsilon \frac{\sqrt{3}\text {e}^{ij\frac{\pi }{3}}}{2})| -\log |f'(v_0)|,\, \log |f'(v_0+\varepsilon \frac{\sqrt{3}\text {e}^{i(j+1)\frac{\pi }{3}}}{2})| -\log |f'(v_0)|). \end{aligned}$$ We are interested in the Taylor expansion of \(\mathcal{S}_{v_0}\) in \(\varepsilon \). The symmetry of the lattice T implies that \(\mathcal{S}_{v_0}\) is an even function, so the expansion contains only even powers of \(\varepsilon \). Using a computer algebra program we arrive at $$\begin{aligned} \mathcal{S}_{v_0}(\varepsilon )= 2\pi + C_{v_0}\varepsilon ^4 +\mathscr {O}(\varepsilon ^6). \qquad (10) \end{aligned}$$ Here and below, the notation \(h(\varepsilon )=\mathscr {O}(\varepsilon ^n)\) means that there is a constant \(\mathcal C\), such that \(|h(\varepsilon )|\leqslant \mathcal{C}\varepsilon ^n\) holds for all small enough \(\varepsilon >0\). The constant of the \(\varepsilon ^4\)-term is $$C_{v_0}= -\frac{3\sqrt{3}}{32} \text {Re}\left( S(f)(v_0) \overline{\left( { \frac{f''}{f'}}\right) '}(v_0)\right),$$ where \(S(f)=\left( \frac{f''}{f'}\right) ' -\frac{1}{2} \left( \frac{f''}{f'}\right) ^2\) is the Schwarzian derivative of f. We will not need the exact form of this constant, but only the fact that it is bounded on K. Analogous results to (10) hold for all triangular lattices \(\varepsilon T\) with edge lengths \(a^\varepsilon =\varepsilon \sin \alpha \), \(b^\varepsilon =\varepsilon \sin \beta \), \(c^\varepsilon =\varepsilon \sin \gamma \), even if some angles are larger than \(\pi /2\). We assume without loss of generality that the edge directions are parallel to 1, \(\text {e}^{i\alpha }\) and \(\text {e}^{i(\alpha +\beta )}\).
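The fourth-order behaviour in (10) can also be checked numerically, without computer algebra: rescale the edges of the hexagonal star around \(v_0\) with \(u=\log |f'|\), sum the six angles at \(v_0\), and compare the deviation from \(2\pi \) for two mesh sizes. In this sketch (our own test function \(f(z)=z^3\) at \(v_0=1\), away from the critical point) the deviation shrinks by a factor of about \(2^4=16\) when \(\varepsilon \) is halved, confirming that the \(\varepsilon ^2\)-term vanishes.

```python
import cmath, math

def tri_angle(a, b, c):
    # Interior angle opposite side a (law of cosines).
    return math.acos((b * b + c * c - a * a) / (2 * b * c))

def angle_sum(v0, eps, fprime):
    # Sum of the six angles at v0 after rescaling each edge [v, w]
    # by exp((u(v) + u(w))/2) with u = log|f'|.
    u = lambda z: math.log(abs(fprime(z)))
    lt = lambda v, w: abs(v - w) * math.exp((u(v) + u(w)) / 2)
    ring = [v0 + eps * (math.sqrt(3) / 2) * cmath.exp(1j * j * math.pi / 3)
            for j in range(6)]
    return sum(tri_angle(lt(ring[j], ring[(j + 1) % 6]),
                         lt(v0, ring[j]), lt(v0, ring[(j + 1) % 6]))
               for j in range(6))

fprime = lambda z: 3 * z * z          # f(z) = z^3 (our test function)
v0 = 1.0 + 0.0j
d1 = angle_sum(v0, 0.02, fprime) - 2 * math.pi
d2 = angle_sum(v0, 0.01, fprime) - 2 * math.pi
ratio = d1 / d2                       # close to 2**4 = 16 for a fourth-order term
```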
Arguing as above, we consider the function $$\begin{aligned} \mathcal{S}_{v_0}(\varepsilon ) =&\quad \theta ( 2\log \frac{\sin \alpha }{\sin \gamma } +\log |\frac{f'(v_0+\varepsilon \sin \beta )}{f'(v_0)}|,\, 2\log \frac{\sin \alpha }{\sin \beta }+ \log |\frac{f'(v_0+\varepsilon \sin \gamma \text {e}^{i \alpha })}{f'(v_0)}|) \\&+ \theta ( 2\log \frac{\sin \beta }{\sin \alpha }+ \log |\frac{f'(v_0+\varepsilon \sin \gamma \, \text {e}^{i \alpha })}{f'(v_0)}|, \, 2\log \frac{\sin \beta }{\sin \gamma }+ \log |\frac{f'(v_0+\varepsilon \sin \alpha \, \text {e}^{i (\alpha +\beta )})}{f'(v_0)}|) \\&+ \theta ( 2\log \frac{\sin \gamma }{\sin \beta }+ \log |\frac{f'(v_0+\varepsilon \sin \alpha \, \text {e}^{i (\alpha +\beta )})}{f'(v_0)}|,\, 2\log \frac{\sin \gamma }{\sin \alpha }+\log |\frac{f'(v_0-\varepsilon \sin \beta )}{f'(v_0)}|) \\&+ \theta ( 2\log \frac{\sin \alpha }{\sin \gamma }+ \log |\frac{f'(v_0-\varepsilon \sin \beta )}{f'(v_0)}|, \, 2\log \frac{\sin \alpha }{\sin \beta }+ \log |\frac{f'(v_0-\varepsilon \sin \gamma \, \text {e}^{i \alpha })}{f'(v_0)}|) \\&+ \theta ( 2\log \frac{\sin \beta }{\sin \alpha }+ \log |\frac{f'(v_0-\varepsilon \sin \gamma \, \text {e}^{i \alpha })}{f'(v_0)}|,\, 2\log \frac{\sin \beta }{\sin \gamma }+ \log |\frac{f'(v_0-\varepsilon \sin \alpha \, \text {e}^{i (\alpha +\beta )})}{f'(v_0)}|) \\&+ \theta ( 2\log \frac{\sin \gamma }{\sin \beta }+ \log |\frac{f'(v_0-\varepsilon \sin \alpha \, \text {e}^{i (\alpha +\beta )})}{f'(v_0)}|,\, 2\log \frac{\sin \gamma }{\sin \alpha }+ \log |\frac{f'(v_0+\varepsilon \sin \beta )}{f'(v_0)}|). \end{aligned}$$ Again, \(\mathcal{S}_{v_0}\) is an even function.
Using a computer algebra program we arrive at $$\begin{aligned} \mathcal{S}_{v_0}(\varepsilon )= 2\pi + C_{v_0}\varepsilon ^4 +\mathscr {O}(\varepsilon ^6), \end{aligned}$$ with corresponding constant $$\begin{aligned}&C_{v_0}= -\frac{\sin \alpha \sin \beta \sin \gamma }{4}\; \text {Re}\left( S(f)(v_0) \overline{\left( \frac{f''}{f'}\right) '}(v_0)\right. \\&\qquad \qquad \qquad \qquad \qquad \left. +c(\alpha ,\beta ,\gamma ) \left( \frac{1}{2}\left( \frac{f''}{f'}\right) ^2 \left( \frac{f''}{f'}\right) ' -\frac{1}{3}\left( \frac{f''}{f'}\right) ''' \right) \right) , \end{aligned}$$ where \(c(\alpha ,\beta ,\gamma )=\cos \beta \sin ^3\beta +\cos \gamma \sin ^3\gamma \text {e}^{2i\alpha } +\cos \alpha \sin ^3\alpha \text {e}^{2i(\alpha +\beta )}\). Our key observation is that we can control the sign of the \(\mathscr {O}(\varepsilon ^4)\)-term in (10) if we replace \(\log |f'(x)|\) by \(\log |f'(x)|+a\varepsilon ^2|x|^2\), where \(a\in \mathbb {R}\) is some suitable constant. In particular, for positive constants \(M^\pm ,C^\pm \) consider the functions $$\begin{aligned} w^\pm&= \log |f'| +q^\pm&\text { with } q^\pm (v)&={\left\{ \begin{array}{ll}\pm \varepsilon ^2(M^\pm -C^\pm |v|^2) &{} \text {for }v\in V^\varepsilon _{K,\text {int}}, \\ 0 &{} \text {for }v\in \partial V^\varepsilon _K. \end{array}\right. } \end{aligned}$$ Here and below \(\partial V^\varepsilon _K\) denotes the set of boundary vertices of \(V^\varepsilon _K\). 
Then we obtain for equilateral triangulations with edge length \(\frac{\sqrt{3}}{2}\varepsilon \) the following Taylor expansion for all interior vertices \(v_0\in V^\varepsilon _{K,\text {int}}\) whose neighbors are also in \(V^\varepsilon _{K,\text {int}}\): $$\begin{aligned}&\sum _{j=0}^5 \theta (w^\pm (v_0+\varepsilon \frac{\sqrt{3}}{2} \text {e}^{ij\frac{\pi }{3}}) -w^\pm (v_0), w^\pm (v_0+\varepsilon \frac{\sqrt{3}}{2} \text {e}^{i(j+1)\frac{\pi }{3}}) -w^\pm (v_0)) \nonumber \\&\qquad \qquad \qquad \qquad \qquad = 2\pi + (C_{v_0}\mp \frac{3\sqrt{3}}{2}C^\pm )\varepsilon ^4 +\mathscr {O}(\varepsilon ^5). \qquad (12) \end{aligned}$$ Again, analogous results hold for all regular triangular lattices, where the corresponding \(\mathscr {O}(\varepsilon ^4)\)-term then is $$\begin{aligned} C_{v_0}\mp 4 \sin \alpha \sin \beta \sin \gamma \; C^\pm . \end{aligned}$$ For interior vertices \(v_0\in V^\varepsilon _{K,\text {int}}\) which are incident to k boundary vertices we obtain instead of the right-hand side of (12): $$ 2\pi \mp k\frac{\sqrt{3}}{4} (M^\pm -C^\pm |v_0|^2)\varepsilon ^2 +\mathscr {O}(\varepsilon ^4). $$ For general triangular lattices we get for every edge \(e=[v_0,v_j]\) which is incident to a boundary vertex \(v_j \in \partial V^{\varepsilon }_K\) a term \(\mp (M^\pm - C^\pm |v_0|^2)\cos \varphi _e\sin \varphi _e\, \varepsilon ^2\), where \(\varphi _e\) is the angle opposite to the edge e, see Fig. 3. Fig. 3 Two adjacent triangles of the triangular lattice \(\varepsilon T\) and orthogonal edges \(e\in (\varepsilon E)\) (solid) and \(e^*\in (\varepsilon E^{*})\) (dashed). The following lemma summarizes the main properties of \(w^\pm \) which follow from the definition of \(w^\pm \) together with the preceding estimates. Lemma 3.1 \(w^\pm \) satisfies the boundary condition \(w^\pm |_{\partial V^\varepsilon _K} = \log |f'| \big |_{\partial V^\varepsilon _K}\).
Furthermore, \(C^\pm >0\) and \(M^\pm >0\) can be chosen such that for all \(\varepsilon \) small enough and all \(v_0\in V^\varepsilon _{K, \text {int}}\): \(q^+(v_0)>0\) and \(q^-(v_0)<0\). If \(v_1, v_2,\dots , v_6,v_7=v_1\) denote the chain of neighboring vertices of \(v_0\) in cyclic order and \(\lambda (v_a,v_b,v_c)= 2\log (|v_b-v_c|/|v_a-v_b|)\) for any triangle \(\varDelta [v_a,v_b,v_c]\), we have $$\begin{aligned} \sum _{j=1}^6 \theta (&\lambda (v_0,v_{j+1},v_j) +w^+(v_{j})- w^+(v_0), \lambda (v_0,v_j,v_{j+1}) +w^+(v_{j+1})-w^+(v_0)) < 2\pi ,\\ \sum _{j=1}^6 \theta (&\lambda (v_0,v_{j+1},v_j) +w^-(v_{j})- w^-(v_0),\lambda (v_0,v_j,v_{j+1}) +w^-(v_{j+1})-w^-(v_0)) > 2\pi . \end{aligned}$$ The choices of \(C^\pm \) and \(M^\pm \) depend only on f (and its derivatives), on K, and on the angles of the triangular lattice T. In analogy to the continuous case we interpret Eq. (8) as a non-linear Laplace equation for u. In this spirit \(w^+\) may be regarded as a superharmonic function and \(w^-\) as a subharmonic function. 4 Existence of Discrete Conformal PL-Maps and Estimates The functions \(w^\pm \) have been introduced in order to 'catch' the solution \(u^\varepsilon \) in the following compact set: $$\begin{aligned}&W^\varepsilon =\{u:V^\varepsilon _K\rightarrow \mathbb {R}\ |\ u(v) = \log |f'(v)| \text { for all } v\in \partial V^\varepsilon _K,\\&\qquad \qquad \qquad \qquad \qquad w^-(v)\leqslant u(v)\leqslant w^+(v) \text { for all } v\in V^\varepsilon _{K, \text {int}} \}. \end{aligned}$$ Note that \(W^\varepsilon \) is an n-dimensional interval in \(\mathbb {R}^n\), where \(n=|V^\varepsilon _K|\) is the number of vertices, if we identify a function \(u:V^\varepsilon _K\rightarrow \mathbb {R}\) with the vector of its values \(u(v_i)\). Also, for neighboring vertices \(v_i\sim v_j\) and \(u\in W^\varepsilon \) we have \(u(v_j)-u(v_i)=\mathscr {O}(\varepsilon )\). Therefore, \(u\in W^\varepsilon \) satisfies the triangle inequalities (7) if \(\varepsilon \) is small enough.
Our aim is to show that for \(\varepsilon \) small enough there exists a function \(u^\varepsilon \) satisfying conditions (i) and (ii) of Proposition 2.1 and \(u^\varepsilon (v)=\log |f'(v)|\) for all boundary vertices \(v\in \partial V^\varepsilon _K\). This function then defines a discrete conformal PL-map \(f^\varepsilon \) (uniquely if we use the normalization of Theorem 1.2). Theorem 4.1 Assume that all angles of the triangular lattice T are strictly smaller than \(\pi /2\). There is an \(\varepsilon _0>0\) (depending on f, K and the triangulation parameters) such that for all \(0<\varepsilon <\varepsilon _0\) the minimum of the functional E (see Theorem 2.2) with boundary conditions (2) is attained in \(W^\varepsilon \). Corollary 4.2 For all \(0<\varepsilon <\varepsilon _0\) there exists a discrete conformal PL-map on \(T^\varepsilon _K\) whose associated scale factors satisfy the boundary conditions (2). The proof of Theorem 4.1 follows from Lemma 4.4 below. It is based on Theorem 2.2 and on monotonicity estimates of the angle function \(\theta (x,y)\) defined in (6). It is only here that we need the assumption that all angles of the triangular lattice T are strictly smaller than \(\pi /2\). Lemma 4.3 (Monotonicity lemma) Consider the star of a vertex \(v_0\) of a triangular lattice T and its neighboring vertices \(v_1,\dots , v_6,v_7=v_1\) in cyclic order. Denote \(\lambda _{0,k}:=2\log (|v_{k+1}-v_{k}|/|v_{0}-v_{k}|)\). Assume that all triangles \(\varDelta (v_0,v_k,v_{k+1})\) are strictly acute angled, i.e. all angles are \(<\pi /2\).
Then there exists \(\eta _0>0\), depending on the \(\lambda \)s, such that for all \(0\leqslant \eta _1,\dots ,\eta _6<\eta _0\), \(\eta _7=\eta _1\), there holds $$\sum _{k=1}^6 \theta (\lambda _{0,k}+\eta _{k}, \lambda _{0,k+1}+\eta _{k+1})\geqslant \sum _{k=1}^6 \theta (\lambda _{0,k}, \lambda _{0,k+1}),$$ and for all \(0\geqslant \eta _1,\dots ,\eta _6>-\eta _0\), \(\eta _7=\eta _1\), we have $$\sum _{k=1}^6 \theta (\lambda _{0,k}+\eta _{k}, \lambda _{0,k+1}+\eta _{k+1})\leqslant \sum _{k=1}^6 \theta (\lambda _{0,k}, \lambda _{0,k+1}).$$ Proof First, consider a single acute angled triangle. Observe that with the notation of Fig. 1b: $$\begin{aligned} \frac{\partial \beta }{\partial a} = -\frac{1}{a}\cot \gamma . \end{aligned}$$ Thus, we easily deduce that $$\left. \frac{\partial }{\partial \varepsilon } \theta (2\log (\frac{a}{c})+\varepsilon ,2\log (\frac{a}{b}))\right| _{\varepsilon =0}= \frac{1}{2} \cot \gamma .$$ Now the claim follows by Taylor expansion. \(\square \) Lemma 4.4 There is an \(\varepsilon _0>0\) such that for all \(0<\varepsilon <\varepsilon _0\) the negative gradient \(-\text {grad}(E)\) on the boundary of \(W^\varepsilon \) points into the interior of \(W^\varepsilon \). Proof For notational simplicity, set \(u_k=u(v_k)\), \(w_k^\pm =w^\pm (v_k)\) for vertices \(v_k\in V^\varepsilon _K\) and \(\lambda _{a,b,c}=2\log (|v_b-v_c|/|v_a-v_b|)\). Consider \(\text {grad} (E)\) on a boundary face \(W_i^+=\{u\in W^\varepsilon : u_i=w_i^+\}\) of the n-dimensional interval \(W^\varepsilon \). Let \(v_1,\dots , v_6,v_7=v_1\) denote the neighbors of \(v_i\) in cyclic order. Note that \(w_j^+-w_j^-=\varepsilon ^2(M^++M^--(C^++C^-)|v_j|^2)\) for all vertices \(v_j\). As K is compact we may assume that \(\varepsilon _0>0\) is such that \(w_j^+-w_j^-\leqslant \varepsilon \) for \(0<\varepsilon <\varepsilon _0\).
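As an aside, the derivative identity \(\frac{\partial \beta }{\partial a} = -\frac{1}{a}\cot \gamma \) used in the proof of the monotonicity lemma (sides b and c held fixed) is easily confirmed by a central finite difference; a minimal sketch with illustrative side lengths:

```python
import math

def beta(a, b, c):
    # Angle opposite side b (law of cosines).
    return math.acos((a * a + c * c - b * b) / (2 * a * c))

def gamma(a, b, c):
    # Angle opposite side c.
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

a, b, c = 1.1, 1.0, 0.9        # an acute triangle (sides chosen for illustration)
h = 1e-6
fd = (beta(a + h, b, c) - beta(a - h, b, c)) / (2 * h)   # central difference in a
exact = -math.cos(gamma(a, b, c)) / (a * math.sin(gamma(a, b, c)))
```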
Then using the properties of \(w^+\) and u we obtain from Lemmas 4.3 and 3.1 $$\begin{aligned} \frac{\partial E}{\partial u_i}(u)&= 2\pi - \sum _{j=0}^5 \theta (\lambda _{i,{j+1},j} +\underbrace{u_j- \underbrace{u_i}_{=w_i^+}}_{\leqslant w_j^+-w_i^+} ,\lambda _{i,j,{j+1}} + \underbrace{u_{j+1} -\underbrace{u_i}_{=w_i^+}}_{\leqslant w_{j+1}^+-w_i^+})\\&\geqslant 2\pi - \sum _{j=0}^5 \theta (\lambda _{i,{j+1},j} +w_j^+-w_i^+, \lambda _{i,j,{j+1}} +w_{j+1}^+-w_i^+) \\&> 0. \end{aligned}$$ An analogous estimate holds for boundary faces \(W_i^-\). \(\square \) We are now ready to deduce our convergence theorem. Proof (of Theorem 1.2) The existence part follows from Theorem 4.1. The uniqueness is obvious as the translational and rotational freedom of the image of \(f^\varepsilon \) is fixed using values of f. We now deduce the remaining estimates. Part (i): Together with the definition of \(w^\pm \), Theorem 4.1 implies that for \(\varepsilon >0\) small enough and all vertices \(v\in V^\varepsilon _K\) $$\begin{aligned}&-\varepsilon ^2(M^--C^-|v|^2) \\&\qquad \leqslant w^-(v) -\log |f'(v)| \leqslant u^\varepsilon (v)-\log |f'(v)|\leqslant w^+(v) -\log |f'(v)|\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \leqslant \varepsilon ^2(M^+-C^+|v|^2). \end{aligned}$$ As K is compact, this implies estimate (3). Part (ii): Given the scale factors \(u^\varepsilon \) associated to the discrete conformal PL-map \(f^\varepsilon \) on \(T^\varepsilon _K\), we can in every image triangle determine the interior angles (using for example (5)). We begin by deducing from estimate (3) the change of these interior angles. Recall that for acute angled triangles the center of the circumcircle lies in the interior of the triangle. Joining these centers for incident triangles leads to an embedded regular graph \(\varepsilon T^*=(\varepsilon V^*,\varepsilon E^*)\) which is dual to the given triangular lattice \(\varepsilon T\).
In particular, the vertices \(\varepsilon V^*\) are identified with the centers of the circumcircles of the triangles of \(\varepsilon T\). Furthermore, each edge \(e^*\in (\varepsilon E^*)\) intersects exactly one edge \(e\in (\varepsilon E)\) orthogonally, so e and \(e^*\) are dual, see Fig. 3. Consider an edge \(e=[v_1,v_2]\in E^\varepsilon _K\) with dual edge \(e^*=[c_1,c_2]\). Their lengths are related by \(|c_2-c_1|= |v_2-v_1|\cot \varphi _e\), where \(\varphi _e\) denotes the angle opposite to e in \(\varepsilon T\). Furthermore we obtain $$\begin{aligned} \cot \varphi _e(\log |f'(v_2)|-\log |f'(v_1)|)&=\cot \varphi _e\text {Re}((\log f')'(v_1)(v_2-v_1)) +\mathscr {O}(\varepsilon ^2) \nonumber \\&= \cot \varphi _e\text {Im}((\log f')'(v_1)i(v_2-v_1)) +\mathscr {O}(\varepsilon ^2)\nonumber \\&=\text {Im}((\log f')'(v_1)(c_2-c_1)) +\mathscr {O}(\varepsilon ^2) \nonumber \\&=2\text {Im}((\log f')'(v_1)(c_2-v_1)) \nonumber \\&\quad +2\text {Im}((\log f')'(v_1)(v_1-\frac{c_2+c_1}{2})) +\mathscr {O}(\varepsilon ^2) \nonumber \\&= 2\arg f'(c_2)-2\arg f'(\underbrace{\frac{c_2+c_1}{2}}_{=\frac{v_2+v_1}{2}}) +\mathscr {O}(\varepsilon ^2) \qquad (13) \end{aligned}$$ $$\begin{aligned}&=2\arg f'(\frac{v_2+v_1}{2}) - 2\arg f'(c_1)+\mathscr {O}(\varepsilon ^2), \qquad (14) \end{aligned}$$ where we have chosen the notation such that \((v_2-v_1)i=(c_2-c_1)\tan \varphi _e\). Now we estimate the change of the angles in a triangle of \(T^\varepsilon _K\) compared with its image triangle under \(f^\varepsilon \). Consider a triangle \(\varDelta [v_0,v_1,v_2]\) and denote \(e_1=[v_0,v_1]\) and \(e_2=[v_0,v_2]\). Denote the angle at \(v_0\) by \(\theta _0=\theta (\lambda _1,\lambda _2)\), where \(l_{e_{j}}=|v_j-v_0|\) and \(\lambda _j=2\log (|v_1-v_2|/l_{e_{j+1}})\) for \(j=1,2\) and \(e_3=e_1\).
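The relation \(|c_2-c_1|=|v_2-v_1|\cot \varphi _e\) between an edge and its dual, and their orthogonality, can be checked numerically for a pair of point-symmetric acute triangles (an illustrative configuration of our choosing):

```python
import math

def circumcenter(p, q, r):
    # Circumcentre of the triangle (p, q, r), points given as complex numbers.
    ax, ay, bx, by, cx, cy = p.real, p.imag, q.real, q.imag, r.real, r.imag
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return complex(ux, uy)

v1, v2, v3 = 0j, 1 + 0j, 0.4 + 0.8j      # an acute triangle
v4 = v1 + v2 - v3                        # its point reflection across the shared edge
c1, c2 = circumcenter(v1, v2, v3), circumcenter(v1, v2, v4)
# Angle opposite the shared edge e = [v1, v2]:
cosphi = ((abs(v3 - v1) ** 2 + abs(v3 - v2) ** 2 - abs(v2 - v1) ** 2)
          / (2 * abs(v3 - v1) * abs(v3 - v2)))
phi = math.acos(cosphi)
```

The dual edge \([c_1,c_2]\) comes out orthogonal to \([v_1,v_2]\) with length \(|v_2-v_1|\cot \varphi \).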
Consider the Taylor expansion $$\theta (\lambda _1 +x_1\varepsilon ,\lambda _2 +x_2\varepsilon )= \theta _0 +\varepsilon ( \frac{\cot \varphi _{e_1}}{2}x_1 +\frac{\cot \varphi _{e_2}}{2}x_2) + \mathscr {O}(\varepsilon ^2).$$ We apply this estimate to the bounded terms $$x_j= \frac{u^\varepsilon (v_j)-u^\varepsilon (v_0)}{\varepsilon } = \frac{\log |f'(v_j)|-\log |f'(v_0)|}{\varepsilon } +\mathscr {O}(\varepsilon )$$ for \(j=1,2\). Denote by \(\delta +\theta _0\in (0,\pi )\) the angle at the image point of \(v_0\) in the image triangle \(f^\varepsilon (\varDelta [v_0,v_1,v_2])\). Then by (13) and (14) the change of angle \(\delta \) is given by $$\begin{aligned} \delta = \arg f'(\frac{v_2+v_0}{2})-\arg f'(\frac{v_0+v_1}{2}) + \mathscr {O}(\varepsilon ^2). \qquad (15) \end{aligned}$$ This local change of angles is related to the angle \(\psi ^\varepsilon (e)\) by which each edge e of \(T^\varepsilon _K\) has to be rotated to obtain the corresponding image edge \(f^\varepsilon (e)\) (or, more precisely, a parallel edge). The function \(\psi ^\varepsilon \) may be defined globally on \(E^\varepsilon _K\) such that in the above notation the change of the angle at \(v_0\) is given as \(\delta = \psi ^\varepsilon (e_2)-\psi ^\varepsilon (e_1) \in (-\pi ,\pi )\). We fix the value of \(\psi ^\varepsilon \), that is the rotational freedom of the image of \(T^\varepsilon _K\) under \(f^\varepsilon \), at the edge \(e_0\) according to \(\arg f'\), see Theorem 1.2. Then we take shortest simple paths and deduce from (15) that each edge \(e=[v_j,v_{j+1}]\in E^\varepsilon _K\) is rotated counterclockwise by $$\psi ^\varepsilon (e)= \arg f'(\frac{v_j+v_{j+1}}{2}) + \mathscr {O}(\varepsilon ).$$ This implies together with (3) that for all edges \(e=[v_j,v_{j+1}]\in E^\varepsilon _K\) we have uniformly $$\begin{aligned} \log f'(\frac{v_j+v_{j+1}}{2}) - \frac{u^\varepsilon (v_j)+u^\varepsilon (v_{j+1})}{2} -i\psi ^\varepsilon (e)=\mathscr {O}(\varepsilon ). \qquad (16)
\end{aligned}$$ Therefore the difference of the smooth and discrete conformal maps at vertices \(v_0\in V^\varepsilon _K\) satisfies uniformly $$\begin{aligned} f(v_0)-f^\varepsilon (v_0)= \mathscr {O}(\varepsilon ) \end{aligned}$$ by suitable integration along shortest simple paths from the reference point as above. This estimate then also holds for all points in the support of \(T^\varepsilon _K\) as \(\varepsilon \rightarrow 0\). Part (iii): As a last step we consider the derivatives of \(f^\varepsilon \) restricted to a triangle. Consider a triangle \(\varDelta [v_0,v_1,v_2]\) in \(T^\varepsilon _K\). As \(f^\varepsilon \) is piecewise linear its restriction to \(\varDelta =\varDelta [v_0,v_1,v_2]\) is the restriction of an \(\mathbb {R}\)-linear map \(L_\varDelta \). This map can be written for \(z\in \mathbb C\) as $$\begin{aligned} L_\varDelta (z)= f^\varepsilon (v_0) + a\cdot (z-v_0) +b\cdot \overline{(z-v_0)}, \end{aligned}$$ where the constants \(a,b\in \mathbb C\) are determined from the conditions \(L_\varDelta (v_j)= f^\varepsilon (v_j)\) for \(j=0,1,2\). Straightforward calculation gives $$\begin{aligned} \partial _z L_\varDelta&= a= \frac{(f^\varepsilon (v_2)-f^\varepsilon (v_0)) \overline{(v_1-v_0)} -(f^\varepsilon (v_1)-f^\varepsilon (v_0)) \overline{(v_2-v_0)}}{\overline{(v_1-v_0)}(v_2-v_0) -(v_1-v_0)\overline{(v_2-v_0)}} \\ \partial _{\bar{z}} L_\varDelta&= b= \frac{(f^\varepsilon (v_1)-f^\varepsilon (v_0))(v_2-v_0) -(f^\varepsilon (v_2)-f^\varepsilon (v_0))(v_1-v_0)}{\overline{(v_1-v_0)}(v_2-v_0) -(v_1-v_0)\overline{(v_2-v_0)}}. \end{aligned}$$ Note that by definition of \(f^\varepsilon \) and \(\psi ^\varepsilon \) we know that $$f^\varepsilon (v_j)-f^\varepsilon (v_0)= (v_j-v_0)\text {e}^{(u^\varepsilon (v_j)+u^\varepsilon (v_0))/2 +i\psi ^\varepsilon ([v_j,v_0])},$$ where we use the rotation function \(\psi ^\varepsilon \) on the edges as defined in the previous part (ii) of the proof.
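The Wirtinger derivatives \(a=\partial _z L\) and \(b=\partial _{\bar{z}} L\) can be cross-checked by solving the two interpolation conditions \(L(v_j)-L(v_0)=a(v_j-v_0)+b\overline{(v_j-v_0)}\) directly for a known \(\mathbb {R}\)-linear map. Note that the determinant \(\overline{A}B-A\overline{B}\) is purely imaginary, so conjugating it flips its sign, which makes sign conventions in the closed formulas easy to get wrong. A sketch (variable names are ours):

```python
def wirtinger(v, fv):
    # Solve L(v_j) - L(v_0) = a (v_j - v_0) + b conj(v_j - v_0), j = 1, 2,
    # for the Wirtinger derivatives a and b of the interpolating R-linear map.
    (v0, v1, v2), (f0, f1, f2) = v, fv
    A, B = v1 - v0, v2 - v0
    F1, F2 = f1 - f0, f2 - f0
    D = A * B.conjugate() - A.conjugate() * B   # purely imaginary, nonzero
    a = (F1 * B.conjugate() - F2 * A.conjugate()) / D
    b = (A * F2 - B * F1) / D
    return a, b

# Recover a known R-linear map z -> a0 z + b0 conj(z) + c0:
a0, b0, c0 = 2 - 1j, 0.3 + 0.4j, 5j
L = lambda z: a0 * z + b0 * z.conjugate() + c0
tri = (0.1 + 0.2j, 1.0 + 0.1j, 0.4 + 0.9j)
a, b = wirtinger(tri, tuple(L(z) for z in tri))
```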
Now (16) together with the above expressions for a and b immediately implies the desired estimates $$ \partial _z f^\varepsilon |_\varDelta (z)=\partial _z L_\varDelta (z) =f'(z)+\mathscr {O}(\varepsilon ) \quad \text {and}\quad \partial _{\bar{z}} f^\varepsilon |_\varDelta (z)= \partial _{\bar{z}} L_\varDelta (z)= \mathscr {O}(\varepsilon ) $$ uniformly on the triangle \(\varDelta =\varDelta [v_0,v_1,v_2]\). Also, the constants in the estimate do not depend on the choice of the triangle. This finishes the proof. \(\square \) Theorem 1.2 focuses on a particular way to approximate a given conformal map f by a sequence of discrete conformal PL-maps. Namely, we consider corresponding smooth and discrete Dirichlet boundary value problems and compare the solutions. There is of course a corresponding problem for Neumann boundary conditions, i.e. prescribing angle sums of the triangles at boundary vertices using \(\arg f'\). Also, there is a corresponding variational description for conformally equivalent triangle meshes or discrete conformal PL-maps in terms of angles, see [1]. Unfortunately, the presented methods for a convergence proof do not seem to generalize in a straightforward manner to this case, as the order of the corresponding Taylor expansion is lower. References [1] Bobenko, A.I., Pinkall, U., Springborn, B.: Discrete conformal maps and ideal hyperbolic polyhedra. Geom. Topol. 19, 2155–2215 (2015) [2] Bobenko, A.I., Skopenkov, M.: Discrete Riemann surfaces: linear discretization and its convergence. To appear in J. Reine Angew. Math. (2014) [3] Born, S., Bücking, U., Springborn, B.: Quasiconformal distortion of projective transformations, with an application to discrete conformal maps. arXiv:1505.01341 [math.CV] [4] Bücking, U.: Approximation of conformal mappings by circle patterns and discrete minimal surfaces. Ph.D. thesis, Technische Universität Berlin (2007).
http://opus.kobv.de/tuberlin/volltexte/2008/1764/ [5] Bücking, U.: Approximation of conformal mapping by circle patterns. Geom. Dedicata 137, 163–197 (2008) [6] Chelkak, D., Smirnov, S.: Universality in the 2D Ising model and conformal invariance of fermionic observables. Invent. Math. 189, 515–580 (2012) [7] Courant, R., Friedrichs, K., Lewy, H.: Über die partiellen Differenzengleichungen der mathematischen Physik. Math. Ann. 100, 32–74 (1928). English translation: IBM Journal (1967), 215–234 [8] Gu, X., Guo, R., Luo, F., Sun, J., Wu, T.: A discrete uniformization theorem for polyhedral surfaces II. arXiv:1401.4594 [math.GT] [9] Gu, X., Luo, F., Sun, J., Wu, T.: A discrete uniformization theorem for polyhedral surfaces. arXiv:1309.4175 [math.GT] [10] He, Z.X., Schramm, O.: On the convergence of circle packings to the Riemann map. Invent. Math. 125, 285–305 (1996) [11] He, Z.X., Schramm, O.: The \(C^\infty \)-convergence of hexagonal disk packings to the Riemann map. Acta Math. 180, 219–245 (1998) [12] Lan, S.Y., Dai, D.Q.: The \(C^\infty \)-convergence of SG circle patterns to the Riemann mapping. J. Math. Anal. Appl. 332, 1351–1364 (2007) [13] Lelong-Ferrand, J.: Représentation conforme et transformations à intégrale de Dirichlet bornée. Gauthier-Villars, Paris (1955) [14] Luo, F.: Combinatorial Yamabe flow on surfaces. Commun. Contemp. Math. 6(5), 765–780 (2004) [15] Matthes, D.: Convergence in discrete Cauchy problems and applications to circle patterns. Conform. Geom. Dyn. 9, 1–23 (2005) [16] Mercat, C.: Discrete Riemann surfaces. In: Papadopoulos, A. (ed.) Handbook of Teichmüller Theory, vol. I, pp. 541–575. Eur. Math. Soc., Zürich (2007) [17] Rodin, B., Sullivan, D.: The convergence of circle packings to the Riemann mapping. J. Diff. Geom. 26, 349–360 (1987) [18] Schramm, O.: Circle patterns with the combinatorics of the square grid. Duke Math. J. 86, 347–389 (1997) [19] Skopenkov, M.: The boundary value problem for discrete analytic functions. Adv. Math.
240, 61–87 (2013) [20] Springborn, B., Schröder, P., Pinkall, U.: Conformal equivalence of triangle meshes. ACM Trans. Graph. 27(3) (2008) [21] Thurston, W.P.: The finite Riemann mapping theorem. Invited address at the International Symposium in Celebration of the Proof of the Bieberbach Conjecture, Purdue University (1985) [22] Werness, B.M.: Discrete analytic functions on non-uniform lattices without global geometric control (2014). Preprint Acknowledgements The author would like to thank the anonymous referees for the careful reading of the initial manuscript and various suggestions for improvement. This research was supported by the DFG Collaborative Research Center TRR 109 "Discretization in Geometry and Dynamics". Author: Ulrike Bücking, Inst. für Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany. Editor: Alexander I. Bobenko, Technical University of Berlin, Berlin, Germany. Open Access This chapter is distributed under the terms of the Creative Commons Attribution-Noncommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/) which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. The images or other third party material in this chapter are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material. © 2016 The Author(s). Bücking, U. (2016). Approximation of Conformal Mappings Using Conformally Equivalent Triangular Lattices. In: Bobenko, A.I. (ed.) Advances in Discrete Differential Geometry. Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-662-50447-5_3 Publisher Name: Springer, Berlin, Heidelberg. Online ISBN: 978-3-662-50447-5
On the assimilation of absolute geodetic dynamic topography in a global ocean model: impact on the deep ocean state Alexey Androsov, Lars Nerger, Reiner Schnur, Jens Schröter, Alberta Albertella, Reiner Rummel, Roman Savcenko, Wolfgang Bosch, Sergey Skachko & Sergey Danilov Journal of Geodesy volume 93, pages 141–157 (2019) General ocean circulation models are not perfect. Forced with observed atmospheric fluxes, they gradually drift away from measured distributions of temperature and salinity. We suggest data assimilation of absolute dynamic ocean topography (DOT) observed from space geodetic missions as an option to reduce these differences. Sea surface information of DOT is transferred into the deep ocean by defining the analysed ocean state as a weighted average of an ensemble of fully consistent model solutions using an error-subspace ensemble Kalman filter technique. Success of the technique is demonstrated by assimilation into a global configuration of the ocean circulation model FESOM over 1 year. The dynamic ocean topography data are obtained from a combination of multi-satellite altimetry and geoid measurements. The assimilation result is assessed using an independent temperature and salinity analysis derived from profiling buoys of the Argo float data set. The largest impact of the assimilation occurs at the first few analysis steps, where both the model ocean topography and the steric height (i.e. temperature and salinity) are improved. The continued data assimilation over 1 year further improves the model state gradually. Deep ocean fields quickly adjust in a sustained manner: a model forecast initialized from the model state estimated by the data assimilation after only 1 month shows that improvements induced by the data assimilation remain in the model state for a long time.
Even after 11 months, the modelled ocean topography and temperature fields show smaller errors than the model forecast without any data assimilation. A major task in oceanography is the determination of currents and associated transports of mass and heat. Velocities are difficult to measure directly. However, there is an elegant two-step procedure for their estimation which involves information derived from geodesy. First, using the geostrophic and hydrostatic relationships, the "thermal wind" equations can be derived (Defant 1941; Stommel 1956). They allow the calculation of the vertical velocity shear simply from observed fields of temperature and salinity alone. Vertical integration then yields velocities. The problem has now been reduced to the determination of the remaining integration constant, which varies locally. For this second step, two solutions are available: (a) knowledge of the (full) velocity at some depth or (b)—equivalently—a "geostrophic surface velocity" derived from the slope of the sea surface referenced to the geoid. Making an absolute geodetic surface useful for oceanography has a long tradition. Generations of oceanographers have been searching for a highly accurate reference surface that can be used to convert relative to absolute oceanic velocities and transports (Defant 1941). The concept of a "level of no motion" (Stommel 1956) is only a convenient approximation in this context, assuming the velocity becomes zero at this level. However, it is relatively inaccurate and cannot be applied in areas such as the Southern Ocean. Alternatively, "baroclinic transports" relative to zero bottom velocity have been used (Rintoul and Sokolov 2001). Other approaches used data assimilation and inverse modelling to determine absolute velocities (e.g. Wunsch 1978). The concept of using geodetic information simultaneously with oceanic data in a joint estimation process was introduced decades ago (Wunsch and Gaposchkin 1980).
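The two steps of this procedure can be written out explicitly. With gravity g, Coriolis parameter f and the Boussinesq reference density \(\rho_0\), a standard textbook sketch (not quoted verbatim from the paper) of the thermal wind relations and of the geostrophic surface velocity obtained from the sea surface slope relative to the geoid reads:

```latex
% Thermal wind: vertical shear of the horizontal velocity from the density field alone
\frac{\partial u}{\partial z} = \frac{g}{f\rho_0}\,\frac{\partial \rho}{\partial y},
\qquad
\frac{\partial v}{\partial z} = -\frac{g}{f\rho_0}\,\frac{\partial \rho}{\partial x}.
% The integration constant: geostrophic surface velocity from the slope of eta
u_s = -\frac{g}{f}\,\frac{\partial \eta}{\partial y},
\qquad
v_s = \frac{g}{f}\,\frac{\partial \eta}{\partial x}.
```

Vertical integration of the first pair gives the velocity profile up to a constant; the second pair, evaluated from the DOT, supplies that constant.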
It has formed the basis for a long and successful series of satellite oceanographic and geodetic space missions. During the lifetimes of the SEASAT and GEOSAT satellite missions, the marine geoid was uncertain to such an extent that only temporal changes were used for oceanic applications. For example, a collinear analysis technique producing sea surface anomalies relative to an unknown or undetermined mean was applied by Cheney and Marsh (1981). Indeed, the primary mission of GEOSAT was to approximate the marine geoid N by measuring a mean altimetric sea surface height (SSH) and correcting it for steric height referenced to a deep level (Douglas and Cheney 1990). The difference between SSH and N is the deviation of the real ocean surface from the geoid, denoted \(\eta \). It is a characteristic property related to ocean dynamics, similar to surface pressure for the atmosphere. This difference is frequently called the dynamic ocean topography (DOT); averaging it provides the mean dynamic topography (MDT). Although oceanographers conventionally call this quantity differently, it is now well understood in the space oceanographic community. The time-varying difference between DOT and MDT is denoted the sea-level anomaly (SLA). The joint estimation of N and SSH is attractive because the accuracies of gravity and ocean information differ in the spectral domain. The geoid is best known at very long wavelengths, with rapid error growth towards shorter wavelengths (blue spectrum). In contrast, the ocean measurements are mostly accurate on small scales and accumulate error on longer scales (red spectrum), see e.g. Rio and Hernandez (2004). Thus, the combination can result in smaller errors at both the shorter and longer wavelengths. For a long time, it was difficult or even impossible to use the DOT for ocean studies to derive unmeasured quantities, e.g. by data assimilation and inverse modelling (Verron 1992).
The difficulty arose from the fact that N was quite uncertain, such that only the SLA was representative. One approach to handle DOT data was to replace the mean of the time-dependent DOT by one derived from an ocean model or from an in situ ocean data analysis (Stammer 1997). A different approach was to constrain the SLA separately from the MDT (Wenzel et al. 2001; Stammer et al. 2002). With the first observations from the low earth orbiting CHAMP satellite, the situation changed significantly. The CHAMP geoid (Reigber et al. 2002) was sufficiently accurate to subtract it from measurements by altimetric satellites (TOPEX/Poseidon, ERS1/2, JASON), see e.g. Seufer et al. (2003). However, the issue of geoid errors on smaller scales remained until the GRACE (Gravity Recovery and Climate Experiment) geoid became available. First studies by Birol et al. (2004, 2005) show the impact of using a geoid of much better resolution instead of a mean dynamic topography MDT. They also consider the issues of geoid error and resolution, as they were limited to a spherical harmonic cut-off degree of \(L=60\). Stammer et al. (2007) continue their earlier work and constrain the time mean model surface by altimetry minus the GRACE (GGM01c) geoid. Anomalies are constrained separately. The authors find little impact on the solution and discuss sensitivities as well as insufficient accuracy in the Southern Ocean. A more recent work by Haines et al. (2011) reviews the current research status in using geoid data derived from GRACE to constrain modern ocean general circulation models (OGCMs). The authors also discuss the future prospects of using an improved geoid from the GOCE mission. The need for an even higher-resolution GOCE (Gravity and steady-state Ocean Circulation Explorer) geoid for ocean studies had been pointed out by e.g. LeGrand and Minster (1999), Schröter et al. (2002) and many others.
LeGrand (2001) demonstrates how an accurate marine geoid could be used to determine oceanic transports of heat and mass with unprecedented precision. With the success of the GOCE mission, there are accurate satellite products of DOT that can be assimilated for estimating the ocean state (Rummel 1999; Haines et al. 2011; Rio et al. 2014; Carrere et al. 2016; Pail 2015). In this study, we focus on the deep ocean and show the current achievements in assimilating such a combined product using the finite-element sea-ice ocean model (FESOM, Wang et al. 2014). Indeed, we can support the findings by Stammer et al. (2007) about the accuracy and about deficiencies in the Southern Ocean. In our present study, we are able to use a DOT with a spherical harmonic cut-off degree of 200 and observe the biggest improvements in the Southern Ocean. Oceanic transports based on measured hydrography and the slope of the DOT (i.e. surface geostrophic velocities) are uncertain to some extent. In the deep ocean, errors of only 5 cm in DOT lead to errors on the order of 20 Sv (1 Sverdrup corresponds to \(10^6\,\hbox {m}^3\,\hbox {s}^{-1}\)). Almost all ocean currents transport less volume, which demonstrates the necessity for a highly accurate DOT. Furthermore, velocity fields derived from measurements alone do not obey mass conservation. To make them mass-consistent, it is common to combine the measurements with an ocean model. The estimation of absolute dynamic topography within a system based on an ocean model with assimilation of combined geoid and altimetry data is usually performed by one of two data assimilation approaches: an iterative four-dimensional variational (4D-Var) method minimizing a cost function (Talagrand and Courtier 1987) that measures the discrepancy between observations and the model, or a sequential data assimilation scheme based on the ensemble Kalman filter (EnKF; Evensen 1994).
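The quoted sensitivity of transports to DOT errors can be checked with back-of-envelope geostrophy. In the sketch below (our illustration, not a calculation from the paper), a DOT error \(d\eta\) across a current of full depth H produces a transport error \(g\,d\eta\,H/f\) that is independent of the current's width; the depth and Coriolis parameter are assumed values:

```python
# Geostrophic transport error from a DOT error: dv = g*d_eta/(f*L) over a
# width L and depth H gives dT = dv*H*L = g*d_eta*H/f (width cancels).
g = 9.81        # m s^-2
f = 1.0e-4      # s^-1, mid-latitude Coriolis parameter (assumed)
d_eta = 0.05    # m, the 5 cm DOT error quoted in the text
H = 4000.0      # m, assumed full-depth current

dT_sv = g * d_eta * H / f / 1.0e6   # transport error in Sverdrup
print(round(dT_sv, 1))              # ~19.6 Sv, i.e. "on the order of 20 Sv"
```

The result reproduces the order-of-20-Sv figure quoted in the text.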
In the case of EnKFs, data are treated when they become available and the system is driven by short-range model forecasts. The sequential assimilation approach has been applied in several studies like De Mey and Benkiran (2002) and Bertino and Lisaeter (2008). For the assimilation, the MDT and the SLA data are merged to be assimilated together as an absolute signal. There are three main issues related to the sequential assimilation approach. The first issue is how to take the errors in the MDT correctly into account so that they are distinguished from the SLA errors. Dobricic (2005) assumed that the error in the MDT field is introduced into the assimilation system as a temporally constant and spatially variable observational bias. The author tried to estimate the MDT error from the differences between long-term averages of MDT and the instantaneous MDT field from the previous time step. The chosen method resulted in an improved SSH analysis. Lea et al. (2008) estimated the MDT errors using a Bayesian approach in the combined MDT and SLA assimilation as an observational bias. In our previous studies (Skachko et al. 2008; Janjić et al. 2011, 2012a, b), we decided not to separate the geoid and altimetry errors, but rather to increase the assumed DOT errors in the data assimilation system. The second issue of the sequential assimilation concerns the model performance. As previously stated in Skachko et al. (2008), the predecessor version FEOM (Danilov et al. 2004) of the FESOM model showed a significant sea surface level drift away from the observations. This bias prevented the direct assimilation of the satellite DOT product. To correct the model prior for the data assimilation, the idea of adiabatic pressure correction (Sheng et al. 2001; Eden et al. 2004) was applied. Thus, the sea-level drift was associated with systematic changes in the thermohaline structure. The chosen method removed the model bias only partially and thus remained suboptimal.
Finally, the third issue in the sequential data assimilation is how to adequately redistribute the observational update at the surface into the ocean depth. In Skachko et al. (2008), we had chosen the method by Fukumori et al. (1999), where the temperature and salinity updates follow the first baroclinic mode in the vertical direction. However, such vertical modes deviate from real modes of variability, which are affected by thermal wind and variable bottom topography and are sensitive to the horizontal amplitude of the perturbations. As an alternative approach, Janjić et al. (2011, 2012a, b) directly utilized the vertical correlations that are estimated from an ensemble of model state realizations in an ensemble-based SEIK filter. These studies are continued here. To improve the state estimation by assimilating DOT data in the present work, the current model version of FESOM (Wang et al. 2014) is used with an increased resolution compared to the previous studies. In addition, an improved surface forcing derived from the CORE-II inter-annual forcing (Large and Yeager 2008) is used. Compared to the studies by Janjić et al. (2011, 2012a, b), a newer ensemble-based Kalman filter, the ESTKF (Nerger et al. 2012a), is applied to assimilate the dynamic ocean topography data; it keeps the ensemble variance better distributed over all ensemble members than the LSEIK filter used by Janjić et al. (2011). Further differences include a much more accurate geoid model based on the final GOCE product as well as improved along-track altimetry. Finally, we focus on changes in the deep ocean and show how even a short assimilation time can be used to improve modelled ocean fields over a significant period. The paper is organized as follows. Section 2 describes the ocean circulation model FESOM. The observations are described in Sect. 3, followed by the description of the data assimilation method in Sect. 4.
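The ensemble-based vertical projection can be illustrated with a toy water column: for a single \(\eta \) observation, the ensemble Kalman update of each subsurface variable is the ensemble covariance of that variable with \(\eta \), divided by the \(\eta \) variance plus observation error variance, times the surface innovation. Everything below (vertical structure, amplitudes, innovation) is a synthetic placeholder, not data from the paper:

```python
import numpy as np

# Toy column: eta and the temperature profile covary through a common
# amplitude, so the ensemble covariance spreads a surface innovation
# into the depth with the (here prescribed) vertical structure.
rng = np.random.default_rng(2)
m, nz = 32, 39                         # ensemble size, vertical levels
mode = np.exp(-np.arange(nz) / 8.0)    # assumed vertical structure
amp = rng.normal(0.0, 0.05, m)         # per-member amplitude [m]
eta = amp + rng.normal(0.0, 0.005, m)  # surface elevation members
T = 10.0 + 40.0 * np.outer(mode, amp)  # temperature members (nz x m)

d_eta = 0.03                           # innovation: obs minus mean eta [m]
r2 = 0.05**2                           # obs error variance (5 cm)
cov = ((T - T.mean(1, keepdims=True)) @ (eta - eta.mean())) / (m - 1)
gain = cov / (np.var(eta, ddof=1) + r2)
dT = gain * d_eta                      # temperature update per level
print(dT[0], dT[-1])                   # update decays with depth
```

Because the toy temperature field is an exact multiple of the amplitude, the update decays with depth exactly like the prescribed vertical structure.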
The results of the data assimilation experiments are discussed with a focus on the vertical structure of the changes induced by the data assimilation procedure in Sect. 5. The impact of the data assimilation at different depths and at the surface is discussed in Sect. 6. Section 7 concludes the paper. The numerical experiments of this study have been performed with the Finite-Element Sea-ice Ocean Model (FESOM) (Danilov et al. 2004; Wang et al. 2008, 2014; Timmermann et al. 2009). FESOM is a global coupled ocean-sea ice general circulation model built on finite elements. It uses unstructured triangular meshes in the horizontal directions and tetrahedral elements in the volume. The model uses a continuous linear representation for the horizontal velocity, surface elevation, temperature and salinity, and solves the standard set of hydrostatic ocean dynamic primitive equations. It uses a finite-element flux-corrected transport algorithm for tracer advection (Löhner et al. 1987). The configuration of FESOM used in this study is the same as that used in the CORE-II intercomparison study, see, e.g., Danabasoglu et al. (2014). The important parameters and characteristics of the model can be found there. The main principles of FESOM and examples of its sensitivity to important governing parameters are discussed by Wang et al. (2014). The model mesh is configured such that the computational North Pole is located on Greenland. The horizontal resolution varies from about 100 km in the open ocean to 25 km in the vicinity of Greenland and to around 30–50 km in the equatorial belt. There are 39 z-levels in the vertical direction. The layer thickness is 10 m in the top ten surface layers and then increases monotonically to 250 m. Vertical mixing is parameterized using the scheme by Pacanowski and Philander (1981) with a background vertical diffusion of \(10^{-4} \hbox { m}^{2}\,\hbox {s}^{-1}\) for momentum and \(10^{-5}~\hbox {m}^2\,\hbox {s}^{-1}\) for tracers.
To avoid unrealistically shallow mixed layers that might occur in summer, we introduced an additional diffusivity of \(0.01~\hbox {m}^2\,\hbox {s}^{-1}\) over the surface mixed layer depth defined by the Monin–Obukhov length (Timmermann and Beckmann 2004). The effects of subgrid-scale processes are parameterized using tracer mixing along isopycnals (Redi 1982) and the Gent and McWilliams parameterization (Gent and McWilliams 1990). The model is forced by the CORE-II inter-annual forcing (Large and Yeager 2008). The ocean and sea ice are first spun up for 35 years, beginning from climatological temperature and salinity, before the data assimilation is applied for the year 2004. Like all models that use the Boussinesq approximation, FESOM conserves volume and not mass. Apart from a gain in numerical efficiency, there is a serious reason for this. Mass conservation would require sufficient knowledge about the inflow and outflow of fresh water through the boundaries of the ocean, i.e. precipitation–evaporation, inflow by rivers, ground water and ice streams. These fluxes are quite large and may be estimated. However, while the relative error of these fluxes may be small, the absolute error is so big that it makes the balance uncertain to an equivalent on the order of 10 mm per year. Accordingly, sea-level change cannot be retrieved from model simulations but has to be measured by tide gauges and is a prime target of space altimetric missions. The model equivalent of the geodetic DOT is the model's surface elevation \(\eta \), which is closely related to DOT. Oceanographers reference \(\eta \) to their coordinate system (z) and define \(z=0\) to be identical to the geoid N. Thus, any secular changes in N such as GIA, self-gravitation, etc., are not visible to the ocean model. Associated changes in ocean bottom topography are neglected in general circulation models.
Volume conservation implies that not \(\eta \) itself but its horizontal gradient is modelled correctly, and only the equation $$\begin{aligned} \nabla \eta = \nabla \mathrm{DOT} \end{aligned}$$ holds. As a consequence, we set \(\eta + \mathrm{const} = \mathrm{DOT}\) and estimate the constant to be 47 cm by fitting the average \(\eta \) to the DOT over the observed area. The observations that are assimilated are geodetic dynamic ocean topography (DOT) data. They are derived from filtered geoid and altimetry data in the form of a filter-corrected difference. We only provide a short overview of the method here; a more detailed description can be found in Albertella et al. (2012). Generally, the DOT is estimated as the difference \(\mathrm{DOT} = \mathrm{SSH} - N\) of the sea surface height SSH monitored by satellite altimetry and the geoid height N, which describes a geopotential surface at the sea level. The quantities SSH and N have different spectral properties, so that both need to be filtered in a consistent way (Bingham et al. 2008) to compute the difference. In particular, the geoid height N is taken from a satellite-only gravity field such as GOCO03S (Mayer-Gürr et al. 2012) and is rather smooth. In contrast, the sea surface heights were computed from the altimeter missions ENVISAT, GFO, Jason-1 and TOPEX/Poseidon and contain a rich spectrum of details observed by the satellite altimeters. The filtering is performed using the approach by Bosch and Savcenko (2010). Here, the instantaneous SSH is filtered along-track and at the locations where it has been observed. This leads to estimates of instantaneous DOT (iDOT) profiles. To ensure that the along-track SSH is filtered in the same way as N, a filter-correction term was computed using the ultra-high resolving gravity field model EGM2008 (Pavlis et al. 2012) in spherical harmonics up to degree and order 2160.
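The constant-offset estimation can be sketched in a few lines: since the model constrains only \(\nabla \eta \), the additive constant is the area-weighted mean of \(\mathrm{DOT} - \eta \) over the observed area. The fields below are synthetic placeholders, constructed so that the fitted constant reproduces the 47 cm quoted in the text:

```python
import numpy as np

# Offset estimation: average DOT - eta over wet points with cosine
# (area) weights in latitude.  Both fields here are synthetic.
rng = np.random.default_rng(1)
eta = rng.normal(0.0, 0.1, size=(90, 180))              # model eta [m]
dot = eta + 0.47 + rng.normal(0.0, 0.001, (90, 180))    # "observed" DOT [m]
lat = np.linspace(-89.5, 89.5, 90)
w = np.cos(np.deg2rad(lat))[:, None] * np.ones((90, 180))  # area weights

const = np.average(dot - eta, weights=w)
print(round(const, 2))  # ~0.47 m, i.e. the 47 cm offset quoted in the text
```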
This filter-correction term accounts for the difference between the one-dimensional filtering of the instantaneous along-track sea surface heights and the two-dimensional filtering of the geoid. The filtering was applied with an isotropic Gauss-type filter as proposed by Jekeli (1981) with a filter length of 69 km, corresponding approximately to spherical harmonic degree \(L = 210\). For the present paper, the full data set has been evaluated for nearly all sea surface height profiles observed by altimeter satellites operated between 1993 and 2011. A comprehensive cross-calibration of the multi-mission altimeter scenario has been performed in advance (Bosch et al. 2014). For this, a 10-day sampling was used such that all iDOT profiles observed within the 10-day intervals were first edited for spurious profiles, then averaged to a global grid with 30\('\) spacing and subsequently interpolated to the nodes of the model grid.

Ensemble filter method

The data assimilation is performed using the error-subspace transform Kalman filter (ESTKF, Nerger et al. 2012b) with observation localization (see Nerger et al. 2012a). The filter algorithm is provided in the Appendix. Here, we present a short overview of the assimilation concept. The ESTKF is an ensemble square root Kalman filter that assimilates the observational data sequentially in time. For this, an ensemble of model states is used to represent the state estimate and its uncertainty. A forecast ensemble is computed by integrating all ensemble members with FESOM until the time \(t_k\) when observations are assimilated. At this time, the ensemble mean state represents the forecast state estimate, while the uncertainty is estimated by the covariance matrix sampled by the forecast ensemble. At the observation time, an analysis step is computed that incorporates the information from the observations into the model state ensemble.
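In the spectral domain, the Gauss-type filter of Jekeli (1981) used for the DOT filtering multiplies each spherical harmonic degree by a smoothing weight. A minimal sketch of these weights using the standard recursion (see also Wahr et al. 1998), normalized to \(W_0 = 1\); the 69 km filter length is the value given in the text, everything else is an illustration:

```python
import numpy as np

# Degree-wise weights of the isotropic Gaussian smoother (Jekeli 1981):
# b = ln2 / (1 - cos(r/a)) for half-width r on a sphere of radius a,
# followed by the three-term upward recursion for the weights.
def jekeli_weights(nmax, r=69.0e3, a=6371.0e3):
    b = np.log(2.0) / (1.0 - np.cos(r / a))
    w = np.empty(nmax + 1)
    w[0] = 1.0
    e2b = np.exp(-2.0 * b)  # underflows to 0.0 for small r; that is fine
    w[1] = (1.0 + e2b) / (1.0 - e2b) - 1.0 / b
    for n in range(1, nmax):
        w[n + 1] = w[n - 1] - (2.0 * n + 1.0) / b * w[n]
    return w

w = jekeli_weights(210)
print(w[0], w[210])  # weights decay smoothly towards high degrees
```

Filtering then amounts to multiplying the degree-n coefficients of both SSH and N by w[n], which is what makes the SSH and geoid filtering consistent.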
The analysis is computed locally for each water column of the model grid, considering only observations within a specified influence radius l. In addition, the observations are weighted according to their horizontal distance from the water column using a correlation function with compact support. This function is the 5th-order piecewise rational function of Gaspari and Cohn (1999), whose shape is similar to a Gaussian function. The weighting function is isotropic and decreases monotonically with distance depending on the correlation length scale l/2. The function is positive only for distances that are less than l and zero otherwise. The localized ESTKF is implemented in the parallel data assimilation framework (PDAF, Nerger and Hiller 2013, http://pdaf.awi.de). FESOM is coupled to PDAF into a single parallel programme that computes both the ensemble forecasts as well as the analysis step.

Configuration of assimilation system

The assimilation experiment is performed over the full year 2004. Observations are assimilated every 10th day. An ensemble of 32 members is used. A preliminary sensitivity study was performed to tune the influence radius for the localization, showing that a radius of 580 km provided the smallest observation-minus-forecast (OmF) errors. Before each analysis step, a covariance inflation is applied in the form of a so-called forgetting factor, which increases the spread of the forecast ensemble by 11.8% to stabilize the data assimilation process. The state vector includes the two-dimensional \(\eta \) field as well as the three-dimensional fields of temperature, salinity, and the three components of the velocity. In addition, the variables of the sea-ice model are included. The ensemble for the assimilation is initialized by combining an initial state estimate with an estimate of the uncertainty.
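The compactly supported distance weighting can be written down directly. Below is a sketch of the 5th-order piecewise rational function of Gaspari and Cohn (1999) with support radius l and length scale c = l/2; the 580 km radius is the tuned value from the text, the sample distances are arbitrary:

```python
import numpy as np

# Gaspari-Cohn (1999) 5th-order piecewise rational function: weight 1
# at zero distance, Gaussian-like decay, exactly 0 for r >= l.
def gaspari_cohn(r, l):
    c = l / 2.0            # correlation length scale, as in the text
    z = np.abs(r) / c
    w = np.zeros_like(z, dtype=float)
    inner = z <= 1.0
    outer = (z > 1.0) & (z < 2.0)
    zi = z[inner]
    w[inner] = (-0.25 * zi**5 + 0.5 * zi**4 + 0.625 * zi**3
                - 5.0 / 3.0 * zi**2 + 1.0)
    zo = z[outer]
    w[outer] = (zo**5 / 12.0 - 0.5 * zo**4 + 0.625 * zo**3
                + 5.0 / 3.0 * zo**2 - 5.0 * zo + 4.0 - 2.0 / (3.0 * zo))
    return w

l = 580.0e3  # influence radius from the tuning experiment [m]
r = np.array([0.0, 145.0e3, 290.0e3, 580.0e3])
print(gaspari_cohn(r, l))  # 1.0 at the origin, 0.0 at the radius
```

In the analysis step, these weights are applied to the observation error covariance, down-weighting distant observations until they drop out entirely beyond l.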
The ensemble mean, representing the initial state estimate, was chosen to be the state at January 1, 2004, from the spin-up run over 35 years initialized from climatology and using the CORE-II surface forcing. The ensemble perturbations prescribe the uncertainty of the state estimate. They have been computed using second-order exact sampling (Pham 2001) from each tenth day of the trajectory of the reference run during the year 2004. The resulting ensemble spread was reduced by a factor of 0.3 so that the initial variance estimate of the ensemble was close to the root-mean-square difference between the initial state estimate and the observations. To account for the mean difference between the observations and the modelled \(\eta \), a constant of 47 cm was added to the model values. For the data assimilation, an observation error has to be specified, which represents a combined standard deviation of the observational and modelling (representativeness) errors. Pail (2015) reports geoid uncertainties at the 2–3 cm level, and Rio et al. (2014) demonstrate an accuracy of the MDT of 2–3 cm for the Mediterranean. However, we deal with the full, time-dependent DOT. Sakov et al. (2012) assume an observational error of 3–4 cm. Since the DOT data do not include a specification of observation errors, we assume a constant of 5 cm (including the representativeness error), consistent with our earlier studies (e.g. Janjić et al. 2012b). We did not attempt to apply spatially variable representation errors, which might be estimated from sea-level anomalies (see Sakov et al. 2012), because of our assimilation of absolute DOT. The observation errors are assumed to be Gaussian and uncorrelated, so that they are represented by a diagonal observation error covariance matrix in the data assimilation. Three experiments have been conducted in this study: FREE: This is a control model simulation over 360 days without data assimilation. ASSIM: In this experiment, the data assimilation system described in Sect.
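The ensemble initialization can be sketched as follows: take the leading EOFs of the reference-run snapshots and build perturbations whose sample mean is exactly zero and whose sample covariance exactly reproduces the leading-mode covariance (second-order exact sampling in the spirit of Pham 2001), then scale the spread by the factor 0.3 mentioned above. Dimensions and fields here are synthetic placeholders:

```python
import numpy as np

# Second-order exact sampling sketch: m ensemble members with exact
# mean xbar and sample covariance equal to the rank-(m-1) covariance
# of the snapshot anomalies, scaled by the spread factor 0.3.
rng = np.random.default_rng(3)
n, k, m = 50, 36, 32            # state size, snapshots, ensemble size
snaps = rng.normal(size=(n, k)) # "each tenth day" of a reference run
xbar = snaps.mean(axis=1, keepdims=True)

A = snaps - xbar                          # snapshot anomalies
U, s, _ = np.linalg.svd(A, full_matrices=False)
r = m - 1                                 # retained error modes
L = U[:, :r] * (s[:r] / np.sqrt(k - 1.0)) # L @ L.T = leading covariance

# Random orthonormal columns orthogonal to the vector of ones:
Q, _ = np.linalg.qr(np.hstack([np.ones((m, 1)), rng.normal(size=(m, r))]))
Omega = Q[:, 1:r + 1]                     # (m, r); Omega.T @ ones = 0

pert = np.sqrt(m - 1.0) * L @ Omega.T     # zero mean, exact covariance
ens = xbar + 0.3 * pert                   # spread reduced by factor 0.3
```

Because the columns of Omega are orthonormal and orthogonal to the ones vector, the ensemble mean equals xbar exactly and the sample covariance equals 0.09 times the leading-mode covariance.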
4 is applied and the observations are assimilated every 10th day over 360 days. In this experiment, we distinguish between the analysis states directly after the observations at some time are assimilated (referred to as ASSIM-A) and the forecast fields (denoted ASSIM-F), i.e. the model fields obtained at the end of each 10-day model integration of the ensemble states. INFOR: In this experiment, an initialized long forecast is computed from day 30. For this, the model is initialized with the analysis model state (ensemble mean) from the experiment ASSIM on day 30. This experiment is used to assess how the changes in the model states induced by 30 days of data assimilation remain preserved during the following model forecast over 11 months. The performance of the experiments is assessed first by using the root-mean-square deviation (RMSD) of the modelled DOT from the altimetry observations. A comparison with independent data is performed using the steric height (SH) at a depth of 2000 m from ARGO-Jamstec (Ishii and Kimoto 2009) data during the simulation year. This comparison is performed with monthly averages for both the model fields and the ARGO observations. Next to the assessment of variance over time, yearly mean differences of the modelled \(\eta \) and SH from the observations are examined. A comparison of modelled and measured temperatures at the deepest level of the ARGO data set shows the impact of assimilation at the end of the integration period.

Dynamic ocean topography

Figure 1 shows the RMSD of the modelled \(\eta \) from the altimetric DOT over the time period from day 10 to day 360. Without data assimilation (DA), the RMSD varies in the range between 10 and 12 cm during the year in the experiment FREE. In the experiment ASSIM, the DA reduces this deviation to 6.2 cm at the first analysis step at day 10. This deviation is further reduced by the continued DA such that the deviation after the final analysis step is about 3.5 cm.
During the 10-day forecasts, the RMSD grows by 2.1 cm at day 10 and later by less than 1 cm per 10-day interval, which is a typical behaviour for sequential data assimilation. The difference between ASSIM-A and ASSIM-F grows during the experiment, but is corrected in each analysis step. The experiment INFOR starts with the analysis of ASSIM at day 30. Thus, it has the same RMSD as ASSIM-A at day 30. Then, the RMSD grows during the 11 months of free model integration. Until day 240, the RMSD from INFOR gets closer to that of FREE. After this day, the growth levels off and the RMSD from INFOR stays between 2.0 and 2.5 cm lower than the RMSD from FREE. Thus, some information from the changes induced by the three analysis steps in the first month remains in the model state even after 11 months of model integration.

Fig. 1: Root-mean-square differences of the modelled DOT from the altimetry observations. The data assimilation results in a strong reduction of the difference, while in the initialized forecast INFOR the RMSD grows slowly.

The spatial distribution of the difference between the altimeter data and the modelled \(\eta \) averaged over the simulation year is shown in Fig. 2. Here, the maps of the mean differences are shown separately for the analyses and the 10-day forecasts. The FREE simulation (top left) shows significant differences from the data of partly more than 40 cm in the Southern Ocean and in the Kuroshio and Gulf Stream regions. In the Tropical Pacific, the model overestimates \(\eta \) by about 20 cm just north of the equator and underestimates \(\eta \) by about 10 cm at the equator, which is due to the limited resolution of the model grid.

Fig. 2: Annual mean differences (in cm) between altimetry data and modelled \(\eta \) for the four cases FREE (upper left); ASSIM-A (upper right); ASSIM-F (bottom left); INFOR (bottom right).
The assimilation strongly reduces the differences, which slowly grow again in the initialized forecast INFOR.

Fig. 3: Histograms of \(\eta \) differences between altimetry and model results: (top) total domain; (bottom right) Tropical Belt; (bottom left) Southern Ocean. The assimilation reduces the deviations in all regions.

Fig. 4: RMSD between altimetry data and modelled \(\eta \). The assimilation strongly reduces the RMSD. In the initialized forecast INFOR the RMSD are larger, but remain below the case FREE.

The data assimilation considerably reduces the \(\eta \)-differences, both in the analysis (ASSIM-A, top right) and the forecast (ASSIM-F, bottom left). All regions with high deviations in the experiment FREE are strongly improved. Further improvements are also visible in the Indian Ocean and the North Pacific. The errors are slightly lower in the analysis than in the forecast, e.g. in the Tropical Pacific and the Gulf Stream region, as is expected from the RMSD discussed before. As indicated by the RMSD in Fig. 1, some of the improvements of the modelled \(\eta \) by the DA remain in the initialized 11-month forecast experiment INFOR. This effect is also visible in the annual mean differences in the lower right panel of Fig. 2. Most notably, the differences in the Southern Ocean and the Kuroshio and Gulf Stream regions remain significantly lower in INFOR compared to FREE. In the equatorial Pacific and Atlantic, the differences are about 5 cm smaller in INFOR than in the experiment FREE. The reduction of deviations by the DA is quantified in Fig. 3 in the form of histograms. The histograms present the probability of differences between altimetry data and model \(\eta \). They are displayed for the total model domain and separately for the Tropical Belt (\(20^{\circ }\hbox {S}\) to \(20^{\circ }\hbox {N}\)) and the Southern Ocean (south of \(40^{\circ }\hbox {S}\)).
The histograms are truncated to the range of \(\pm \,25\) cm because larger deviations are very unlikely, except for the Southern Ocean as discussed below. For the whole model domain, the histogram for FREE is rather wide, with only about 52% of the surface grid points showing a deviation within the range \(\pm \,5\) cm, and 79% of the deviations within the range \(\pm \,10\) cm. The data assimilation results in narrower histograms with a more peaked shape. For ASSIM-F, 85% of the grid points show a deviation within the range of \(\pm \,5\) cm, and 96% within the range of \(\pm \,10\) cm. These probabilities are even higher for ASSIM-A, with 89% for \(\pm \,5\) cm and 97% for \(\pm \,10\) cm. Next to the spread of the deviations, also the magnitude of the mean deviation is reduced from 0.87 cm for FREE, to 0.33 cm in ASSIM-F and 0.24 cm in ASSIM-A. The long forecast in INFOR results in a wider distribution of the deviations compared to ASSIM. However, with 90% of the deviations within the range of \(\pm \,10\) cm and 67% within \(\pm \,5\) cm, the histogram is still much narrower than for the case FREE. For the Tropical Belt, the histograms are very similar to those of the total domain. In fact, the histograms are a bit narrower, so that the probability of deviations in the range of \(\pm \,5\) cm is about 5 percentage points larger for all four cases. The largest deviations are found in the Southern Ocean. For the case FREE, the histogram is very wide and only about 50% of the deviations are within the range of \(\pm \,10\) cm, and the mean deviation is 1.08 cm. As visible in Fig. 2, the data assimilation results in a strong reduction of the deviations in the Southern Ocean. Accordingly, the histograms for ASSIM-A and ASSIM-F are much narrower and 93% of the grid points are within the range of \(\pm \,10\) cm for ASSIM-F (95% for ASSIM-A).
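The percentage and mean-deviation statistics reported here are straightforward to reproduce. A sketch with a synthetic deviation field (the real input would be the altimetry-minus-model \(\eta \) field); the region bounds are those given in the text:

```python
import numpy as np

# Histogram-style statistics: fraction of grid points within +/- 5 and
# +/- 10 cm and the mean deviation, over the whole domain and over the
# latitude-band subregions used in the text.  Deviations are synthetic.
rng = np.random.default_rng(0)
lat = np.repeat(np.linspace(-89.5, 89.5, 90), 180)
dev = rng.normal(0.5, 6.0, lat.size)     # deviations in cm (illustrative)

def stats(d):
    return (np.mean(np.abs(d) <= 5.0),   # fraction within +/- 5 cm
            np.mean(np.abs(d) <= 10.0),  # fraction within +/- 10 cm
            d.mean())                    # mean deviation [cm]

whole = stats(dev)
tropics = stats(dev[(lat >= -20.0) & (lat <= 20.0)])  # Tropical Belt
southern = stats(dev[lat < -40.0])                    # Southern Ocean
print(whole, tropics, southern)
```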
However, the histograms are wider than for the total domain, so that the probability of deviations in the range of \(\pm \,5\) cm is lower. The data assimilation also reduced the mean error, to 0.45 cm in ASSIM-F and 0.41 cm in ASSIM-A. For the total domain and the Tropical Belt, the long forecast in INFOR results in a wider histogram compared to ASSIM-A and ASSIM-F, but a smaller spread than in the case FREE, with 79% of the grid points within the range of \(\pm \,10\) cm. Figure 4 summarizes the values of the RMSD over the simulation year, for the whole model domain and separately for the Tropical Belt and the Southern Ocean. For FREE, the averaged RMSD for the total domain is 10.85 cm. This deviation is reduced to about 47% (5.04 cm) in the analysis. In the 10-day forecasts, the averaged RMSD grows to 6.19 cm (i.e., about 57% of the RMSD from FREE). The RMSD in the Tropical Belt is lower than for the total domain. Here, the case FREE has an RMSD of only 7.29 cm, which is reduced to 3.78 cm in the analysis field of ASSIM-A. The error increase in the 10-day forecasts in the Tropical Belt is 1.18 cm and hence a little larger than the increase of 1.15 cm in the total domain. The Southern Ocean shows the largest deviations, but also the largest influence of the data assimilation. For the case FREE, the RMSD is 17.60 cm. The data assimilation reduces the RMSD by 59% to 7.14 cm in the analysis. The RMSD in the forecast is 1.17 cm larger. For the initialized forecast experiment INFOR, one sees that the increase in RMSD relative to that from FREE is particularly large in the Tropical Belt. Here, the RMSD is 6.26 cm and hence about 85% of that from the case FREE. The increase is particularly low for the Southern Ocean, where the RMSD for INFOR is about 63% of the RMSD from FREE, while for the total domain the RMSD increases to 73% of the value from FREE.

Fig. 5: Root-mean-square differences of the modelled SH from the ARGO-Jamstec data.
The continued assimilation keeps the RMS differences below 8 cm, while they grow slowly in the initialized forecast INFOR Annual mean differences in SH (in cm) between ARGO-Jamstec data and modelled SH for the four cases FREE (upper left); ASSIM-A (upper right); ASSIM-F (bottom left); INFOR (bottom right). Regions with depth below 2000 m are excluded. The differences in the initialized forecast INFOR are larger than in the continued data assimilation, but stay below those of FREE Histograms of SH differences between ARGO data and model results. (top) Total domain; (bottom right) Tropical Belt; (bottom left) Southern Ocean. As for DOT, the differences are significantly reduced by the assimilation Steric height In this section, a further analysis of the results is performed using the independent observations of the steric height from ARGO-Jamstec. While the DOT provides direct information only about the ocean surface, the SH is computed with a reference to 2000 m and hence provides vertically integrated information. Because the SH depends on the chosen reference depth, the same depth levels must be used for a comparison. However, the model depth and that of the ARGO analysis may differ in shallow areas, so only regions with depths of at least 2000 m are considered here. The observational data represent monthly means. To obtain a monthly mean of SH for the experiment ASSIM, the three analysis steps within each month are averaged. For the experiments FREE and INFOR, daily fields are averaged over each month. Figure 5 shows the area-averaged RMSD between the SH from the model and the SH data from ARGO floats over the 12 months of the experiments. For the experiment FREE without data assimilation, the RMSD is between 11.3 and 12.1 cm. The assimilation reduces the RMSD of the SH to about 7.5 cm in the analysis fields. As for the DOT, the 10-day forecasts increase the RMSD. 
However, for the SH this increase is only of the order of 0.2 cm after 10 days and hence much lower than for the DOT (see Fig. 1). The initialized long forecast INFOR also shows a growing RMSD for SH. Similar to the RMSD of the DOT, the RMSD of SH stabilizes at an asymptotic value from about day 210 of the experiment onwards, at about 9.4 cm, which is about 2.5 cm lower than the RMSD of the experiment FREE. Compared to the DOT, the RMSD of SH in INFOR grows more slowly. Further, the RMSD shows significantly less variability over time, which is a combined effect of the monthly averaging and the vertically integrated character of the SH. The spatial distributions of the SH difference between ARGO-Jamstec data and the model results are shown in Fig. 6 for all three experiments, where for ASSIM, the analysis and forecast are again shown separately. Regions with ocean depths below 2000 m are shown in white. As for the DOT, the largest differences of partly more than 35 cm are observed in the Southern Ocean in the experiment FREE. Large differences are also visible in the Kuroshio and Gulf Stream regions, and the equatorial region also shows deviations similar to those of the DOT. The data assimilation reduces all differences significantly, while in the initialized long forecast case INFOR, the differences grow again. Figure 7 shows the differences in the form of histograms depicting the probability of SH differences, analogous to Fig. 3. For the case FREE, the histograms are similar for the total domain and the Tropical Belt, while the histogram for the Southern Ocean is much wider. Compared to the DOT, the mean deviation is larger for the SH, with 4.85 cm for the total domain, 5.75 cm for the Tropics and 2.54 cm in the Southern Ocean. The data assimilation leads to much more strongly peaked histograms, such that for the total domain 84% of the grid points show deviations within the range of \(\pm \,5\) cm in ASSIM-A, compared to only 41% in FREE. 
Further, the mean error is reduced to about 2 cm, and the shape of the histograms is closer to Gaussian than for FREE. The histograms for the experiment INFOR in the different regions and globally are wider than those for ASSIM-F and ASSIM-A. Striking for the case INFOR is the non-Gaussian shape of the histogram, in particular in the Tropics and for the total domain. This indicates that the growth of the deviations in the long forecast is rather linear, such that the peaked shape of ASSIM-A is widened without creating significant tails of larger deviations. The summary of RMSD values averaged over the simulation year is shown in Fig. 8. For the total domain, the assimilation reduces the RMSD of SH to about 62%, from 11.82 to 7.34 cm. The RMSD increases only slightly, to about 66%, in the 10-day forecasts (ASSIM-F). The RMSD of the analysis in ASSIM-A is lower in the Tropical Belt with 6.64 cm and higher in the Southern Ocean with 8.18 cm. In these regions, the assimilation reduces the RMSD to 64% and 51%, respectively. As for the total domain, the 10-day forecasts increase the RMSD slightly, with the largest increase of about 4% in the Southern Ocean. Overall, the increase in the RMSD for SH is much smaller than that for DOT (see Fig. 4). In the initialized long forecast case INFOR, the RMSD of SH grows further. The growth of the deviation is largest for the Tropical Belt, where the average RMSD is 79% of the RMSD from FREE. For the Southern Ocean, the RMSD is 62% of that from FREE. Hence, while in the Southern Ocean the RMSD grows faster than in the Tropics in the 10-day forecasts, it shows a slower increase over the longer term. RMSD of SH between ARGO and model results. The error growth in the initialized forecast INFOR is smaller than for \(\eta \) Comparing the effect of the data assimilation on the DOT and the SH, one sees that the correction at each single analysis step is larger for the observed DOT. 
This causes the larger differences between ASSIM-A and ASSIM-F in Fig. 4 compared to Fig. 8. However, the long persistence of the corrections in the DOT and the strong impact on the SH show that the initially estimated covariances between \(\eta \) and the temperature and salinity fields are sufficiently realistic. This allows the data assimilation method to significantly correct the model state in the multivariate analysis step from the very beginning of the data assimilation experiment, with further improvements in the course of the assimilation. Compared to our earlier studies, this successful multivariate assimilation can be explained by the improved model and the realistic CORE-II forcing. Impact of the data assimilation at different depths and the surface To assess the impact of the data assimilation at different depths, Fig. 9 shows the percentage of model grid points for which the steric height is changed by the data assimilation by a certain amount over time. Considered are changes of up to 2 cm, between 2 and 5 cm, and of more than 5 cm. The panels of the figure show the percentages for different depth regions. For a depth between 0 and 200 m, the upper left panel shows that initially the SH of about 19% of the grid points is changed by more than 5 cm. About 33% of the grid points are changed by between 2 and 5 cm, while about 48% of the grid points are changed by up to 2 cm only. During the course of the data assimilation experiment (solid lines), the changes grow, as is seen from a decrease of the percentage of grid points changed by up to 2 cm and an increase of the percentages of grid points with changes in the bins from 2 to 5 cm and more than 5 cm. At the end of the experiment, about 38% of the grid points are changed by up to 2 cm, while 37% are changed by between 2 and 5 cm and 25% are changed by more than 5 cm. The influence of the model dynamics is visible from the temporal behaviour of the curves. 
In particular, the percentage of grid points changed by between 2 and 5 cm reaches a maximum after about 140 days. After about 170 days, the percentage of grid points changed by more than 5 cm shows a minimum of only about 17%, while the percentage of grid points changed by up to 2 cm shows a local maximum. After this time, the percentage of grid points changed by more than 5 cm grows, while the percentage of smaller changes shrinks. Percentage of grid points for which the difference between the steric height of the model state without data assimilation and that with data assimilation lies within a specified range. Considered are (solid) the analysis states ASSIM-A of the data assimilation over 1 year and (dashed) the forward run INFOR initialized from the assimilation analysis mean state after 1 month. The colors show the different magnitudes (red: up to 2 cm, blue: 2–5 cm, green: more than 5 cm). The four panels represent different depth intervals for which the steric heights are computed. Significant changes are visible down to 2000 m depth For the depth interval between 200 and 750 m, the initial and final changes are only slightly smaller than for the shallower depth interval. Initially, about 51% of the grid points are changed by up to 2 cm, 28% by between 2 and 5 cm, and 20% by more than 5 cm. At the end of the experiment, the changes are larger, and about 42% of the grid points are changed by up to 2 cm, 34% by between 2 and 5 cm, and 24% by more than 5 cm. Compared to the depth region of up to 200 m, the maximum of changes between 2 and 5 cm is reached later in the experiment, around day 270, where the minimum of the percentage of grid points changed by up to 2 cm is also reached. Below 750 m depth, the data assimilation still induces notable changes. In the range between 750 and 2000 m depth, about 12% of the grid points are changed by more than 5 cm, while about 24% are changed by between 2 and 5 cm. 
Also in this depth region, the continued data assimilation induces growing changes in the steric height. At the end of the experiment, about 48% of the grid points are changed by more than 2 cm and 16% even by more than 5 cm. In the deepest interval, between 2000 m and the ocean bottom, the changes are much smaller. During the course of the experiment, only about 1.5% of the grid points are changed by more than 5 cm. Changes between 2 and 5 cm are initially induced for 11% of the grid points. This number grows to about 16% by the end of the experiment. For the initialized forecast experiment INFOR, Fig. 9 shows that the assimilation-induced changes in the steric height are nearly preserved over the forecast period of 11 months. The dashed lines in Fig. 9 show that for the largest depth, the percentages of grid points changed by more than 5 cm and of those changed by between 2 and 5 cm remain nearly constant. The number of grid points changed by up to 2 cm grows by only one percentage point. For the depth interval between 750 and 2000 m, the number of grid points changed by more than 5 cm only shrinks from about 12 to 11%, while the number of grid points changed by between 2 and 5 cm shrinks from 24 to 20%. For the shallower depth intervals, about 48% of the grid points are changed by more than 2 cm. The largest changes of more than 5 cm are observed at the end of the experiment for 18% of the grid points in the depth interval of 200–750 m and for about 12% at less than 200 m depth. Comparing the experiments ASSIM and INFOR, one sees that the continued data assimilation leads to further growing changes compared to the initialized forecast run INFOR. Nonetheless, most of the induced changes in the steric height remain in the model state when the data assimilation is stopped after 1 month and the state estimate at this time is used to initialize a model forward simulation. 
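The bookkeeping behind the percentages in Fig. 9 amounts to binning the magnitude of the assimilation-induced SH change at each grid point into the three classes used in the text. A small illustration with synthetic fields (all variable names and numbers below are hypothetical stand-ins, not the paper's data):

```python
import numpy as np

# Synthetic steric-height fields (cm) for one depth interval:
# one state without and one with data assimilation.
rng = np.random.default_rng(1)
sh_free = rng.normal(size=5_000)                         # without assimilation
sh_assim = sh_free + rng.normal(scale=3.0, size=5_000)   # with assimilation

diff = np.abs(sh_assim - sh_free)   # magnitude of induced change
n = diff.size
pct_small = 100.0 * np.sum(diff <= 2.0) / n                   # up to 2 cm
pct_mid = 100.0 * np.sum((diff > 2.0) & (diff <= 5.0)) / n    # 2-5 cm
pct_large = 100.0 * np.sum(diff > 5.0) / n                    # more than 5 cm

print(f"<=2 cm: {pct_small:.0f}%, 2-5 cm: {pct_mid:.0f}%, >5 cm: {pct_large:.0f}%")
```

Repeating this for each analysis time and each depth interval yields the curves shown in the four panels of Fig. 9.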
Figure 10 compares the computed temperature of the three experiments with the ARGO data at a depth of 2000 m at the end of the assimilation cycle. At this depth, the free-running model (FREE) is too cold in the Pacific, especially in the south. In the Southern Ocean, large areas show a warm bias in the model. The assimilation reduces the errors considerably, as is visible for ASSIM-F. The differences between the INFOR integration and the ARGO data are very similar to those from ASSIM-F. Essentially, only error growth in the Southern Ocean is visible in INFOR. This indicates that most of the error reduction occurs during the first month of assimilation. Fig. 10 Temperature fields in December 2004 at a depth of 2000 m. Upper left panel: Measurements made by the ARGO profiling buoy system; Upper right panel: Difference between ARGO data and the free model; Bottom left panel: Difference between ARGO data and the model after assimilation; Bottom right panel: Difference between ARGO data and the INFOR run. After assimilation, the errors are reduced considerably Left panel: DOT section along the latitude \(59^{\circ }\)S in the Antarctic Circumpolar Current at the end of the assimilation. Right panel: The corresponding temperature section along the same latitude at the same time at a depth of 2000 m. The assimilated model is rather close to the observations, while the INFOR experiment lies between the other two, although not everywhere. By assimilation of DOT, also deep temperature fields are corrected Left panel: A south–north DOT section through the deep Pacific Ocean along the longitude \(202^{\circ }\hbox {E}\) at the end of the assimilation. Right panel: The corresponding temperature section along the same longitude at the same time at a depth of 2000 m. By assimilation of DOT, the position of the Antarctic Circumpolar Current can be shifted by a few hundred kilometres to its correct location. Modelled temperatures have a cold bias. 
To correct the position of the ACC, the temperature is increased substantially in the Southern Ocean Figure 11 shows the DOT and the temperature at a depth of 2000 m along the section at a latitude of 59\(^{\circ }\)S in the Antarctic Circumpolar Current at the end of the assimilation. The case FREE follows the observed DOT only very generally and differs locally by up to 0.4 m. The case ASSIM-F is much closer to the data, while the INFOR experiment generally lies between FREE and ASSIM-F, although not everywhere. As also shown in Fig. 10, the assimilation of DOT corrects deep temperature fields. Figure 11 shows that there is a close relationship between temperature and DOT in this region. While the observed DOT can be approached fairly closely, the modelled temperature remains too cold in most areas by a few tenths of a degree. The ocean state at the end of the assimilation year in a south–north section through the deep Pacific Ocean along the longitude \(202^{\circ }\hbox {E}\) is considered in Fig. 12. Again, both the DOT and the temperature at a depth of 2000 m are shown. Between \(50^{\circ }\hbox {S}\) and \(60^{\circ }\hbox {S}\), the assimilation of DOT data shifts the position of the Antarctic Circumpolar Current by a few hundred kilometres to its correct location. This correction is kept until the end of the year in the INFOR experiment. Further north, the case ASSIM is able to follow the observed DOT, while INFOR exhibits the same tendency as FREE in underestimating the DOT south of the equator and overestimating it north of it. Overall, the modelled temperatures have a cold bias. To correct the position of the ACC, the temperature is increased substantially in the Southern Ocean. However, the temperature always stays below the temperature measured by ARGO. The case INFOR is nearly identical to ASSIM-F over most of the section except in the equatorial region. 
Here, an interesting feature is visible, as ASSIM-F compensates for missing ocean dynamics by changing the temperature, and thus the steric height, on a short spatial scale. Summary and conclusions Absolute dynamic sea surface height (DOT) obtained from combining satellite altimetry and geoid data has been assimilated into a global configuration of the finite-element sea-ice ocean model (FESOM). The model was configured with a rather coarse horizontal resolution of about 100 km in the open ocean and a finer resolution in the vicinity of Greenland and in the equatorial band. The ocean surface was forced by the realistic CORE-II forcing. The assimilation applied the ensemble-based error-subspace transform Kalman filter (ESTKF) with localization. The assimilation experiment shows that the assimilation has the largest impact at the first analysis step, in which the root-mean-square difference (RMSD) between the modelled DOT and the altimetry observations is reduced from about 10.5 to 6.3 cm. Subsequent analysis steps, performed every 10th day, continue to reduce the deviation. However, their impact is only of the order of 1–1.5 cm and mainly compensates the increase in deviation resulting from each 10-day model forecast, so that the RMSD decreases gradually. The assimilation efficiently corrects large deviations of partly more than 40 cm, e.g. in the Southern Ocean and in the Gulf Stream and Kuroshio regions. An assessment of the difference between the steric height (SH) in the model state and data from ARGO-Jamstec shows that the assimilation corrects the model state also in the deep ocean. As for the modelled \(\eta \), the largest corrections of SH occur at the initial analysis time, while later analysis steps cause smaller corrections. The smallest deviations for both DOT and SH are found in the Tropical Belt, while the deviations are largest in the Southern Ocean. A single model forecast over 11 months was initialized from the state estimate after three analysis steps. 
This forecast shows that part of the improvements induced by assimilating DOT data remains in the model state over the full 11 months. The deviation from the satellite altimetry data increases gradually for about 6 months. After this time, the deviation shows no trend anymore, and even after 11 months, the deviation is about 2.5 cm less than the RMSD of the free model forecast without any data assimilation. The deviation of the SH from the ARGO-Jamstec data shows an analogous behaviour. Analysing the influence of the assimilation in the deeper parts of the ocean, one finds that for the SH below 2000 m, about 10% of the grid points are changed by between 2 and 5 cm during the first few analysis steps. This value remains roughly constant for the forecast run initialized after three analysis steps. For the continued assimilation, the fraction of grid points changed by between 2 and 5 cm gradually increases to about 17%. In the depth range between 750 and 2000 m, the corrections are larger. About 37% of the grid points are changed by more than 2 cm and 12% even by more than 5 cm at the beginning of the data assimilation. The continued assimilation increases the number of grid points corrected by more than 2 cm to about 48%, while in the 11-month initialized forecast, the fraction is gradually reduced to about 32%. Overall, the experiments show that with the realistic CORE-II forcing, the model is able to produce realistic dynamics, so that the correlations between DOT and temperature and salinity in the ensemble of model states that is used to initialize the data assimilation process are also realistic. This allows the ensemble filter to successfully correct the three-dimensional model fields even in deeper layers. Assessing the temperature field at a depth of 2000 m at the end of the assimilation, a clear improvement due to the assimilation of DOT data is visible. 
Also for temperature, the INFOR experiment retains the memory of the corrections induced during the first month of assimilation for the rest of the year. These results are very encouraging and possibly helpful when model trends have to be reduced for future applications. However, the assimilation was unable to fully remove the cold bias of our model at depth. This is not unexpected, as ensemble Kalman filters assume unbiased errors, which is not the case here. Thus, further work, such as the addition of a bias-correction scheme, is necessary to this end. While the assimilation impact is largest at the first analysis step, the continued assimilation has a gradual effect. Using the model state estimate after just three analysis steps shows that the corrections are retained in the model state for a long time period. This effect is likely caused by the rather coarse model resolution, which induces rather slow dynamics. The first natural step for continuing this series of studies would be to assimilate ARGO profiles in addition to the DOT. The ARGO array is coarse, with a nominal resolution of \(3^{\circ }\) by \(3^{\circ }\). Nevertheless, we may expect an impact on the large-scale fields and, hopefully, a reduction of the cold bias of our model. The second natural extension is to increase the model resolution substantially, such that eddies and oceanic fronts are realistically represented. Then, one could make use of the full resolution of the geodetic DOT and improve our understanding of ocean dynamics. Appendix: The error-subspace transform Kalman filter The error-subspace transform Kalman filter (ESTKF, Nerger et al. 
2012b) combines the information from a forecast ensemble \(\mathbf {X}^f_k\) of m model states of size n, stored in the columns of this matrix, with the observations \(\mathbf {y}_k\) of dimension p at the time \(t_k\) by the transformation $$\begin{aligned} \mathbf {X}_k^a = \overline{\mathbf {X}}_k^f + \mathbf {X}^f_k\mathbf {W}_k \end{aligned}$$ of the forecast ensemble into an analysis ensemble \(\mathbf {X}^a_k\) representing the analysis state estimate and its uncertainty. Here, \(\mathbf {W}_k\) is a transformation matrix of size \(m \times m\). The matrix \(\overline{\mathbf {X}}_k^f\) holds the ensemble mean state in each column. As all computations in the analysis refer to the time \(t_k\), the time index k is omitted below. The ESTKF computes the ensemble transformation matrix \(\mathbf {W}\) in the error subspace of dimension \(m-1\) that is represented by the forecast ensemble. An error-subspace matrix is defined by $$\begin{aligned} \mathbf {L} := \mathbf {X}^f \mathbf {T} \end{aligned}$$ where the matrix \(\mathbf {T}\) is a projection matrix of size \(m \times (m-1)\) defined by $$\begin{aligned} \mathbf {T}_{j,i} :=\left\{ \begin{array}{ll} 1 - \frac{1}{m}\frac{1}{\frac{1}{\sqrt{m}}+1}&{}\quad \mathrm {for}\ i=j, j<m \\ - \frac{1}{m}\frac{1}{\frac{1}{\sqrt{m}}+1}&{}\quad \mathrm {for}\ i\ne j, j<m\\ - \frac{1}{\sqrt{m}}&{}\quad \mathrm {for}\ j=m .\\ \end{array}\right. \end{aligned}$$ A model state vector \(\mathbf {x}^f\) and the vector of observations \(\mathbf {y}\) are related through the observation operator \(\mathbf {H}\) by $$\begin{aligned} \mathbf {y} = \mathbf {H}\left( \mathbf {x}^f\right) + \epsilon \ \mathrm {.} \end{aligned}$$ The vector of observation errors, \(\epsilon \), is assumed to be white Gaussian noise with observation error covariance matrix \(\mathbf {R}\). 
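The projection matrix \(\mathbf {T}\) can be built directly from this definition. The following NumPy sketch (with the hypothetical helper name `estkf_projection`) also checks its two key properties numerically: its columns are orthonormal, and each column sums to zero, so that \(\mathbf {L} = \mathbf {X}^f\mathbf {T}\) consists of ensemble perturbations spanning the \((m-1)\)-dimensional error subspace:

```python
import numpy as np

def estkf_projection(m: int) -> np.ndarray:
    """Projection matrix T of size m x (m-1) as defined above."""
    a = 1.0 / (m * (1.0 / np.sqrt(m) + 1.0))   # the recurring scalar term
    T = np.full((m, m - 1), -a)                # off-diagonal entries, j < m
    T[np.arange(m - 1), np.arange(m - 1)] += 1.0  # diagonal: 1 - a
    T[m - 1, :] = -1.0 / np.sqrt(m)            # last row, j = m
    return T

T = estkf_projection(8)
# Orthonormal columns, each orthogonal to the ensemble-mean direction:
print(np.allclose(T.T @ T, np.eye(7)), np.allclose(T.sum(axis=0), 0.0))
```

These two properties are what make \(\mathbf {T}\) a projection onto the error subspace: multiplying \(\mathbf {X}^f\) by \(\mathbf {T}\) implicitly subtracts the ensemble mean.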
For the analysis step, a transform matrix in the error-subspace is defined by $$\begin{aligned} \mathbf {A}^{-1} := \rho (m-1)\mathbf {I} + ({\mathbf {H}}{\mathbf {X}^f}\mathbf {T})^\mathrm{T} \mathbf {R}^{-1} \mathbf {H}{\mathbf {X}^f}\mathbf {T}\ \mathrm {.} \end{aligned}$$ The matrix has size \((m-1)\times (m-1)\) and \(\mathbf {I}\) is the identity matrix. The factor \(\rho \) with \(0 < \rho \le 1\) is called the "forgetting factor" and is used to inflate the forecast error covariance matrix. The analysis ensemble is given by $$\begin{aligned} \mathbf {X}^a = \overline{\mathbf {X}^f} + \mathbf {X}^f \left( \overline{\mathbf {W}} + \tilde{\mathbf {W}}\right) \end{aligned}$$ with \(\overline{\mathbf {W}} := \left[ \overline{\mathbf {w}}, \ldots , \overline{\mathbf {w}}\right] \) and $$\begin{aligned} \overline{\mathbf {w}}:= & {} \mathbf {T}\mathbf {A} \left( \mathbf {H} {\mathbf {X}^f}\mathbf {T}\right) ^\mathrm{T} \mathbf {R}^{-1} \left( \mathbf {y}-\mathbf {H} \overline{\mathbf {x}^f}\right) \ ,\end{aligned}$$ $$\begin{aligned} \tilde{\mathbf {W}}:= & {} \sqrt{m-1} \mathbf {T}\mathbf {C} \mathbf {T}^\mathrm{T}\ . \end{aligned}$$ Here, \(\overline{\mathbf {w}}\) is a vector of size m that corrects the ensemble mean, while \(\tilde{\mathbf {W}}\) transforms the ensemble perturbations. In Eq. (8), \(\mathbf {C}\) is the symmetric square root of \(\mathbf {A}\) that is computed from the eigenvalue decomposition \(\mathbf {U}\mathbf {S}\mathbf {U}^\mathrm{T} = \mathbf {A}^{-1}\) such that \(\mathbf {C} = \mathbf {U}\mathbf {S}^{-1/2}\mathbf {U}^\mathrm{T}\). For the localized analysis, each vertical water column of the model grid is updated independently by a local analysis step. We denote a water column by the index \(\sigma \), i.e. the local sub-state for a water column is \(\mathbf {x}^f_{\sigma }\). 
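Putting the equations above together, one global analysis step can be sketched as follows. This is a simplified illustration, assuming a linear observation operator given as a matrix and a diagonal \(\mathbf {R}\), and omitting the localization described next; the function and variable names are illustrative:

```python
import numpy as np

def estkf_analysis(Xf, y, H, R_inv_diag, rho=1.0):
    """One global ESTKF analysis step (sketch).

    Xf : (n, m) forecast ensemble; y : (p,) observations;
    H : (p, n) linear observation operator; R_inv_diag : (p,) diagonal
    of R^{-1}; rho : forgetting factor with 0 < rho <= 1.
    """
    n, m = Xf.shape
    # Projection matrix T (same definition as in the text)
    a = 1.0 / (m * (1.0 / np.sqrt(m) + 1.0))
    T = np.full((m, m - 1), -a)
    T[np.arange(m - 1), np.arange(m - 1)] += 1.0
    T[m - 1, :] = -1.0 / np.sqrt(m)

    xbar = Xf.mean(axis=1)
    HXT = H @ Xf @ T                         # H X^f T, shape (p, m-1)
    Ainv = rho * (m - 1) * np.eye(m - 1) + HXT.T @ (R_inv_diag[:, None] * HXT)

    # Symmetric square root C of A via the eigendecomposition of A^{-1}
    s, U = np.linalg.eigh(Ainv)
    A = U @ np.diag(1.0 / s) @ U.T
    C = U @ np.diag(s ** -0.5) @ U.T

    wbar = T @ (A @ (HXT.T @ (R_inv_diag * (y - H @ xbar))))  # mean update
    Wtil = np.sqrt(m - 1) * T @ C @ T.T                       # perturbations
    return xbar[:, None] + Xf @ (wbar[:, None] + Wtil)

# Tiny example: n = 3 state variables, m = 4 members, p = 2 observations
rng = np.random.default_rng(2)
Xf = rng.normal(size=(3, 4))
H = np.eye(2, 3)   # observe the first two state variables
Xa = estkf_analysis(Xf, y=np.array([0.5, -0.2]), H=H,
                    R_inv_diag=np.full(2, 1.0 / 0.04))
```

Because \(\mathbf {A}^{-1}\) has size \((m-1)\times(m-1)\), the cost of the transformation is governed by the small ensemble size rather than the state dimension, which is what makes the square-root filter practical for large models.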
For each local analysis, only observations within a horizontal influence radius l are taken into account, so that observations from outside the water column \(\sigma \) are also used. A local observation domain is denoted by the index \(\delta \). The local observation operator \(\mathbf {H}_{\delta }\) now computes an observation vector within the local observation domain from the global model state. With localization, Eq. (1) is applied with an individual matrix \(\mathbf {W}\) for each local analysis domain. Each observation is weighted according to its distance from the water column by the observation localization (Hunt et al. 2007). The weight is applied by replacing the inverse observation error covariance matrix in Eqs. (5) and (7) by $$\begin{aligned} \tilde{\mathbf {R}} = \mathbf {D}_{\delta } \circ \mathbf {R}^{-1}_{\delta }\ . \end{aligned}$$ Here, \(\circ \) denotes the element-wise matrix product. \(\mathbf {D}_{\delta }\) is the localization weight matrix that is constructed from a correlation function with compact support. The value of the localization function decreases with the distance between the updated water column and the location of the observation until it becomes zero at the specified influence radius. A typical localization function is a 5th-order polynomial with a shape similar to a Gaussian function (Gaspari and Cohn 1999). A parallelized implementation of the local ESTKF is available in the open-source release of the parallel data assimilation framework (PDAF, Nerger and Hiller 2013, http://pdaf.awi.de). Albertella A, Savcenko R, Janjic T, Rummel R, Bosch W, Schröter J (2012) High resolution dynamic ocean topography in the Southern Ocean from GOCE. Geophys J Int 190:922–930 Bertino L, Lisaeter K (2008) The TOPAZ monitoring and prediction system for the Atlantic and Arctic oceans. 
J Oper Oceanogr 1(2):15–18 Bingham RJ, Haines K, Hughes CW (2008) Calculating the ocean's mean dynamic topography from a mean sea surface and a geoid. J Atmos Ocean Technol 25:1808–1822 Birol F, Brankart JM, Castruccio F, Brasseur P, Verron J (2004) Impact of ocean mean dynamic topography on satellite data assimilation. Marine Geod 27:59–78 Birol F, Brankart JM, Lemoine JM, Brasseur P, Verron J (2005) Assimilation of satellite altimetry referenced to the new GRACE geoid estimate. Geophys Res Lett 32(6):L06601 Bosch W, Dettmering D, Schwatke C (2014) Multi-mission cross-calibration of satellite altimeters: constructing a long-term data record for global and regional sea level change studies. Remote Sens 6:2255–2281 Bosch W, Savcenko R (2010) On estimating the dynamic ocean topography—a profile approach. In: Mertikas S (ed) Gravity, Geoid and Earth Observation, IAG Symposia, vol 135. Springer, Berlin, pp 263–269 Carrere L, Faugere Y, Ablain M (2016) Major improvement of altimetry sea level estimations using pressure-derived corrections based on ERA-Interim atmospheric reanalysis. Ocean Sci 12:825–842. https://doi.org/10.5194/os-12-825-2016 Cheney RE, Marsh JG (1981) Seasat altimeter observations of dynamic topography in the Gulf Stream region. J Geophys Res Oceans 86(C1):473–483 Danabasoglu G, Yeager SG, Bailey D, Behrens E, Bentsen M, Bi D, Biastoch A, Böning C, Bozec A, Canuto VM, Cassou C, Chassignet E, Coward AC, Danilov S, Diansky N, Drange H, Farneti R, Fernandez E, Fogli PG, Forget G, Fujii Y, Griffies SM, Gusev A, Heimbach P, Howard A, Jung T, Kelley M, Large WG, Leboissetier A, Lu J, Madec G, Marsland SJ, Masina S, Navarra A, Nurser AG, Pirani A, Salas y Mélia D, Samuels BL, Scheinert M, Sidorenko D, Treguier A-M, Tsujino H, Uotila P, Valcke S, Voldoire A, Wang Q (2014) North Atlantic simulations in coordinated ocean-ice reference experiments phase II (CORE-II). Part I: mean states. 
Ocean Model 73:76–107 Danilov S, Kivman G, Schröter J (2004) A finite-element ocean model: principles and evaluation. Ocean Model 6:125–150 De Mey P, Benkiran M (2002) A multivariate reduced-order optimal interpolation method and its application to the Mediterranean basin-scale circulation. In: Pinardi N, Woods J (eds) Ocean forecasting, conceptual basis and applications. Springer, Berlin, pp 281–305 Defant A (1941) Die absolute Topographie des physikalischen Meeresniveaus und der Druckflächen, sowie die Wasserbewegungen im Atlantischen Ozean. Wiss Ergebn Dtsch Atlant Exped Meteor 6/2(5):318 Dobricic S (2005) New mean dynamic topography of the Mediterranean calculated from assimilation system diagnostics. Geophys Res Lett 32(11):L11606 Douglas BC, Cheney R (1990) Geosat: beginning a new era in satellite oceanography. J Geophys Res Oceans 95(C3):2833–2836 Eden C, Greatbatch R, Böning C (2004) Adiabatically correcting an eddy-permitting model using large-scale hydrographic data: application to the Gulf Stream and the North Atlantic Current. J Phys Oceanogr 34:701–719 Evensen G (1994) Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J Geophys Res 99(C5):10143–10162 Fukumori I, Raghunath R, Fu L, Chao Y (1999) Assimilation of TOPEX/Poseidon altimeter data into a global ocean circulation model: how good are the results? J Geophys Res 104:25647–25665 Gaspari G, Cohn SE (1999) Construction of correlation functions in two and three dimensions. Q J R Meteorol Soc 125:723–757 Gent P, McWilliams J (1990) Isopycnal mixing in ocean circulation models. J Phys Oceanogr 20:150–155 Haines K, Johannessen JA, Knudsen P, Lea D, Rio M-H, Bertino L, Davidson F, Hernandez F (2011) An ocean modelling and assimilation guide to using GOCE geoid products. Ocean Sci 7(1):151–164 Hunt BR, Kostelich EJ, Szunyogh I (2007) Efficient data assimilation for spatiotemporal chaos: a local ensemble transform Kalman filter. 
Physica D 230:112–126 Ishii M, Kimoto M (2009) Reevaluation of historical ocean heat content variations with time-varying XBT and MBT depth bias corrections. J Oceanogr 65:287–299 Janjić T, Nerger L, Albertella A, Schröter J, Skachko S (2011) On domain localization in ensemble based Kalman filter algorithms. Mon Weather Rev 139:2046–2060 Janjić T, Schröter J, Albertella A, Bosch W, Rummel R, Schwabe J, Scheinert M (2012a) Assimilation of geodetic dynamic ocean topography using ensemble based Kalman filter. J Geodyn 59–60:92–98 Janjić T, Schröter J, Savcenko R, Bosch W, Albertella A, Rummel R, Klatt O (2012b) Impact of combining GRACE and GOCE gravity data on ocean circulation estimates. Ocean Sci 8:65–79 Jekeli C (1981) Alternative methods to smooth the Earth's gravity field. Rep. 327, Dept. of Geodetic Science and Surveying, Ohio State University, Columbus Large WG, Yeager SG (2008) The global climatology of an interannually varying air-sea flux data set. Clim Dyn 33(2):341–364 Lea DJ, Drecourt J-P, Haines K, Martin MJ (2008) Ocean altimeter assimilation with observational- and model-bias correction. Q J R Meteorol Soc 134(636):1761–1774 LeGrand P, Minster J-F (1999) Impact of the GOCE gravity mission on ocean circulation estimates. Geophys Res Lett 26(13):1881–1884 LeGrand P (2001) Impact of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission on ocean circulation estimates: volume fluxes in a climatological inverse model of the Atlantic. J Geophys Res 106(19):19597–19610 Löhner R, Morgan K, Peraire J, Vahdati M (1987) Finite-element flux-corrected transport (FEM-FCT) for the Euler and Navier–Stokes equations. Int J Numer Methods Fluids 7:1093–1109 Mayer-Gürr T et al (2012) The new combined satellite only model GOCO03s. Presentation at GGHS 2012, Venice, October 2012 Nerger L, Hiller W (2013) Software for ensemble-based data assimilation systems: implementation strategies and scalability. 
Comput Geosci 55:110–118 Nerger L, Janjić T, Schröter J, Hiller W (2012a) A regulated localization scheme for ensemble-based Kalman filters. Q J R Meteorol Soc 138:802–812 Nerger L, Janjić T, Schröter J, Hiller W (2012b) A unification of ensemble square root Kalman filters. Month Weather Rev 140:2335–2345 Pail R (2015) Globale Schwerefeldmodellierung am Beispiel von GOCE. In: Freeden W, Rummel R (eds) Handbuch der Geodasie. Springer Reference Naturwissenschaften, Springer Spektrum, Berlin, Heidelberg Pakanowski R, Philander S (1981) Parametrization of vertical mixing in numerical models of tropical oceans. J Phys Oceanogr 11:1443–1451 Pavlis NK, Holmes SA, Kenyon SC, Factor JK (2012) The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). J Geophys Res 117:B04406 Pham DT (2001) Stochastic methods for sequential data assimilation in strongly nonlinear systems. Month Weather Rev 129:1194–1207 Redi MH (1982) Oceanic isopycnal mixing by coordinate rotation. J Phys Oceanogr 12(10):1154–1158 Reigber C, Balmino G, Schwintzer P, Biancale R, Bode A, Lemoine J-M, König R, Loyer S, Neumayer H, Marty J-C, Barthelmes F, Perosanz F, Zhu SY (2002) A high-quality global gravity field model from CHAMP GPS tracking data and accelerometry (EIGEN-1S). Geophys Res Lett 29(14):1–4 Rintoul SR, Sokolov S (2001) Baroclinic transport variability of the antarctic circumpolar current south of australia (WOCE repeat section SR3). J Geophys Res 106:2795–2814 Rio M-H, Hernandez F (2004) A mean dynamic topography computed over the world ocean from altimetry, in situ measurements, and a geoid model. J Geophys Res Oceans 109(C12):C12032 Rio M-H, Pascual A, Poulain P-M, Menna M, Barcelo B, Tintore J (2014) Computation of a new mean dynamic topography for the Mediterranean Sea from model outputs, altimeter measurements and oceanographic in situ data. Ocean Sci 10:731–744. 
https://doi.org/10.5194/os-10-731-2014 Rummel R (1999) Bright prospects for a significant improvement of the earth's gravity field knowledge. Marine Geod 23:219–220 Sakov P, Counillon F, Bertino L, Lisæter KA, Oke PR, Korablev A (2012) TOPAZ4: an ocean-sea ice data assimilation system for the North Atlantic and Arctic. Ocean Sci 8:633–656 Schröter J, Losch M, Sloyan B (2002) Impact of the gravity field and steady-state ocean circulation explorer (GOCE) mission on ocean circulation estimates 2. Volume and heat transports across hydrographic sections of unequally spaced stations. J Geophys Res Oceans 107(C2):4-1–4-20 Sheng J, Greatbatch R, Wright D (2001) Improving the utility of ocean circulation models through adjustment of the momentum balance. J Geophys Res 106:16711–16728 Skachko S, Danilov S, Janjić T, Schröter J, Sidorenko D, Savcenko R, Bosch W (2008) Sequential assimilation of multi-mission dynamical topography into a global finite-element ocean model. Ocean Sci 4(4):307–318 Seufer V, Schröter J, Wenzel M, Keller W (2003) Assimilation of altimetry and geoid data into a global ocean model. In: Reigber C, Lühr H, Schwintzer P (eds) First CHAMP mission results for gravity, magnetic and atmospheric studies. Springer, Berlin, pp 187–192 Stammer D (1997) Geosat data assimilation with application to the eastern north Atlantic. J Phys Oceanogr 27(1):40–61 Stammer D, Wunsch C, Giering R, Eckerts C, Heimbach P, Marortzke J, Adcroft A, Hill C, Marshall J (2002) The global ocean circulation during 1992–1997, estimated from ocean observations and a general circulation model. J Geophys Res 107(C9):3001. https://doi.org/10.1029/2001JC000888 Stammer Detlef, Köhl Armin, Wunsch Carl (2007) Impact of accurate geoid fields on estimates of the ocean circulation. J Atmos Oceanogr Technol 24(8):1464–1478. https://doi.org/10.1175/JTECH2044.1 Stommel H (1956) On the determination of the depth of no meridional motion. 
Deep Sea Res 3(4):273–278 Talagrand O, Courtier P (1987) Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Q J R Meteorol Soc 113:1311–1328 Timmermann R, Beckmann A (2004) Parameterization of vertical mixing in the Weddell sea. Ocean Model 6(1):83–100 Timmermann R, Danilov S, Schröter J, Böning C, Sidorenko D, Rollenhagen K (2009) Ocean circulation and sea ice distribution in a finite element global sea ice-ocean model. Ocean Model 27(3–4):114–129 Verron J (1992) Nudging satellite altimeter data into quasi-geostrophic ocean models. J Geophys Res 97(C5):7479–7491 Wang Q, Danilov S, Schröter J (2008) Finite element ocean circulation model based on triangular prismatic elements with application in studying the effect of topography representation. J Geophys Res 113:C05015 Wang Q, Danilov S, Sidorenko D, Timmermann R, Wekerle C, Wang X, Jung T, Schröter J (2014) The finite element sea ice-ocean model (fesom) v. 1.4: formulation of an ocean general circulation model. Geosci Model Dev 7:663–693 Wenzel M, Schröter J, Olbers D (2001) The annual cycle of the global ocean circulation as determined by 4D VAR data assimilation. Prog Oceanogr 48:73–119 Wunsch C (1978) The North Atlantic general circulation west of \(50^\circ \text{ w }\) determined by inverse methods. Rev Geophys Space Phys 16:583–620 Wunsch C, Gaposchkin EM (1980) On using satellite altimetry to determine the general circulation of the oceans with application to geoid improvement. Rev Geophys 18:725–745 This study was supported by the Deutsche Forschungsgemeinschaft (DFG) priority programmes SPP1257 (Massentransporte) and 1788 (Dynamic Earth). 
Androsov, A., Nerger, L., Schnur, R. et al. On the assimilation of absolute geodetic dynamic topography in a global ocean model: impact on the deep ocean state. J Geod 93, 141–157 (2019). https://doi.org/10.1007/s00190-018-1151-1
Weil-Petersson Metric on the Universal Teichmüller Space
Leon A. Takhtajan and Lee-Peng Teo
American Mathematical Society, series: Memoirs of the American Mathematical Society

In this memoir, we prove that the universal Teichmüller space $T(1)$ carries a new structure of a complex Hilbert manifold and show that the connected component of the identity of $T(1)$ - the Hilbert submanifold $T_{0}(1)$ - is a topological group. We define a Weil-Petersson metric on $T(1)$ by Hilbert space inner products on tangent spaces, compute its Riemann curvature tensor, and show that $T(1)$ is a Kähler-Einstein manifold with negative Ricci and sectional curvatures. We introduce and compute Mumford-Miller-Morita characteristic forms for the vertical tangent bundle of the universal Teichmüller curve fibration over the universal Teichmüller space. As an application, we derive Wolpert curvature formulas for the finite-dimensional Teichmüller spaces from the formulas for the universal Teichmüller space. We study in detail the Hilbert manifold structure on $T_{0}(1)$ and characterize points on $T_{0}(1)$ in terms of Bers and pre-Bers embeddings by proving that the Grunsky operators $B_{1}$ and $B_{4}$, associated with the points in $T_{0}(1)$ via conformal welding, are Hilbert-Schmidt. We define a 'universal Liouville action' - a real-valued function ${\mathbf S}_{1}$ on $T_{0}(1)$ - and prove that it is a Kähler potential of the Weil-Petersson metric on $T_{0}(1)$. We also prove that ${\mathbf S}_{1}$ is $-\tfrac{1}{12\pi}$ times the logarithm of the Fredholm determinant of the associated quasi-circle, which generalizes classical results of Schiffer and Hawley.
We define the universal period mapping $\hat{\mathcal{P}}: T(1)\rightarrow\mathcal{B}(\ell^{2})$ of $T(1)$ into the Banach space of bounded operators on the Hilbert space $\ell^{2}$, prove that $\hat{\mathcal{P}}$ is a holomorphic mapping of Banach manifolds, and show that $\hat{\mathcal{P}}$ coincides with the period mapping introduced by Kirillov and Yuriev and by Nag and Sullivan. We prove that the restriction of $\hat{\mathcal{P}}$ to $T_{0}(1)$ is an inclusion of $T_{0}(1)$ into the Segal-Wilson universal Grassmannian, which is a holomorphic mapping of Hilbert manifolds. We also prove that the image of the topological group $S$ of symmetric homeomorphisms of $S^{1}$ under the mapping $\hat{\mathcal{P}}$ consists of compact operators on $\ell^{2}$. The results of this memoir were presented in our e-prints: Weil-Petersson metric on the universal Teichmüller space I. Curvature properties and Chern forms, arXiv:math.CV/0312172 (2003), and Weil-Petersson metric on the universal Teichmüller space II. Kähler potential and period mapping, arXiv:math.CV/0406408 (2004).
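For orientation, the Weil-Petersson inner product described above can be written explicitly in one standard normalisation (a sketch only; the memoir's own conventions may differ by overall constants). Identifying the tangent space at the identity of $T_{0}(1)$ with a completion of the space of smooth vector fields $u = \sum_{n} c_{n} e^{in\theta}\,\partial_{\theta}$ on the circle, the metric takes the diagonal Kirillov form
$$\langle u, u \rangle_{WP} = \sum_{n=2}^{\infty} \left(n^{3} - n\right) |c_{n}|^{2},$$
so tangent vectors must have $H^{3/2}$-type regularity; this convergence requirement is what singles out the Hilbert space inner products on tangent spaces mentioned in the blurb.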
Communications Physics

Optomechanical resonator-enhanced atom interferometry

Logan L. Richardson, Ashwin Rajagopalan, Henning Albers, Christian Meiners, Dipankar Nath, Christian Schubert, Dorothee Tell, Étienne Wodey, Sven Abend, Matthias Gersemann, Wolfgang Ertmer, Ernst M. Rasel, Dennis Schlippert, Moritz Mehmet, Lee Kumanchik, Luis Colmenero, Ruven Spannagel, Claus Braxmaier & Felipe Guzmán

Communications Physics volume 3, Article number: 208 (2020)

Matter-wave interferometry and spectroscopy of optomechanical resonators offer complementary advantages. Interferometry with cold atoms is employed for accurate and long-term stable measurements, yet it is challenged by its dynamic range and cyclic acquisition.
Spectroscopy of optomechanical resonators features continuous signals with large dynamic range, however it is generally subject to drifts. In this work, we combine the advantages of both devices. Measuring the motion of a mirror and matter waves interferometrically with respect to a joint reference allows us to operate an atomic gravimeter in a seismically noisy environment otherwise inhibiting readout of its phase. Our method is applicable to a variety of quantum sensors and shows large potential for improvements of both elements by quantum engineering. The ability to coherently manipulate massive particles by means of interaction with light has given rise to a variety of realisations of matter-wave interferometers. Today, these are widely employed in metrology and tests of fundamental physics1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16. Especially the class of interferometers based on light pulses, first pioneered by Kasevich and Chu17, finds a broad range of applications in inertial sensing18,19,20,21,22,23,24,25,26,27,28,29. In these measurements, the phase reference is usually realised by a mirror retroreflecting the light pulses towards the matter waves. Inertial effects acting on the matter waves and on the phase reference are indistinguishable. As a result, seismic noise contributes significantly to the instability of atomic inertial sensors, and is even the dominant noise source in state-of-the-art matter-wave gravimeters19,21. As a countermeasure, besides vibration isolation systems, commercial sensors have been exploited to track the motion of atom interferometers' inertial references, thus extending the measurement dynamic range, suppressing vibration noise30,31,32,33,34,35, and allowing for comparisons with other atom interferometers21,36,37. 
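The way mirror motion maps into interferometer phase can be sketched with the textbook leading-order model of a three-pulse interferometer, which reads out the combination x(t1) - 2*x(t2) + x(t3) of the atom-to-retroreflector distance at the pulse times. This is an illustrative model, not the authors' code; the k_eff value is an assumed two-photon wave number for 87Rb (~4*pi/780 nm).

```python
import math

# Illustrative leading-order model: a three-pulse interferometer reads out
# x(t1) - 2*x(t2) + x(t3), scaled by the effective wave number k_eff.
k_eff = 1.611e7   # 1/m, assumed two-photon wave number for 87Rb (~4*pi/780 nm)
T = 10e-3         # s, pulse separation used in the experiment

def phase(x, t0=0.0):
    """Interferometer phase for a mirror/atom trajectory x(t), pulses at t0, t0+T, t0+2T."""
    return k_eff * (x(t0) - 2.0 * x(t0 + T) + x(t0 + 2.0 * T))

# A uniform acceleration reproduces the k_eff * a * T^2 scaling ...
a = 9.81
uniform = phase(lambda t: 0.5 * a * t * t)

# ... while a slow sinusoidal mirror vibration adds a phase that depends on its
# timing relative to the pulses, which is why the mirror motion must be tracked.
vib = phase(lambda t: 1e-8 * math.sin(2.0 * math.pi * 1.0 * t + 0.3))

print(round(uniform / (k_eff * a * T**2), 6), abs(vib) > 0.0)
```

Because both the atoms and the vibrating mirror enter through the same combination, an acceleration of the retroreflector is indistinguishable from one acting on the atoms, as stated above.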
In recent years, developments in the quantum engineering of optomechanical resonators have yielded devices with exciting applications in fields such as quantum information, fundamental physics and, likewise, in inertial sensing38,39. To this end, optically reading out length variations of an optical cavity allows determination of acting accelerations with high bandwidth and resolution. In addition, interfaces between cold atoms and micromechanical cantilevers40 or nanomembranes41 have been demonstrated, showing the potential of these hybrid systems for fundamental physics. Here, we combine a high-bandwidth optomechanical resonator with a long-term stable light-pulse atom interferometer, and measure the accelerations of the resonator's test mass and a freely falling cloud of atoms relative to the atom interferometer's inertial reference. Our atom interferometer measures gravity under strong seismic perturbations without loss of phase information. In contrast to previous approaches, our method merges two systems both benefiting from the large toolbox of methods usually exploited in optical spectroscopy and photonics into a highly customisable device for atom interferometry in rough environments. We operate a Kasevich–Chu interferometer17 combined with an optomechanical resonator attached to the mirror providing the phase reference for the interferometer as a gravimeter (Fig. 1a, b and details in the "Methods" section). Ambient vibrational noise couples to the retroreflector at a weighted acceleration level of 3 mm s−2 per cycle. This leads to phase excursions exceeding a single fringe during one interferometric measurement with the readout appearing to be random due to the underlying 2π phase ambiguity (Fig. 1c). Accordingly, the atom interferometer signal, i.e., the relative population of the two ports, features a bimodal distribution visible in Fig. 2a. 
However, ambient vibrational noise also results in a displacement of the resonator test mass, which is recorded by the signal retroreflected from the optomechanical resonator (Fig. 1d). The records of the optomechanical resonator make it possible to reconstruct the atomic interference pattern (Fig. 2b). The signal from the optomechanical resonator is constrained to the band of interest. We apply high-pass filters at 0.8 Hz to suppress low-frequency drifts, as well as a digital low-pass filter at 50 Hz, the atom interferometer's corner frequency (see "Methods" section). We subsequently sample the signal digitally over 60 ms centred around the central light pulse of each interferometer cycle. The phase correction is finally calculated from the signal utilising the acceleration sensitivity function describing the atom interferometer's phase response42,43. Residual systematic biases can be experimentally analysed as shown for commercial sensors in ref. 31.

Fig. 1: Representation of measurement principle. a Schematic of the experimental setup (not to scale) comprising an optomechanical resonator (OMR) enhancing the optical mode between the flat end of a polarisation-maintaining fibre (light blue) and a side face of the test mass, as detailed in the enlarged view, and a Mach–Zehnder-type atom interferometer measuring the gravitational acceleration g, and b its spacetime diagram. The sensor is attached to a retroreflection mirror which rests in strapdown configuration on a platform on the laboratory floor. c Ambient vibrations drive the retroreflector's motion beyond the reciprocal range of the atom interferometer and thus obscure the correspondence between phase and population in the ports 1 and 2. An interferometric fringe can be restored by measuring d the test mass motion of the optomechanical resonator and digitally convolving with the atom interferometer's acceleration response function.
Measurement intervals of both devices, of durations t_OMR and 2T, are synchronised (details in the "Methods" section). Typical resonator signals are shown before (blue solid line) and after (blue dashed line) low-pass filtering (LPF), which removes the dominant mechanical resonance at 678.5 Hz.

Fig. 2: Exemplary measurement data under vibrational noise. a Histogram distribution of the normalised interferometer output ports, b a fringe (orange circles) recovered by post-correction based on the resonator signal, and a sinusoidal fit to the corrected data (orange solid line) for a pulse separation time T = 10 ms. c Ambient vibrations with a Gaussian 1/e width of 3.2 mm s−2, if uncorrected, obscure the phase information of the atom interferometer. By convolving the recorded time series of the resonator signal with the interferometer acceleration response function we can recover each data point's phase information and reconstruct the fringe pattern. Each histogram and the corresponding interferometer response represent a segment of 500 data points.

Using our method, we measure the local gravitational acceleration g in an approximately 22 h-long, interruption-free measurement series otherwise impossible when operating both sensors alone. By suppressing vibrational noise, our sensor fusion method improves the overall short-term stability by a factor of 8 (Fig. 3). Figure 4 illustrates the present and projected features of the atom interferometer, the optomechanical resonator, and the combination of both.

Fig. 3: Analysis of instrument stability. Allan deviations σa of estimated ambient noise (blue triangles) and the measured gravitational acceleration from post-corrected data (orange diamonds) as a function of integration time τ. Displayed error bars mark 1 σ confidence intervals. Post correction improves the instability at 1 s by a factor of 8 and, here, reduces the measurement time necessary to achieve a target instability by a factor of 64.
The dashed lines reflect the gain by averaging. We estimate the ambient noise from the phase correction data obtained in our post correction (see "Methods" section).

Fig. 4: Sensitivity of sensor fusion setup. Current and anticipated acceleration linear (acc. lin.) spectral density of an atom interferometer enhanced by an optomechanical resonator. Current intrinsic performance (blue dash-dotted line) is achieved using a T = 10 ms interferometer with cycle frequency fc = 0.6 Hz, momentum separation of two photon recoils, and residual technical noise of 30 mrad. It can be improved with a pulse separation time T = 35 ms, a cycle frequency fc = 1 Hz, momentum separation of eight photon recoils, and residual phase noise of 3 mrad (orange dash-dotted line). Similarly, the current optomechanical resonator acceleration sensitivity at quiet conditions (solid blue curve), with a resonance of 678.5 Hz and optical finesse 2, can be optimised by choosing a resonance frequency of 1500 Hz and finesse of 1600 (solid orange curve). Dashed lines indicate the intrinsic noise of the optomechanical resonator weighted by the sensitivity function of the atom interferometer and high-pass filtered at 0.8 Hz (1 Hz) for the current (advanced) scenario to suppress additional noise in the low-frequency band. The shaded areas bounded by fc (discs) and the atom interferometer's corner frequency (triangles) mark the respective dominant frequency bands most relevant for optimal post-correction of seismic noise, and motivate high-pass filtering the resonator signal (light blue and orange solid lines).

Our optomechanical resonator features a sensitivity comparable to commercial accelerometers, and we expect a large potential for improvements for both quantum-optical devices. The sensor fusion performance is nevertheless limited by low-frequency noise (Fig. 4, solid blue trace). It displays an RMS white acceleration noise of 1 × 10−5 m s−2 Hz−1/2 between 10 and 50 Hz.
Pink noise (∝ f−1) dominates from 1 to 10 Hz. Below 1 Hz, Brownian noise (∝ f−2) processes, mainly caused by the optical fibre employed for interrogating the resonator, prevail. Since the resonator's sensitivity to accelerations increases quadratically with decreasing mechanical resonance frequency and linearly with optical finesse, there is room for improvements by trading sensitivity against larger bandwidth and dynamic range. We foresee an optimisation of the hybrid sensor (Fig. 4, solid orange trace) by tuning the resonance frequency to 1500 Hz to increase the bandwidth, improving the optical finesse to 1600 by a high-reflectivity coating38, and improving the readout by an order of magnitude as compared to ref. 38 by means of spectroscopy techniques developed for ultrastable resonators, e.g., Pound–Drever–Hall locking44,45. Millimetre-sized optomechanical resonators have already demonstrated sensitivities of 1 × 10−6 m s−2 Hz−1/2 over bandwidths up to 12 kHz. Additionally, pathways exist for future atom interferometers customised for gravimetry, as discussed in ref. 46. The sensitivity can be enhanced by operating the device with T = 35 ms, a cycle rate of 1 Hz, higher-order Bragg processes transferring \(4\cdot {\overrightarrow{k}}_{\mathrm{eff}}\), and a reduced phase noise of 3 mrad. By improving the atom interferometer and tuning the optomechanical resonator, it is plausible that the intrinsic noise can be lowered to 6 × 10−8 m s−2 Hz−1/2 under seismic noise as described in ref. 47. This target performance is comparable to the noise obtained in a quiet environment with an active vibration isolation19 and outperforms transportable, commercial devices23.
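The trade-off quoted above can be made concrete with a back-of-the-envelope sketch based only on the stated scaling relations (mechanical responsivity ∝ 1/ω0², optical readout gain assumed linear in finesse); the numbers are the device parameters quoted in the text.

```python
import math

# Test-mass displacement per unit acceleration at DC scales as 1/omega_0^2.
def dc_responsivity(f0_hz):
    """Displacement per acceleration (m per m/s^2) of an ideal oscillator at DC."""
    w0 = 2.0 * math.pi * f0_hz
    return 1.0 / w0**2

current = dc_responsivity(678.5)    # current resonance frequency
advanced = dc_responsivity(1500.0)  # proposed higher-bandwidth device

# Raising the resonance from 678.5 Hz to 1500 Hz costs (1500/678.5)^2 ~ 4.9x
# in mechanical responsivity ...
mech_loss = current / advanced

# ... which the proposed finesse increase (2 -> 1600) more than recovers in
# optical readout gain, under the linear-in-finesse assumption.
optical_gain = 1600.0 / 2.0

print(round(mech_loss, 1), optical_gain)
```

This is why the text can propose a higher resonance frequency (more bandwidth and dynamic range) without a net sensitivity penalty: the quadratic mechanical loss is small compared with the available optical gain.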
The resonator can be implemented directly into inertial reference mirrors under vacuum, thus improving the mechanical quality factor while supporting miniaturisation of the overall setup. It does not emit notable heat, and is nonmagnetic. Consequently, it does not induce systematic errors due to black body radiation50 or due to spurious magnetic fields coupling to the matter waves30,51, and neither do external magnetic fields couple to the resonator test mass. Moreover it can be easily merged with the retroreflection mirror of the atom interferometer. In addition, devices with different resonance frequencies or different orientations will grant access to larger bandwidth and multiple sensitive axes. Last but not least, the small volume of cubic millimetres offers great prospects for being integrated on atom chip sensors46, and, hence, a large potential for miniaturisation of the sensor head. Our method shares analogies with atomic clocks by hybridisation of long-term and short-term references. Beyond this, our optical sensor might be used for compensating inertial noise in the resonators of optical clocks, e.g., in transportable setups52. In conclusion, we have demonstrated an atom interferometer enhanced by an optomechanical resonator. We show operation of the atom interferometer under circumstances otherwise impeding phase measurements. Inertial forces on the atoms and on the resonator mirror are measured to the same reference permitting a direct comparison, and high common mode noise suppression in the differential signal. Our method is not restricted to atomic gravimeters and could be beneficial to nearly all atom interferometric sensors and even improve laser interferometers53 in environments with large inertial noise, thus replacing bulky vibration isolation and motion sensors. In particular, the achievable large dynamic range opens up great perspectives for the use of atomic sensors for inertial navigation35,54 and airborne gravimetry55. 
Finally, a possible modification of our setup's topology would ensure that the atom-optics light field is reflected directly off a micromechanical test mass. In future experiments, we envisage exciting research on coherent light-mediated coupling of matter waves and mechanical systems41,56 with pulsed instead of cw-interaction.

Atom interferometer

Our setup (Fig. 1a, b), which was employed as a differential gravimeter in refs. 6,57, comprises a Kasevich–Chu interferometer17. In a π/2 − π − π/2 pulse sequence, stimulated two-photon Raman transitions coherently split, redirect, and recombine matter waves of 87Rb. The interferometer phase is determined by measuring the number of atoms in output ports 1 and 2 with state-selective fluorescence detection. To leading order, a constant acceleration \(\overrightarrow{a}\) of the atoms induces a phase shift $$\Delta \phi = \overrightarrow{k}_{\mathrm{eff}} \cdot \overrightarrow{a} \, T^{2},$$ where \(\hslash \overrightarrow{k}_{\mathrm{eff}}\) is the photon recoil transferred to the atoms via a Raman process, and T denotes the time between two subsequent light pulses. A chirp of the relative frequency of the lasers cancels the phase induced by the acceleration and is a measure of the latter. The atom interferometer's response to vibrational noise can be described using the sensitivity formalism42. Notably, the response is flat in a band between DC and the corner frequency 1/(2T), above which it features a low-pass behaviour. Typically, the interferometer's response is adjusted by varying T such that ambient noise induces phase shifts well within one fringe. At quiet conditions, i.e., with an operating vibration isolation system, the interferometer, which we operate at a cycle rate of 0.6 Hz, features a fringe contrast of ≈30% and a Raman phase-locked-loop-limited phase noise of 30 mrad for a time T = 10 ms.
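Plugging representative numbers into the phase formula shows why the phase is so sensitive: even at the short T = 10 ms used here, gravity accumulates tens of thousands of radians. The k_eff value below is an assumed two-photon wave number for counter-propagating Raman beams on the 87Rb D2 line (~780 nm), not a number quoted in the paper.

```python
import math

# Leading-order Mach-Zehnder phase: dphi = k_eff * a * T^2.
lam = 780e-9                        # m, 87Rb D2 wavelength (approximate)
k_eff = 2.0 * (2.0 * math.pi / lam) # 1/m, two photon recoils (assumed)
g = 9.81                            # m/s^2
T = 10e-3                           # s, pulse separation used in the experiment

dphi = k_eff * g * T**2             # rad accumulated due to gravity

# Chirp rate of the relative laser frequency that cancels the gravity phase:
# alpha = k_eff * g / (2*pi), in Hz/s.
alpha = k_eff * g / (2.0 * math.pi)

print(f"{dphi:.0f} rad, chirp {alpha / 1e6:.1f} MHz/s")
```

A phase of order 10^4 rad for T = 10 ms makes clear why mm s−2-level vibrations (radian-level phase excursions) already wrap the fringe, and why the readout becomes ambiguous without the resonator's phase record.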
In order to suppress systematic shifts independent of the direction of momentum transfer we use the k-reversal method58,59. The interferometer measures ten times in each direction of momentum transfer over a period of 18 s. For each scattering direction, we create histograms out of the normalised output population of the interferometer. Hereby each individual histogram comprises data accumulated over a 9000 s period. From each histogram, we extract the interferometer response's amplitude and offset60 as shown in Fig. 2. We subsequently extract g for intervals of 180 s (approximately 50 shots per direction) by fitting the postcorrected data to the expected atom interferometer response using the parameters determined by the histogram fit, solely leaving the interferometer phase as a free parameter. After 18,000 s integration we determine a local gravitational acceleration of 9.812675 m s−2 with a standard uncertainty of 4 × 10−6 m s−2. We estimate the ambient noise in Fig. 3 from the phase corrections made during postcorrection. Accordingly, the data resembles the underlying ambient acceleration noise after weighting it with the atom interferometer's transfer function. The first uncorrected value of ≈3 mm s−2 per cycle also manifests in the 1/e width of the Gaussian-shaped spread in Fig. 2c.

Optomechanical sensor

Mirrors forming the optomechanical resonator, which has a volume on the order of a few hundred mm3, are made from the flat tip of a polarisation-maintaining fibre and a side face of the fused silica test mass supported by a stiff u-shaped flexible mount, the cantilever (Fig. 1a), following the design of ref. 61. Our sensor features an optical finesse of about two, a resonance frequency of ω0 = 2π · 678.5 Hz, and a mechanical quality factor of Q = 630. Due to its stiffness, the optomechanical resonator can be described as an ideal harmonic oscillator.
Below the resonance frequency, displacement of the test mass X as a function of vibration frequency ω linearly depends on the acting acceleration A, $$\frac{X(\omega)}{A(\omega)} = -\frac{1}{\omega_{0}^{2} - \omega^{2} + i\,\frac{\omega_{0}}{Q}\,\omega},$$ and the response is therefore flat. By means of more advanced data analysis, the upper limit of the usable bandwidth can be extended beyond the mechanical resonance. Using adhesive bonding, the resonator is attached to a two-inch square mirror retroreflecting the light pulses driving the atom interferometer. The resonator's acceleration-sensitive axis is aligned collinearly with the retroreflector's normal vector (Fig. 1a) by orienting the outer edges of both devices parallel. The motion of the test mass is read out with a fibre-based optical setup based on telecom components comprising a tunable laser operating at a wavelength near 1560 nm protected by an optical isolator (Fig. 5). The sensor has a quarter-wave plate incorporated in its lead fibre, thereby enabling us to separate the signal reflected off the resonator using a polarising beam splitter. Additionally, a small fraction of the laser light is split off before the resonator using a 90:10 splitter. Making use of differential data acquisition of photo detectors PD 1 and 2, we can therefore cancel common-mode laser intensity noise on the resonator signal. Finally, the processed signal depends on the transmission of the optomechanical resonator and hence the distance between the two reflective surfaces. Consequently, it is a direct measure of the acting acceleration. The optomechanical resonator and the retroreflection mirror are operated under normal atmospheric conditions. We place the mirror with the resonator attached onto a solid aluminium plate in strapdown configuration on the laboratory floor.
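The flat-below-resonance behaviour follows directly from evaluating the transfer function above at the quoted device parameters; a minimal numerical check (illustrative only):

```python
import math

# Harmonic-oscillator response of the resonator test mass:
# X/A = -1 / (w0^2 - w^2 + i*(w0/Q)*w), with the quoted device parameters.
f0, Q = 678.5, 630.0
w0 = 2.0 * math.pi * f0

def response(f_hz):
    """Complex displacement-per-acceleration transfer function X(w)/A(w)."""
    w = 2.0 * math.pi * f_hz
    return -1.0 / (w0**2 - w**2 + 1j * (w0 / Q) * w)

dc = abs(response(0.01))   # essentially DC
low = abs(response(50.0))  # the atom interferometer's corner frequency
res = abs(response(f0))    # on mechanical resonance

# Below resonance the magnitude is flat, so the optical readout is a direct
# measure of acceleration; on resonance it is enhanced by roughly Q, which is
# why the 678.5 Hz peak is low-pass filtered out of the correction signal.
print(low / dc, res / dc)
```

At 50 Hz the magnitude deviates from its DC value by well under 1%, consistent with treating the readout as acceleration across the whole band used for post-correction.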
During the initialisation of the optomechanical sensor, a commercial force-balance accelerometer (Nanometrics Titan) was utilised to perform test measurements for comparison as well as post correction for reference purposes.

Fig. 5: Displacement readout system for the optomechanical resonator. Laser light split off for intensity noise correction and light reflected back from the optomechanical resonator (OMR) are detected on photo diodes (PD 1 & 2). Both signals are passed to current-to-voltage converters (I/V) followed by high-pass filters (HPF) at 0.8 Hz and are subsequently enhanced by amplifiers (Amp). Finally, the difference signal is digitally low-pass filtered (LPF) at 50 Hz.

The data used in this manuscript are available from the corresponding author upon reasonable request.

Fixler, J. B., Foster, G. T., McGuirk, J. M. & Kasevich, M. A. Atom interferometer measurement of the Newtonian constant of gravity. Science 315, 74 (2007). Rosi, G., Sorrentino, F., Cacciapuoti, L., Prevedelli, M. & Tino, G. M. Precision measurement of the Newtonian gravitational constant using cold atoms. Nature 510, 518 (2014). Spagnolli, G. et al. Crossing over from attractive to repulsive interactions in a tunneling Bosonic Josephson junction. Phys. Rev. Lett. 118, 230403 (2017). Fray, S., Diez, C. A., Hänsch, T. W. & Weitz, M. Atomic interferometer with amplitude gratings of light and its applications to atom based tests of the equivalence principle. Phys. Rev. Lett. 93, 240404 (2004). Bonnin, A., Zahzam, N., Bidel, Y. & Bresson, A. Simultaneous dual-species matter-wave accelerometer. Phys. Rev. A 88, 043615 (2013). Schlippert, D. et al. Quantum test of the universality of free fall. Phys. Rev. Lett. 112, 203002 (2014). Tarallo, M. G. et al. Test of Einstein equivalence principle for 0-spin and half-integer-spin atoms: search for spin-gravity coupling effects. Phys. Rev. Lett. 113, 023005 (2014). Zhou, L. et al.
We thank H. Ahlers, J. Lautier-Gaud, L. Timmen, J. Müller, S. Herrmann, S. Schön, K. Hammerer, D. Rätzel, P. Haslinger, and M. Aspelmeyer for comments and fruitful discussions and acknowledge financial support from Deutsche Forschungsgemeinschaft (DFG) within CRC 1128 (geo-Q), projects A02, A06, and F01 and CRC 1227 (DQ-mat), project B07, and under Germany's Excellence Strategy—EXC-2123 QuantumFrontiers—390837967 (research unit B02). D.S. acknowledges support by the Federal Ministry of Education and Research (BMBF) through the funding program Photonics Research Germany under contract number 13N14875.
This project is furthermore supported by the German Space Agency (DLR) with funds provided by the Federal Ministry for Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant No. DLR 50WM1641 (PRIMUS-III), 50WM1137 (QUANTUS-IV-Fallturm), and 50RK1957 (QGYRO), and by "Niedersächsisches Vorab" through the "Quantum-metrology and Nano-Metrology (QUANOMET)" initiative within the project QT3, and by "Wege in die Forschung (II)" of Leibniz University Hannover. Open Access funding enabled and organized by Projekt DEAL. Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167, Hannover, Germany Logan L. Richardson, Ashwin Rajagopalan, Henning Albers, Christian Meiners, Dipankar Nath, Christian Schubert, Dorothee Tell, Étienne Wodey, Sven Abend, Matthias Gersemann, Wolfgang Ertmer, Ernst M. Rasel & Dennis Schlippert College of Optical Sciences, University of Arizona, Tucson, AZ, 85721, USA Logan L. Richardson & Felipe Guzmán German Aerospace Center (DLR) – Institute for Satellite Geodesy and Inertial Sensing, Hannover, Germany Christian Schubert & Wolfgang Ertmer Leibniz Universität Hannover, Institut für Gravitationsphysik / Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Callinstraße 38, 30167, Hannover, Germany Moritz Mehmet German Aerospace Center (DLR) – Institute of Space Systems, Bremen, Germany Lee Kumanchik, Luis Colmenero, Ruven Spannagel, Claus Braxmaier & Felipe Guzmán University of Bremen – Center of Applied Space Technology and Microgravity (ZARM), Robert-Hooke-Straße 7, 28359, Bremen, Germany Department of Aerospace Engineering & Physics, Texas A&M University, College Station, TX, 77843, USA Felipe Guzmán Logan L. Richardson Ashwin Rajagopalan Henning Albers Christian Meiners Dipankar Nath Christian Schubert Dorothee Tell Étienne Wodey Sven Abend Matthias Gersemann Wolfgang Ertmer Ernst M. 
Rasel Dennis Schlippert Lee Kumanchik Luis Colmenero Ruven Spannagel Claus Braxmaier W.E., E.M.R., C.S., and D.S. designed the atom interferometer and its laser system. L.L.R., H.A., D.N., and D.S. contributed to the design of the atom interferometer and its laser system and realised the overall setup. A.R., M.M., L.K., L.C., R.S., C.B., and F.G. designed, built, and tested the optomechanical resonator and designed the readout laser system. A.R., C.M., D.T., and É.W. built and characterised the laser system for readout. L.L.R., F.G., L.K., and A.R. implemented the optomechanical resonator in the atom interferometer setup. L.L.R., H.A., D.N., and A.R. operated the final experimental setup. Sv.A., M.G., and C.S. contributed to the data acquisition system utilised for post correction. A.R., D.N., C.S., and L.L.R. performed the analysis of the data presented in this manuscript. L.L.R., D.S., and F.G. drafted the initial manuscript. A.R., C.M., C.S., D.T., É.W., E.M.R., and L.K. provided major input to the manuscript and all authors critically reviewed and approved of the final version. Correspondence to Dennis Schlippert or Felipe Guzmán. Related to the patent Optomechanical Inertial Reference for Atom Interferometers (WO 2020/168314 A1) filed by the University of Arizona (UA), L.L.R. (UA affiliate), F.G. (UA affiliate and inventor), E.M.R. (inventor), and D.S. (inventor) declare competing financial interests. The patent covers the use of opto-mechanical systems as inertial references for atom interferometry. All other authors declare no competing interests. Richardson, L.L., Rajagopalan, A., Albers, H. et al. Optomechanical resonator-enhanced atom interferometry. Commun Phys 3, 208 (2020). 
https://doi.org/10.1038/s42005-020-00473-4 Communications Physics (Commun Phys) ISSN 2399-3650 (online)
Degree Centrality The degree, or degree centrality, measures the number of edges connected or adjacent to a vertex $v\ |\ v \in V$ (Newman, 2010). In directed graphs such as ours, it can be split into in-degree and out-degree, reflecting the incoming edges to a node and the outgoing edges from a node (Newman, 2010). Since the degree measure counts the edges for a specific node, the scope of the measure is ego-centric: it shows how strongly connected a node is in terms of relationships with other nodes. This metric was proposed by Smith et al. (2009), Hacker et al. (2015), Viol et al. (2016), Angeletou et al. (2011) and Berger et al. (2014). The in-degree of a node $d_{in}(v_i)\ |\ v_i \in V$ is equal to the number of edges $e_k$ of the form $$e_k = (v_j,v_i) \qquad \text{ for all } e_k \in E \text{ and } v_j \in N.$$ The out-degree of a node $d_{out}(v_i)\ |\ v_i \in V$ is equal to the number of edges $e_k$ of the form $$e_k = (v_i,v_j) \qquad \text{ for all } e_k \in E \text{ and } v_j \in N.$$ The degrees can be calculated from the adjacency matrix: $$d_{in}(v_i) = \sum_{v_j \in V} a_{j,i} \qquad\qquad d_{out}(v_i) = \sum_{v_j \in V} a_{i,j}.$$ A problem with the degree is that its interpretation depends on the size of the network $g$. Therefore, to compare the degree across differently sized networks, Wasserman et al.
(1994) propose the following standardization, dividing by the maximum possible degree -- the network size $g$ minus one: $$d^{'}_{in}(v_i) = \frac{d_{in}(v_i)}{g-1} \qquad\qquad d^{'}_{out}(v_i) = \frac{d_{out}(v_i)}{g-1}.$$ With regard to social networks, Wasserman et al. (1994) define the in-degree as a measure of popularity, i.e. how many people sent at least one message to a user. The out-degree is defined as a measure of expansiveness, i.e. to how many people the user sent at least one message. Wasserman et al. (1994) state that a node with high degree centrality is recognized as a major channel of information. Newman (2010) adds that a user with a high degree centrality, and thus a high number of connections to others, may have more influence than users with a lower degree centrality. A high degree centrality is therefore an indicator for key users. This is also expressed by Berger et al. (2014), who claim that a high in-degree is distinctive of key users. According to Angeletou et al. (2011), a low in-degree indicates an elitist user. Such a user communicates with only a small group of other users, but has strong reciprocal interactions with those users. A high in-degree indicates a popular initiator and participant kind of user. This type of user contributes with high intensity, persistence and reciprocity to many other users (Angeletou et al., 2011). Elitists and popular users drive the discussion and increase the activity of a community (Angeletou et al., 2011), making information available and interactions feasible. Smith et al. (2009) associate a high degree centrality with answer persons and discussion persons, who seek to actively engage in other people's threads and participate in discussions of considerable length. Smith et al. (2009) describe these kinds of people as influencers, which aligns with other literature. Contrary to the influence indication, the degree metric is not a direct indicator of a user's performance according to Riemer et al. (2015).
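The adjacency-matrix formulas above can be sketched in a few lines of code; the matrix here is a made-up four-user example, not data from the platform:

```python
# Toy directed adjacency matrix (hypothetical, not platform data);
# a[i][j] = 1 means user i sent at least one message to user j.
a = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]

g = len(a)  # network size

# d_in(v_i)  = sum over j of a[j][i]  (column sums)
d_in = [sum(a[j][i] for j in range(g)) for i in range(g)]
# d_out(v_i) = sum over j of a[i][j]  (row sums)
d_out = [sum(a[i][j] for j in range(g)) for i in range(g)]

# Standardized degrees (Wasserman et al. 1994): divide by g - 1.
d_in_std = [d / (g - 1) for d in d_in]
d_out_std = [d / (g - 1) for d in d_out]
```

In this toy network, user 2 receives messages from all three other users, so its standardized in-degree is 1.0, the maximum possible — the "popular" pattern described above.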
Thus, an influential user does not automatically make a productive employee. If multiple users exhibit a high degree centrality, the result is a dense and cohesive network. A cohesive network structure with redundant relationships - also called "closure" (Riemer, 2005) - leads to the creation of Social Capital according to Coleman (1990). Characteristics of such a network include a collective mindset and effective norms, which result in Social Capital (Riemer, 2005). Cohesive networks and effective norms are required for cooperation and trust in networks (Riemer, 2005), which ultimately lead to superior performance.
Evidence for electrodynamics in curved spacetime Field theories in curved spacetime are usually formulated by integrating their Lagrangian over the curved spacetime. For example, for electrodynamics, we have the action $$ S = \int d^4x \left( -\frac{1}{4} F^{\alpha \beta}F_{\alpha \beta} \sqrt{-g} + A_\alpha J^\alpha \right) $$ It can also be straightforwardly coupled to gravity. The equation of motion can then be obtained using Hamilton's principle. While this is a natural framework from a theoretical point of view, I am not aware of any experimental / observational evidence supporting results obtained from such a formulation. Is there any empirical evidence for electrodynamics in curved spacetime? For the purpose of this question, only classical EM is of concern, although evidence for QED in a curved spacetime (if any) would be even better. This question is partly inspired by What is the most compelling evidence of General Relativity in the presence of matter and energy? electromagnetism general-relativity experimental-physics classical-electrodynamics qft-in-curved-spacetime Chenfeng — Related: physics.stackexchange.com/q/78600/50583 The question is now: Has anyone observed, for example, Hawking radiation? The main predictions seem to all relate to black hole dynamics, which is experimentally a bit...difficult to access. – ACuriousMind♦ Feb 11 '15 at 22:34 — @ACuriousMind I don't think Hawking radiation has been observed, and I agree that it'd be difficult to do so. However, I think classical EM in a curved spacetime is a weaker assertion than QED in curved spacetime, and may be easier to verify. – Chenfeng Feb 11 '15 at 22:44 — Maybe I'm just being dumb, but since when is the term $-A_\alpha\partial_\beta F^{\alpha\beta}$ in the EM Lagrangian?
– Ryan Unger Feb 11 '15 at 23:37 — Since light is EM radiation, gravitational lensing can be seen as another example of electrodynamics in curved spacetime. – Paul Feb 11 '15 at 23:49 — Begs the question: "Is there a measurable effect from the Sun's gravitational field on its magnetic field?" – Keith Feb 12 '15 at 0:18 Classical electrodynamics is certainly studied in curved spacetimes to understand real phenomena. What better place for gravity and electromagnetism to work together than the ionized, magnetized plasma surrounding an accreting black hole? In particular, we observe quasars with extremely powerful relativistic jets. Quasars are the supermassive black holes at the centers of galaxies when they are accreting matter and emitting copious quantities of light. Much of the emitted energy is in the form of jets, and it is natural to ask how the energy of the system is converted into this form. The most commonly believed answer is the Blandford–Znajek process, in which the coupling of a magnetic field to a rotating black hole extracts the rotational energy of the black hole itself. The original paper and those that follow it go into much more detail, but the simplest approach is to assume the plasma continuum has infinite electrical conductivity (ideal magnetohydrodynamics) and negligible mass (force-free). Magnetic fields are "frozen" into such a fluid, and the dragging of this fluid through the ergosphere leads to the effect. Indeed, the entire field of general relativistic magnetohydrodynamics (GRMHD) is based on coupling electrodynamics (and the evolution of fluids) to curved spacetime. Sometimes this is a one-way coupling to a stationary spacetime, which is sufficient for studying systems where the stress-energy is dominated by a nearby black hole. In other cases, such as studying core-collapse supernovae or neutron star mergers, the matter and EM field affect the dynamical evolution of spacetime itself.
Thus I'd say a broad swath of high-energy astrophysics is using (and therefore testing) electrodynamics on a curved spacetime on a daily basis. A very simple example of electromagnetism in curved spacetime is the observed bending of light due to gravitational fields. Usually this is presented as the statement that "photons follow null geodesics." This statement can be derived in a geometric-optics approximation to Maxwell's equations in curved spacetime (i.e. it is not just an additional postulate in GR). Assume that the potential has the form $$A_\mu(x) = \epsilon_\mu(x) A(x) e^{i \phi(x)},$$ where the polarization $\epsilon_\mu (x)$ and amplitude $A(x)$ are slowly varying compared to the phase $\phi(x)$. Then from Maxwell's equations (and an appropriate gauge condition for $A_\mu$) you can derive that the wave vector $\nabla_\mu \phi$ is a null vector that is also geodesic. The fact that we have observed gravitational lensing in many, many telescope images (like this happy guy) is a confirmation of electrodynamics in curved spacetime. asperanz
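As an illustrative back-of-the-envelope check (not part of the original answer), the standard GR deflection angle for a light ray grazing a spherical mass, $\alpha = 4GM/(c^2 b)$, evaluated for the Sun with impact parameter $b$ equal to the solar radius, reproduces the classic lensing value:

```python
import math

# Standard GR result for grazing incidence: alpha = 4*G*M / (c^2 * b).
G_M_SUN = 1.32712440018e20   # heliocentric gravitational constant GM [m^3 s^-2]
C = 299792458.0              # speed of light [m/s]
R_SUN = 6.957e8              # nominal solar radius [m]

alpha_rad = 4 * G_M_SUN / (C**2 * R_SUN)
alpha_arcsec = math.degrees(alpha_rad) * 3600
# about 1.75 arcseconds, the value famously confirmed by the 1919 eclipse expedition
```

The same geometric-optics limit discussed in the answer is what licenses treating the light as rays on null geodesics in this calculation.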
Geometric Series Sequences of lengths occur naturally, and with notable frequency they fall into the category of the Geometric Series. The sum of such a series of ratios is expressible by a compact formula, considered first for a finite number of terms and then taken a step further to the case when the number of ratios being summed goes to infinity. The Geometric series formula is (like the trigonometric functions sine and cosine) best simply memorized, because it surfaces the salient characteristics of the sum and is used repeatedly in the analysis of physical systems. The series for a common ratio of $1/2=0.5$ can be solved geometrically by drawing the sequence, as the Wikipedia page on the subject displays, though the drawing itself doesn't hint at the algebra required to discover the generalized formula. A series of sequential additive (and subtractive) terms (a.k.a. an infinite series) is written with a parameter (variable) that shows how the sequence is constructed under a summation symbol, $\sum$, the Greek capital letter sigma. Examples: $\sum\limits_0^3 n = 0 + 1 + 2 + 3 = 6 $, where the formula under the summation expands as a series of terms organized sequentially; constant numbers may appear in the coefficient, base, and/or exponent, as in $\sum\limits_{1}^4 n \pi = (1 + 2 + 3 + 4)\pi = 10\pi $, where $\pi$ can be any number, and $\sum\limits_{-1}^1 n^2 = (-1)^2 + 0^2 + 1^2 = 2 $, where the exponent is constant. One or more appearances of the variable being summed over are signified in the subscript of the $\sum$-symbol, conventionally written as $n$.
$\sum\limits_{n=1}^3 (n-\frac{3}{2})(n)=(-\frac{1}{2}) + 1 + \frac{9}{2} = 5$ Naturally, with familiarity, one associates the exponent position of the variable being summed over with the Geometric series; i.e. the Geometric Series is characterized by one appearance of $n$, and it sits in the exponent (unlike the formulas in $n$ displayed above, which don't have $n$ in the exponent). The Geometric series is expressed as a sum of terms of the form $r^n$ where $0\lt r\lt 1$ (that is, $r$ is anywhere between zero and one): $$ \sum r^n = r^0 + r^1 + r^2 + r^3 + \cdots $$ When the starting term is $n=0$, the series is simply offset by one regardless of $r$ (since $1=r^0$), so that term can be elided without confusion (the $n\gt0$ terms are what's important), and the series is used that way in the following presentation. For the $r=0.5$ Geometric series, the sequence can be displayed (each term stacked on the previous) in a unit-length horizontal rectangle: each term adds half again of the previous term, suggesting the total sum equals one, as the following seven terms of the series illustrate: This is a diagram of the geometric series for a base ($r$) of $1/2$, going up to the $2^{-7}$, $128^{-1}$, or ("$n=7$") term. When we employ an $r$ value of less than one half, the summation is less than one, which can be depicted graphically in the same unit rectangle, for example for $r = 3^{-1} = 0.3\overline{3}$: This is a diagram of the geometric series for a base ($r$) of $1/3$, going up to the $3^{-7}$, $2187^{-1}$, or ("$n=7$") term.
The Wikipedia link above has a picture of this drawn with contiguous squares (the next beginning where the previous ended) inscribed in a unit square (a square with a side length equal to one). That sequence of sides ($1/2 + 1/4 + 1/8 +...$) appears to sum to one, because the sequence is comprised of steps which each leave a remainder of $2^{-n}$ of the unit, halving the distance to the unit and thus never reaching it at any finite step; at the same time, the partial sum of the G.S. up to $n=3$ amounts to $0.875$, leaving a maximum of $0.125$ for the remaining terms in the infinite series, giving the appearance that the $r=0.5$ series sums to one. The $r=1/3$ series converges to $1/2$, which really is not obvious without the algebra for the formula. The formula for the geometric series is as follows: $$ S_{m}=\sum^{m}_{n=1} r^n = \frac{r - r^{m+1}}{1-r}= (r - r^{m+1})(1-r)^{-1} $$ where $r$ is a constant fraction $0\lt r \lt 1$. This compact formula becomes apparent after the following steps. First we take the partial series (the sum of the series to some finite $m$), and multiply by the binomial $(1-r)$: $$ (1-r)\left(\sum^{m}_{n=0} r^n\right) $$ Note the series we want to work with here is the one that starts with $n=0$. This is to say we subtract $r$ times the zero-based series from that same series, like so: $(1-r)S_m=(1)(S_m)-(r)(S_m)$. $$ = (1 +r + r^2 +\dots+r^m)-(r + r^2 + r^3 + \dots+r^{m+1}) $$ Since there isn't a one in the second set of parentheses (the series on the right starts with the $n=1$ term), and there is an $n=0$ term in the left parentheses, we are left with the one, followed by the last term left unpaired in negation, $r\cdot r^m=r^{m+1}$: $$ = (1) +(r-r) + (r^2-r^2) +\dots+(r^m-r^m) - r^{m+1} $$ Finally we are left with the following reduction: $$ = 1-r^{m+1} $$ And so the formula is completed, after adjusting for the appropriate starting term of the desired series.
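The closed form derived above is easy to check numerically against a direct term-by-term sum; a minimal sketch:

```python
def partial_sum_direct(r, m):
    # sum_{n=1}^{m} r**n, computed term by term
    return sum(r**n for n in range(1, m + 1))

def partial_sum_closed(r, m):
    # the closed form derived in the text: (r - r**(m+1)) / (1 - r)
    return (r - r**(m + 1)) / (1 - r)

checks = [(0.5, 7), (1 / 3, 7), (0.25, 20), (0.6, 50)]
ok = all(abs(partial_sum_direct(r, m) - partial_sum_closed(r, m)) < 1e-12
         for r, m in checks)

# As m grows, the n=1-based partial sums approach r/(1-r);
# for r = 1/3 that limit is 1/2, matching the claim in the text.
limit_third = (1 / 3) / (1 - 1 / 3)
```

The limit $r/(1-r)$ is the $n \geq 1$ version of the one-over-one-minus-$r$ formula for the $n \geq 0$ series discussed later.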
[Arfken] Changing the variable in the Geometric series formula, via $m=n-1$, shifts the formula by one on the input: $$ S_{n-1}=\sum^{n-1}_{x=0} r^x = \frac{1 - r^n}{1-r} $$ Below are plots of the partial sums of the Geometric series for $r=\{1/4, 1/3, 1/2, 3/5\}$, which include the points continuously between each of the integer points ($n$) by using a continuous variable $x$, with highlights drawn for $n=x=\{1, 2, 5\}$. Note that the plot ordinate range starts at one because the $n=0$ term is being included. This is a plot of the geometric series partial sum for a common ratio of 1/4. Plot of the geometric series infinite sum over a range of common ratios $[0.1,0.7]$, as a function of the variable $r$. Plot of the geometric series infinite sum over a range of common ratios $[0.7,0.999]$, as a function of the variable $r$. The one-over-one-minus-$r$ function (the infinite-sum formula) is valid for ratios $r<1$, and as $r$ gets closer to one the function gets arbitrarily large, because the denominator, $1-r$, can be made arbitrarily small. Copyright © 2019-2020 G.D.B.F.
Biosynthesis of silver nanoparticles using Haloferax sp. NRS1: image analysis, characterization, in vitro thrombolysis and cytotoxicity Hend M. Tag1, Amna A. Saddiq2, Monagi Alkinani3 & Nashwa Hagagy ORCID: orcid.org/0000-0002-8725-55881,4 Haloferax sp. strain NRS1 (MT967913) was isolated from a solar saltern on the southern coast of the Red Sea, Jeddah, Saudi Arabia. The present study was designed to estimate the potential capacity of Haloferax sp. strain NRS1 to synthesize silver nanoparticles (AgNPs). Biological activities, such as thrombolysis and cytotoxicity, of the biosynthesized AgNPs were evaluated. The silver nanoparticles biosynthesized by Haloferax sp. (Hfx-AgNPs) were characterized using UV–vis spectroscopy, transmission electron microscopy (TEM), X-ray diffraction (XRD), and Fourier-transform infrared spectroscopy (FTIR). The dark brown Hfx-AgNPs colloid showed maximum absorbance at 458 nm. TEM image analysis revealed that the Hfx-AgNPs were spherical, with sizes ranging from 5.77 to 73.14 nm. The XRD spectra showed the crystallographic planes of silver nanoparticles, with a crystallite size of 29.28 nm. The prominent FTIR peaks obtained at 3281, 1644 and 1250 cm−1 identified the functional groups involved in the reduction of silver ions to AgNPs. Zeta potential results revealed a negative surface charge and the stability of the Hfx-AgNPs. A colloidal solution of Hfx-AgNPs with concentrations ranging from 3.125 to 100 μg/mL was used to determine hemolytic activity. Concentrations below 12.5 μg/mL of the tested agent showed no hemolysis, with a highly significant decrease compared with the positive control, which confirms that Hfx-AgNPs can be considered non-hemolytic (non-toxic) agents according to the ISO/TR 7405-1984(f) protocol. Thrombolysis activity of Hfx-AgNPs was observed in a concentration-dependent manner. Hfx-AgNPs may therefore be considered a promising lead compound for the pharmaceutical industry. The haloarchaeon Haloferax sp.
strain NRS1 (MT967913), isolated from a solar saltern in Jeddah, Saudi Arabia, shows a promising ability to synthesize silver nanoparticles. Characterization of the silver nanoparticles (AgNPs) synthesized by the haloarchaeon Haloferax sp. strain NRS1 was performed by UV–vis spectroscopy, transmission electron microscopy (TEM), X-ray diffraction (XRD), and Fourier-transform infrared spectroscopy (FTIR). The low hemolytic activity exhibited by Hfx-AgNPs confirms their promising role as nano-drug delivery agents. Hfx-AgNPs displayed clot-lysis properties, which may be attributed to activation of the cascade reaction of clot lysis. Haloarchaea may be among the best candidates for synthesizing nanoparticles owing to the S-layer glycoproteins present in their membrane. The unique property of haloarchaeal cells to lyse quickly at low salt concentrations releases intracellular and membrane components, allowing cost-efficient recovery of S-layer proteins. S-layer proteins of haloarchaea serve as building blocks for the assembly of key biomolecules (proteins, lipids, glycans, nucleic acids, or their combinations) that are appropriate for nanobiotechnology (Sleytr et al. 2014). Hence, the intracellular biosynthesis of nanoparticles in haloarchaea suggests that these extremophilic microorganisms are promising in the field of nanobiotechnology (Beeler and Singh 2016). Nanoparticles are promising in different fields such as biomedicine, drug delivery, bio-imaging, bio-labeling, photovoltaics, photocatalysis, solar cells, and data storage (Bera et al. 2010; Garcia 2011; Issa et al. 2013; Javed et al. 2020; Aisida et al. 2019a, b, c, 2021a, b). Biosynthesis of nanoparticles by microorganisms is a non-toxic and eco-friendly method (Srivastava et al. 2014). Nanoparticles biosynthesized by prokaryotes have attracted significant attention in the biomedical field because of their biocompatibility, non-toxicity (Abdullaeva 2017), and antimicrobial nature (Aisida et al. 2021c, d, e).
Depending on a microorganism's metabolic activity and nature, nanoparticle biosynthesis can be either an intracellular or an extracellular process. Haloarchaea possess different mechanisms of metal resistance and nanoparticle biosynthesis; however, most remain unknown (Srivastava and Kowshik 2013). A few haloarchaeal strains, Halococcus salifodinae BK3 and Halococcus salifodinae BK6, have been reported to synthesize silver nanoparticles (AgNPs) intracellularly with antibacterial activity against gram-positive and gram-negative bacteria (Srivastava et al. 2013, 2014). The same activity has been reported for cells of Haloferax alexandrinus (Patil et al. 2014). Costa et al. (2020) also reported the ability of Haloferax volcanii to synthesize silver and gold nanoparticles with distinct properties for nanobiotechnological applications. Haloferax sp. plays a potential role as an in vitro antioxidant and antimicrobial agent (Zalazar et al. 2019). Recently, De Castro et al. (2020) reported that haloarchaea have a biotechnological potential that still needs to be explored for its significance in pharmaceutical applications. Further studies are therefore required to explore more biosynthesized nanoparticles from microorganisms with unique properties for future applications. In this study, we report the intracellular biosynthesis of silver nanoparticles by an unclassified species of Haloferax, Haloferax sp. strain NRS1, the bioactivities of the resulting AgNPs, and their characterization by UV–vis spectroscopy, TEM, XRD, and FTIR. The main objective of this work is to find a novel application for non-toxic AgNPs biosynthesized by Haloferax sp. strain NRS1 as a significant fibrinolytic agent. Site description and sampling Sediment and brine samples were collected from a solar saltern located on the southern coast of Jeddah, Saudi Arabia (21°10′16.04″N, 39°11′5.94″E) in April 2019 (Fig. 1) into 1000 ml sterile Pyrex bottles.
All samples were stored at 4 °C and, upon arrival, subjected to microbiological examination within 24 h. A Map and satellite image showing the location of the study area; B The studied solar saltern on the southern coast of Jeddah; C The salt crystals collected from the saltern Isolation and growth conditions Haloarchaea were isolated from sediment and brine as described by Srivastava et al. (2013), using NTYE medium (25% NaCl, 0.5% tryptone, 0.3% yeast extract, 2% MgSO4·7H2O, 0.5% KCl, and 2% agar), in accordance with the isolation procedures described by Dyall-Smith (2008). Pure isolates were obtained by successive cultivation on NTYE and maintained on the same medium. Screening for silver-resistant haloarchaeal isolates Five different haloarchaeal isolates, obtained after two weeks of incubation at 40 °C, were tested for growth in the presence of AgNO3 in a concentration range of 0.05–1 mM according to the method described by Srivastava et al. (2014). A 1 ml aliquot of cell culture was measured at 600 nm on a UV–visible spectrophotometer (UV-2600 Series, SHIMADZU) at 24 h intervals; culture medium without AgNO3 and non-inoculated medium with AgNO3 served as positive and negative controls, respectively. Silver nanoparticle biosynthesis The selected haloarchaeal isolate (NRS1) was inoculated into NTYE medium supplemented with 0.5 mM AgNO3, incubated at 37 °C in the dark for 10 days, and stirred at 110 rpm. The synthesis of nanoparticles was observed visually by the change of biomass color from orange-red to brown-black. The cells were then harvested by centrifugation (10,000 rpm for 20 min). The obtained pellet was washed several times with distilled water and freeze-dried at 60 °C overnight, after which the nanoparticles were obtained as a powder.
Identification of the isolates Genomic DNA was extracted from the selected isolate using a modified method from Experimental Techniques in Bacterial Genetics (Maloy 1990). The 16S rRNA gene was amplified with a set of Archaea-universal primers (Invitrogen, USA), 5′-ATTCCGGTTGATCCTGCCGG-3′ (positions 6–25 in Escherichia coli numbering) and 5′-AGGAGGTGATCCAGCCGCAG-3′ (positions 1540–1521), as reported by Ventosa et al. (2004). The PCR conditions were as follows: 50 μl reaction volume; 30 cycles; pre-denaturation at 95 °C for 5 min; denaturation at 94 °C for 1 min, annealing at 60 °C for 1 min, and extension at 72 °C for 1 min 30 s per cycle; final extension at 72 °C for 10 min; hold at 4 °C. A total of 50 ng/μl of each PCR product was used to prepare the samples delivered to MacroGen (Korea) following their specifications. The sequences were analyzed using BLAST (http://www.ncbi.nlm.nih.gov/BLAST) to obtain a preliminary identification of the strain. Cluster analysis was performed using the MEGA 7 software package. Hemolytic activity Cytotoxicity of Hfx-AgNPs was evaluated through a hemolytic assay according to the method described by Powell et al. (2000). To prepare the erythrocyte suspension, 1 ml of fresh blood was centrifuged for five minutes at 10,000 rpm. 200 μl of the erythrocyte precipitate was added to 9.8 ml of phosphate-buffered saline (PBS; pH 7.4). The mixture was then centrifuged for 15 min at 4000 rpm, the supernatant was discarded, and the process was repeated three times. Finally, the washed erythrocytes were diluted with PBS to make a 2% cell suspension. Hfx-AgNPs were prepared in six different concentrations (3.125, 6.25, 12.5, 25, 50, and 100 μg/ml). Each concentration of Hfx-AgNPs was tested in three replicates. Aliquots of 20 µl of Hfx-AgNPs were aseptically placed into a microcentrifuge tube; then, RBC suspension (200 µl) was added and mixed gently. The samples were incubated at 35 °C for 60 min, then centrifuged at 3000 rpm for fifteen minutes.
The absorbance (Abs) of the supernatant was read at 545 nm. PBS and deionized water added to the erythrocyte suspension served as the negative control (no hemolysis) and the positive control (100% hemolysis), respectively. Percentage hemolysis (%) was calculated according to the following formula (Taniyama et al. 2003). $$ {\text{Percentage of hemolytic activity}}\; = \;\frac{{{\text{Experimental sample Abs}}\; - \;{\text{Negative control Abs}}}}{{{\text{Positive control Abs}}}}\; \times \;100 $$ Clot lysis Clot lysis experiments were conducted as previously described by Prasad et al. (2006). Briefly, venous blood was drawn from healthy volunteers, and 500 μl of blood was transferred to each of ten pre-weighed sterile Eppendorf tubes. The blood was incubated for 45 min at 37 °C to allow it to coagulate. The samples were then centrifuged for 15 min at 5000 rpm. Serum was removed entirely without disrupting the clot. Each tube was weighed again to determine the clot's weight, and 200 μl of different concentrations of the synthesized NP colloidal solution (100, 50, 25, 12.5, 6.25, and 3.125 μg/ml) were added separately. As a positive control, 200 μl of streptokinase, and as a negative non-thrombolytic control, 200 μl of saline solution, were separately added to the clot. All tubes were then incubated at 37 °C for 90 min and observed for clot lysis. After incubation, the supernatant was removed, and the tubes were weighed again to monitor the weight difference after clot disruption. The experiment was repeated with blood samples from the five volunteers. Characterization of synthesized silver nanoparticles The extinction spectrum of the synthesized Hfx-AgNPs colloidal solution was recorded to monitor the bioreduction of silver by Haloferax sp. strain NRS1 using a UV–visible spectrophotometer (UV-2600 Series, SHIMADZU). Transmission electron microscopy (TEM, HF-3300, Hitachi High-Tech Canada, Inc.) was used to determine the shape and size of Hfx-AgNPs.
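The hemolysis formula above reduces to a one-line calculation. The sketch below applies it to hypothetical triplicate absorbance readings (only the formula itself comes from the text):

```python
# Percentage hemolysis from supernatant absorbances at 545 nm (Taniyama et al. 2003).
# The absorbance readings used here are hypothetical, for illustration only.

def hemolysis_percent(abs_sample, abs_negative, abs_positive):
    """Hemolysis (%) = (A_sample - A_negative) / A_positive * 100."""
    return (abs_sample - abs_negative) / abs_positive * 100.0

a_neg, a_pos = 0.012, 0.950            # PBS and deionized-water controls
samples = [0.030, 0.031, 0.029]        # hypothetical triplicate for one concentration
values = [hemolysis_percent(a, a_neg, a_pos) for a in samples]
mean = sum(values) / len(values)
print(f"mean hemolysis = {mean:.2f}%")  # well below the 5% hemolytic threshold
```

A sample is classified as hemolytic only when this percentage exceeds 5%, so values near 2% (as reported at 100 µg/ml) fall in the non-hemolytic range.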
The size distributions from the TEM images of Hfx-AgNPs were further analyzed using ImageJ freeware version 1.53d downloaded from the NIH website (http://rsb.info.nih.gov/ij). The zeta potential of the synthesized Hfx-AgNPs was determined with a nanoparticle analyzer (Zetasizer Ver. 7.13, Malvern Instruments Ltd, UK). The structural characterization was performed using an X-ray diffractometer (XRD, Malvern Panalytical, UK). Finally, the structural units were identified using Fourier-transform infrared (FTIR) spectroscopy (Perkin Elmer Spectrum GX Range Spectrometer). Statistical analysis was performed using SPSS for Windows (IBM Corporation, New York, USA). A one-way ANOVA test was applied to the present data according to the mathematical principles described by Kutner et al. (2005), followed by post hoc comparisons using Duncan's test. Results were considered significant when P was less than 0.05. Biosynthesis of silver nanoparticles Out of five tested isolates, one isolate (NRS1) exhibited growth over a wide range of AgNO3 concentrations (0.05 up to 1 mM) and showed the highest growth rate at 0.5 mM AgNO3. A change in color from orange-red to brown-black was observed after one week of incubation at 37 °C in the presence of 0.5 mM AgNO3. No color change was observed in the control (culture medium without AgNO3). The appearance of the brown-black color indicated the formation of AgNPs by the potential strain (Fig. 2). A peak at 458 nm in the UV–vis spectrum further confirmed the formation of AgNPs (Fig. 3). Growth of Haloferax sp. strain NRS1 on NTYE medium in the absence and presence of 0.5 mM AgNO3.
A, B Haloferax sp. strain NRS1 on NTYE medium appears orange-red on agar plate and in broth, respectively; C, D Haloferax sp. strain NRS1 turned dark brown in the presence of 0.5 mM AgNO3 on NTYE agar and liquid media, respectively UV–vis spectrum of biosynthesized AgNPs by Haloferax sp. strain NRS1 Identification of the potential strain Analysis of 16S rRNA gene sequencing indicated that the potential strain is a member of Haloferax (unidentified species) with 90% sequence similarity. The 16S rRNA gene data of the haloarchaeal strain reported in this study have been deposited in the NCBI GenBank nucleotide sequence database under accession number MT967913 (Fig. 4). The strain has been deposited in the Egyptian Microbial Culture Collection (EMCC) under the code Hfx-NRS1 EMCC 23999. Maximum likelihood phylogenetic tree based on 16S rRNA gene sequences showing the relationship between Haloferax sp. strain NRS1 and closely related taxa. Scale bar indicates 0.005 substitutions per nucleotide position Characterization of silver nanoparticles TEM investigation was conducted to determine Hfx-AgNPs size, as shown in Fig. 5. The Ag nanoparticles synthesized by Haloferax sp. strain NRS1 are mostly spherical, with an average particle size of 27.7 nm, a diameter range of 5.77–73.14 nm, and a geometric mean of 24.96 ± 1.65 nm. The stability of Hfx-AgNPs was evaluated in terms of zeta potential (Fig. 6A). The zeta potential value was negative, − 25.5 ± 3.15 mV, which implies the good colloidal nature of the synthesized nanoparticles. A Histogram of the particle diameter size distribution of the Hfx-AgNPs based on TEM image analysis. Red line: Gaussian distribution fit. B Transmission electron microscopy (TEM) images of AgNPs (scale bar = 100 nm). C High-resolution TEM image of an Ag nanoparticle (scale bar = 20 nm) Physical characterization. A Zeta potential of biosynthesized AgNPs by Haloferax sp. strain NRS1.
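The three size descriptors quoted from the TEM analysis (arithmetic mean, range, and geometric mean) come from per-particle diameters measured in ImageJ. A minimal sketch of that post-processing step, using hypothetical diameters rather than the actual measurement set:

```python
import math

# Summary statistics for a set of per-particle diameters, as produced by
# ImageJ "Analyze Particles". The diameters below are hypothetical; the
# reported values are mean 27.7 nm, range 5.77-73.14 nm, geometric mean 24.96 nm.

def size_stats(diams):
    """Return (arithmetic mean, geometric mean, min, max) of a diameter list."""
    n = len(diams)
    mean = sum(diams) / n
    gmean = math.exp(sum(math.log(d) for d in diams) / n)  # exp of mean log-diameter
    return mean, gmean, min(diams), max(diams)

diams = [12.4, 18.9, 22.5, 27.7, 31.2, 35.8, 41.0]  # nm, hypothetical
mean, gmean, dmin, dmax = size_stats(diams)
print(f"mean={mean:.1f} nm, geometric mean={gmean:.1f} nm, range={dmin}-{dmax} nm")
```

The geometric mean falling below the arithmetic mean, as in the reported 24.96 vs. 27.7 nm, is the expected signature of a right-skewed (roughly log-normal) size distribution.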
B X-ray diffraction pattern of biosynthesized AgNPs by Haloferax sp. strain NRS1. C Williamson–Hall analysis of Hfx-Ag nanoparticles. The lattice strain was determined from the slope and the crystallite size from the intercept of the fit Figure 6B shows the powder XRD pattern of the Hfx-AgNPs. The diffraction peaks appeared at four 2θ angles, 38.21°, 44.63°, 64.39°, and 77.61°, corresponding to the (hkl) planes (111), (200), (220), and (311), respectively. The crystallite size of Hfx-AgNPs was 29.28 nm, as calculated using the Debye–Scherrer equation. The full width at half maximum (FWHM) values of the nanoparticles were consistent with their size and ranged from 0.190 to 0.511 (Table 1). The lattice strain in the AgNPs biosynthesized by the Haloferax sp. strain, determined from the slope of the W–H plot (Fig. 6C), equals − 0.259 ± 0.114. Table 1 The crystallite size of silver nanopowder synthesized by Haloferax sp. strain NRS1 The FTIR spectrum showed a characteristic absorption frequency of the O–H stretching group at 3281 cm−1, probably present in a phenol or alcohol. The C=O stretch of carbonyl groups was observed at 1644 cm−1, related to amino acid residues and peptides, which have a strong capability to bind to metal nanoparticles. Furthermore, C–O stretching vibrations were observed at 1250 cm−1, related to aromatic esters, where Ag2O has a stretching vibration; Ag2O may form at the Ag nanoparticles' surface (Fig. 7). FTIR spectra of Hfx-AgNPs Bioactivity of silver nanoparticles All materials that enter the blood come into contact with red blood cells (RBC). A hemolysis test was performed to assess the impact of Hfx-AgNPs on erythrocytes by measuring hemoglobin release after exposure to various concentrations of Hfx-AgNPs colloidal solution. Hemolysis assay performance was validated with the negative control PBS and the positive control deionized distilled water (Fig. 8).
The hemolytic percentage of Hfx-AgNPs was observed to be concentration-dependent. Hemolysis was not detected at concentrations below 12.5 µg/ml, while 2.01% hemolysis was seen at a concentration of 100 µg/ml. According to ISO/TR 7405-1984(f), samples are considered hemolytic if the hemolytic percentage is above 5%. Hemolysis assay for Hfx-AgNPs colloidal solution against human erythrocytes, using DI water as the positive control (+) and PBS as the negative control (−). Different concentrations of Hfx-AgNPs were incubated with RBCs for 60 min to determine hemolytic activity. Data are represented as mean ± SD (n = 3). The mean difference between treatments was considered significant at p < 0.05 using a one-way analysis of variance followed by Duncan's post hoc test. a statistically significant difference compared with the negative control; b statistically significant difference compared with the positive control The effectiveness of clot lysis by Hfx-AgNPs, the positive thrombolytic control (streptokinase), and the negative control is represented in Fig. 9. The addition of 200 µl SK, the positive control, to the clots at 37 °C produced 76.7% clot lysis. When treated with saline solution (negative control), clots showed only negligible lysis (6.0%). The mean difference in clot lysis percentage between the positive and negative controls was significant (p < 0.05). After treating clots with different concentrations of Hfx-AgNPs colloidal solution, the highest percentage of clot lysis, 50.218%, was displayed at 100 μg/ml of Hfx-AgNPs; this clot lysis was significant compared with the negative control (p < 0.05). The lysis activity of the tested agent was also observed to be dose-dependent. In vitro thrombolysis activity of Hfx-AgNPs. Positive control: streptokinase (SK); negative control: saline solution. Data are expressed as mean ± SD (n = 3).
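The weight-based clot-lysis readout described in the Methods reduces to a simple percentage of clot mass lost after treatment. A minimal sketch, using hypothetical tube weights chosen to illustrate a lysis value near the reported maximum of about 50.2%:

```python
# Clot lysis percentage following the weighing protocol of Prasad et al. (2006):
# clot weight = (tube with clot) - (empty pre-weighed tube), before and after treatment.
# All weights below (in grams) are hypothetical, for illustration only.

def clot_weight(tube_with_clot, empty_tube):
    """Clot mass obtained by differencing the two weighings of the same tube."""
    return tube_with_clot - empty_tube

def clot_lysis_percent(weight_before, weight_after):
    """Percent of clot mass released into the supernatant after incubation."""
    return (weight_before - weight_after) / weight_before * 100.0

empty = 1.000                        # pre-weighed Eppendorf tube
before = clot_weight(1.480, empty)   # clot weight after serum removal
after = clot_weight(1.239, empty)    # clot weight after 90 min of treatment
print(f"clot lysis = {clot_lysis_percent(before, after):.1f}%")
```

The same calculation applied to the streptokinase and saline tubes yields the 76.7% and 6.0% figures against which the Hfx-AgNPs activity is compared.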
The mean difference between different Hfx-AgNPs concentrations was considered significant at p < 0.05 using a one-way analysis of variance followed by Duncan's post hoc test for multiple comparisons. Lowercase letters compare the positive and negative controls with different concentrations of Hfx-AgNPs colloidal solution. The same lowercase letters indicate no statistically significant difference, whereas different lowercase letters indicate a statistically significant difference The results of the present study demonstrated the biosynthesis of silver nanoparticles mediated by Haloferax sp. strain NRS1. The brown-black color of the culture confirmed the reduction of silver ions (Ag+) with the formation of AgNPs, in agreement with Abdollahnia et al. (2020). Silver nanoparticle synthesis by Hfx. sp. strain NRS1 occurred in the presence of 0.5 mM AgNO3 and 25% NaCl (w/v) at 37 °C. The size, shape, and distribution of the nanostructures are related to the intensity and frequency of the surface plasmon absorption bands (Petryayeva and Krull 2011). Sharp peaks in the spectral extinction of silver nanoparticles at visible and near-infrared frequencies rely on localized surface plasmon resonances (Mayer and Hafner 2011). Spectra were recorded in the wavelength range of 300–800 nm at a resolution of 1 nm (Peiris et al. 2017). Owing to the elevated optical density (OD) of the colloidal suspension, a 1 ml colloidal aliquot was diluted with 15 ml of distilled water; distilled water was used as the blank. The analysis of Hfx-AgNPs colloidal solution by UV–vis spectroscopy showed a characteristic absorbance peak at 458 nm, similar to that of AgNPs synthesized by Halococcus salifodinae, with a maximum absorption peak at 440 nm (Srivastava et al. 2013). A peak at 446 nm was also demonstrated for colloidal nanoparticles synthesized by Haloferax denitrificans (Abdollahnia et al. 2020). Transmission electron microscopy (TEM) The size and morphology of Hfx-AgNPs were determined using TEM.
Typically, TEM pictures of 10 replicates of the tested samples were taken at three different magnifications (30,000×, 80,000×, and 100,000×). A drop of the aqueous AgNPs sample was placed on a carbon-coated copper grid. The samples were allowed to dry under an infrared lamp before examination; the micrographs were obtained using a TEM operating at an accelerating voltage of 200 kV (Schrand et al. 2010). The dimensions and morphology of Hfx-AgNPs were measured using the threshold method after setting the scale, followed by the "Analyze Particles" feature to measure each particle's area and diameter (Woehrle et al. 2006; Schneider et al. 2012). The particle size was measured by image analysis. The present results confirmed the spherical-oblate shape of the nanoparticles biosynthesized by Haloferax sp. strain NRS1, with sizes ranging from 5.77 to 73.14 nm. The average particle size of AgNPs synthesized by Halococcus salifodinae was found to be around 50.3 nm (Srivastava et al. 2013). Zeta potential analysis Zeta potential determination is an essential and simple means of predicting the stability and understanding the surface quality of nanoparticles (Gregory et al. 2016); it displays the magnitude of repulsion between adjacent, equally charged dispersed particles, whose values are correlated with colloidal dispersion stability (Larsson et al. 2012). The aqueous suspension of Hfx-AgNPs was placed in a DTS0112 low-volume disposable cuvette. The zeta potential is determined by the principle of electrophoretic mobility, induced by applying an electric field through the dispersion medium. The silver nanoparticles carry the charge of their capping agents (Singh et al. 2014). The results of the current investigation revealed a negative zeta potential (− 25.5 ± 3.15 mV) for the Hfx-AgNPs reaction mixture. The negative zeta potential value confirmed the stability of Hfx-AgNPs. As formulated by Meléndrez et al.
(2010), the magnitude of the zeta potential gives an indication of a colloid's possible stability. A similar observation indicated that a zeta potential with a negative value of − 18.9 mV is considered stable (Salvioni et al. 2016). The zeta potentials of silver nanoparticles produced by other halophiles, such as Haloferax sp. and Halomonas sp. isolated from a solar saltern, were − 33.12 and − 35.9 mV, respectively (Gregory et al. 2016). These data confirm that the haloarchaeon Haloferax sp. strain NRS1 used in the present study produced nanoparticles comparably stable to previously synthesized silver nanoparticles. Dried Hfx-AgNPs were coated on an XRD grid to determine their structural characteristics by XRD analysis. The X-ray diffractogram was recorded with monochromatic Cu Kα radiation (λ = 1.78 Å) on a TD-2500 XRD operated at 40 kV. The scanning was measured over the 2θ (Bragg diffraction angle) region of 30°–80° at 0.02°/min with a time constant of 2 s (Klung and Alexander 1962). The FWHM (full width at half maximum) of the peaks was determined in OriginPro using the "Quick fit" command. The crystallite size was then calculated using the Debye–Scherrer equation (Eq. 1) (Holzwarth and Gibson 2011): $$ FWHM = \frac{K\lambda }{{L{\text{Cos}} \theta }} $$ where L is the particle size, θ is the peak position (2θ/2) in radians, λ is the wavelength of the X-ray radiation, and K = 0.89 is the Debye–Scherrer constant. The lattice strain, ε, was estimated using the Williamson–Hall (W–H) plot, determined according to Singaravelan and Alwar (2015) using the following equation (Eq. 2): $$ \beta \cos \theta = \frac{C\lambda }{T} + 4\varepsilon \sin \theta $$ where β is the FWHM in radians, T is the grain size in nm, ε is the strain, and C is a correction factor taken as 0.94. The XRD peaks of Hfx-AgNPs exhibited a strong and narrow pattern, implying that the nanomaterials synthesized by Haloferax sp. strain NRS1 displayed a high degree of crystallinity.
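Equations 1 and 2 can be applied directly to the four peak positions reported in Fig. 6B. The sketch below uses the λ and K values stated in the Methods; the per-peak FWHM values are hypothetical (chosen within the reported 0.190–0.511° range), so the numbers are illustrative, not the paper's Table 1:

```python
import math

# Eq. 1 (Debye-Scherrer) and Eq. 2 (Williamson-Hall) applied to XRD peaks.
# LAMBDA and K come from the Methods text; the FWHM values are hypothetical.

LAMBDA = 1.78   # X-ray wavelength in Angstrom, as stated in the Methods
K = 0.89        # Debye-Scherrer constant

def scherrer_size(fwhm_deg, two_theta_deg):
    """Crystallite size L = K*lambda / (beta * cos(theta)), beta in radians; result in Angstrom."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * LAMBDA / (beta * math.cos(theta))

def williamson_hall(peaks):
    """Least-squares fit of beta*cos(theta) = C*lambda/T + 4*eps*sin(theta).

    peaks: list of (two_theta_deg, fwhm_deg). Returns (slope, intercept);
    the slope is the lattice strain, the intercept equals C*lambda/T.
    """
    xs = [4 * math.sin(math.radians(t / 2)) for t, _ in peaks]
    ys = [math.radians(f) * math.cos(math.radians(t / 2)) for t, f in peaks]
    n = len(peaks)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Reported 2-theta positions with hypothetical FWHM values (degrees):
peaks = [(38.21, 0.35), (44.63, 0.30), (64.39, 0.45), (77.61, 0.50)]
print(f"size at (111): {scherrer_size(0.35, 38.21) / 10:.1f} nm")
strain, intercept = williamson_hall(peaks)
print(f"W-H strain = {strain:.4f}")
```

With real FWHM data the intercept yields the grain size as T = Cλ / intercept, which is how Fig. 6C separates size broadening from strain broadening.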
Peaks from other impurities were not detected, confirming the high purity of the synthesized AgNPs. XRD provided significant evidence for the complete reduction of silver ions resulting in the production of nanomaterials (Bhainsa and D'souza 2006). The XRD patterns of silver nanoparticles produced by the archaeal strain Haloferax sp. strain NRS1 showed peaks at 38.21°, 44.63°, 64.39°, and 77.61°. Thus, the XRD pattern clearly illustrated that the Ag-NPs formed in this study were crystalline in nature. According to previous studies, the peaks corresponding to our findings confirm that the main crystalline phase was silver (Sadhasivam et al. 2010; Shameli et al. 2012; Gurunathan et al. 2013). Concerning the FWHM data, the silver crystallite size obtained by the haloarchaeal strain Haloferax sp. NRS1 (29.28 nm) is similar to that previously reported for NPs from the halophilic strain Halococcus salifodinae BK3 (22 nm) (Srivastava et al. 2013). In our previous work (Sayed et al. 2018), which characterized nanosilver powder (Sigma Aldrich, 576832) and nanosilver ink (Metalon, JS-B25HV), the X-ray diffraction pattern revealed the same four peaks, 111, 200, 220, and 311, in accordance with our findings and confirming the formation of silver nanoparticles by the studied strain NRS1. Fourier-transform infrared (FT-IR) spectroscopy For FTIR analysis of Hfx-AgNPs, the colloidal solution was centrifuged at 10,000 rpm for 30 min. The pellet was washed three times with deionized water to eliminate free proteins and enzymes. A small quantity of dried powder was then ground with potassium bromide (KBr). The sample's FTIR spectrum was recorded at 400–4000 cm−1 (Taran et al. 2016). The FTIR results identified the functional groups of the capping agents responsible for the stability of the biosynthesized nanoparticles (Ajitha et al. 2018). Bands were observed at 3281 cm−1, 1644 cm−1, 1250 cm−1, and 611 cm−1.
The current result confirmed that the amide group from amino acid residues and peptides of proteins has a strong ability to bind to metal. The amino group could form a coat covering the metal nanoparticles, stabilizing the silver nanoparticles formed intracellularly. This evidence suggests that biological molecules could perform the function of stabilizing the intracellularly biosynthesized nanoparticles (Soenen et al. 2010). According to Makarov et al. (2014), proteins on the silver nanoparticle surface act as capping agents, and amino acid residues and peptides have a strong ability to bind to silver ions. Comparing the present findings with our previous work (Sayed et al. 2018), FTIR of nanosilver powder (Sigma Aldrich, 576832) and nanosilver ink (Metalon, JS-B25HV) revealed the presence of O–H stretch, C=C stretch, and N–H groups, which are almost similar to the Hfx-AgNPs composition. Previous studies reported the potential antimicrobial activity of silver nanoparticles synthesized by haloarchaea (Srivastava et al. 2014; Patil et al. 2014; Abdollahnia et al. 2020). The current study reports new bioactivities for green-synthesized silver nanoparticles from a Haloferax strain. Hemolytic activity against erythrocytes determines the tested material's potential membrane-stabilizing capability (Seeman and Weinstein 1966). The results of the present study revealed significant membrane-stabilizing potential associated with low toxicity of Hfx-AgNPs. This finding is supported by the negative value recorded for the Hfx-AgNPs zeta potential. The nanoparticles' negative charge may prevent RBCs from interacting with the nanoparticles (Rothen-Rutishauser et al. 2006). Thus, Hfx-AgNPs may display their favorable hemolysis rate due to the presence of carboxylate groups, leading to repulsion with RBCs. Our results are in accordance with Singh et al. (2020).
They stated that Fe3O4-Au-Cys and Fe3O4-Au-Cyt nanoparticles show low hemolytic activity due to their negative charges, related to the presence of negatively charged functional groups, which may lead to repulsion with RBCs. According to the Guidance for Industry and Food and Drug Administration Staff (FDA)-2013-D-0350 (ISO 10993-1) protocol, when the hemolysis level is above 5%, the tested material is considered a hemolytic agent. At present, there is an urgent need to discover drugs that break up or remove blood clots, as clot formation is the main cause of heart disease and stroke (Almalki and Zhang 2020). Regarding the clot lysis assay, the negative control revealed that clot dissolution did not occur when saline was added to the clot. However, significant thrombolytic activity was observed after treating the clots with Hfx-AgNPs colloidal solution compared with the negative control. The thrombolytic activity of Hfx-AgNPs was compared with that of streptokinase. The current result identified the potent thrombolytic activity of the tested material. According to Marder et al. (2001), thrombolytic agents dissolve blood clots by activating plasminogen, which is converted to a proteolytic enzyme called plasmin. Plasmin is capable of breaking the cross-links between fibrin molecules, which provide the structural integrity of blood clots. Recently, Colasuonno et al. (2018) summarized that some nanotherapeutic agents exert their action through tissue plasminogen activation, maximizing thrombolytic activity. A variety of nanoparticles with clot-specific targeting capability release thrombolytic agents (Landowski et al. 2020). Moreover, Deng et al. (2018) stated that fibrin-targeted nanoparticles are considered a promising drug delivery system that improves systemic hemostasis in vivo. To sum up, the haloarchaeon Haloferax sp. strain NRS1 (MT967913), isolated from a solar saltern in Jeddah, Saudi Arabia, can accumulate silver in nanoparticle form.
AgNPs were biosynthesized in NTYE medium with an average particle size of 27.7 nm. The biosynthesis of silver nanoparticles was confirmed using UV–visible spectroscopy and TEM; moreover, FTIR showed that the biomolecules capping Hfx-AgNPs act as both reducing and stabilizing agents. X-ray diffraction analysis revealed that the AgNPs were crystalline in nature. The relatively stable nature of Hfx-AgNPs could be attributed to their zeta potential value, which displayed a realistic surface charge magnitude. Hfx-AgNPs exhibited low hemolytic activity, which may be related to their negative zeta potential, confirming their promising role as nano-drug delivery agents. Their increased clot lysis properties may be attributed to the activation of the cascade reaction of clot lysis. Available upon request. AgNPs: Silver nanoparticles Hfx-AgNPs: Silver nanoparticles biosynthesized by Haloferax sp. TEM: Transmission electron microscopy XRD: X-ray diffraction FT-IR: Fourier-transform infrared spectroscopy FWHM: Full width at half maximum Abdollahnia M, Makhdoumi A, Mashreghi M, Eshghi H (2020) Exploring the potentials of halophilic prokaryotes from a solar saltern for synthesizing nanoparticles: the case of silver and selenium. PLoS ONE 15(3):e0229886. https://doi.org/10.1371/journal.pone.0229886 Abdullaeva Z (2017) Synthesis of nanomaterials by prokaryotes. Synthesis of nanoparticles and nanomaterials. Springer, Cham, pp 25–54 Aisida SO, Ugwu K, Akpa PA, Nwanya AC, Ejikeme PM, Botha S, Ahmad I, Maaza M, Ezema FI (2019a) Biogenic synthesis and antibacterial activity of controlled silver nanoparticles using an extract of Gongronema Latifolium. Mater Chem Phys 237:121859. https://doi.org/10.1016/j.matchemphys.2019.121859 Aisida SO, Ugwoke E, Iroegbu UA, Botha S, Ahmad I, Maaza M, Ezema FI (2019b) Incubation period induced biogenic synthesis of PEG enhanced Moringa oleifera silver nanocapsules and its antibacterial activity. J Polym Res 26:225.
Benefit of sling shot effect with a space elevator

The upper end of a space elevator is moving considerably faster than orbital speed at that distance from Earth. As a result, anything released from the top of the elevator would get thrown away from the Earth instead of just entering a high Earth orbit. How much would being released from the end of a space elevator at twice geosynchronous radius reduce travel times or the amount of rocket thrust needed during Earth departure for interplanetary probes?

I'm assuming the cable is capable of damping out the oscillations caused by releasing the payload. Doing so is necessary to release a payload at any altitude other than geosynchronous altitude; given the amount of activity currently occurring at significantly lower altitudes, being able to do so seems to be a necessary requirement for constructing an elevator in the first place.

space-elevator interplanetary

Dan is Fiddling by Firelight

There are two benefits from launching from the counterweight - the motion of the counterweight in orbit, but there's also a huge benefit from having left the bulk of the gravity well below you. I'd guess the latter is the primary benefit. – Don Branson Aug 18 '13 at 23:51

This question doesn't have a single answer because there's no telling what radius the end of the space elevator would be. If the counterweight has a huge mass, it will barely buy you anything. But @DanNeely does give an upper bound. There's no way it would extend further than that, because the material won't be strong enough. So the answer is between nothing and whatever you get at that limit. – AlanSE Aug 19 '13 at 1:00

@AlanSE Setting a material strength just strong enough to make the cable possible is an arbitrary cutoff.
I could see a design where the bottom half of the cable was thicker than normal (or had multiple strands rising to join geo) and a single longer strand going up, with a maximum altitude limited by interference from lunar gravity (where that cap would be is another question). – Dan is Fiddling by Firelight Aug 19 '13 at 1:06

@DanNeely I don't buy that because the outward limit is the material specific strength, which is fundamentally limited by bond strength. The acceleration increases linearly with radius, so the specific strength requirement past GEO will surpass the requirement for the space elevator below GEO, which is what requires carbon nanotubes. The force at GEO can be whatever you want, sure. But at some point the problem of going from GEO to release point gets harder than Earth surface to GEO. Harder in the sense that it can't be done beyond some radius with atomic matter. – AlanSE Aug 19 '13 at 3:19

@DanNeely If there is no counterweight it's a lot more than 2x geosync--IIRC it's around 5x. The thing is gravity is weaker out there. I believe a launch from the end of the cable can get you to almost anywhere in the solar system. (You'll still need a rocket when you get there unless you can aerobrake.) – Loren Pechtel Aug 19 '13 at 18:56

Your total energy is ${v^2\over 2}-{\mu\over r}$. If that's negative, you're still in orbit. If positive, you've escaped. At the point of release, $r$ is twice geosynchronous radius, $2\times 42164\,\mathrm{km}$, and $v$ is twice geosynchronous velocity, $2\times 3.075\,\mathrm{km\over s}$. The $\mu$ for Earth is $398600\,\mathrm{km^3\over s^2}$. The result is $14.18\,\mathrm{km^2\over s^2}$. You have escaped. The result is equal to your ${v_\infty^2\over 2}$, so your $C_3$, which is $v_\infty^2$, is $28.36\,\mathrm{km^2\over s^2}$.
Not enough to get you directly to Jupiter, but you'll get well past Mars' orbit or well inside Venus' orbit, depending on your preference. A judicious set of Venus and Earth flybys could then get you to Jupiter. From there you could go pretty much anywhere, including potentially escaping the solar system. Also a Venus flyby could get you to Mercury. You will be constrained on the departure plane, but perhaps a lunar flyby could help get you going in the direction you want. The $\Delta V$ required to get to that $C_3$ from low-Earth orbit is about $4.5\,\mathrm{km\over s}$. Or from GEO, about $3.3\,\mathrm{km\over s}$ (by lowering periapsis to $150\,\mathrm{km}$ altitude and escaping from there). So it buys you a lot. Though that's one hell of a cable you'll have to build.

Mark Adler

How much would being released from the end of a space elevator at twice geosynchronous radius reduce travel times or the amount of rocket thrust needed during Earth departure for interplanetary probes?

There are a lot of variables in this question and its answer, even with the length of the elevator clearly given. The main problem is that no target values for travel times or thrust are defined. So I will just focus on the velocities and destinations obtainable at different release elevations, with no added fuel costs.

Earth geosynchronous orbits, whether circular or elliptical, have a semi-major axis of 42,164 km (26,199 mi); twice that is 84,328 km (52,398 mi). I personally don't have the math to define the exact velocity imparted at 84,328 km, and my research did not find a calculated value for this. But values for just above and below this are available. Wikipedia lists a few "free" destinations at release heights just above GSO.

An object attached to a space elevator at a radius of approximately 53,100 km will be at escape velocity when released.
Transfer orbits to the L1 and L2 Lagrangian points can be attained by release at 50,630 and 51,240 km, respectively, and transfer to lunar orbit from 50,960 km.

For destinations beyond the Earth-Moon system, the calculations get a bit more complex, as you also need to consider the Sun. In the same manner that the elevator imparts velocity according to its height above Earth, its position relative to the Sun also affects relative velocity.

An elevator cable of somewhat over 100,000 km in length should suffice as a sling to launch spacecraft to Jupiter (at the outer end) and Mercury (at the inner end). Reaching Jupiter is critical, because we can take advantage of Jupiter's gravity assist to send spacecraft further outward or even beyond the solar system.

Aravind, P. K. (2007). "The physics of the space elevator". American Journal of Physics (American Association of Physics Teachers)

As we see, velocity (and hence the set of obtainable destinations) is simply a matter of height. Said another way, the sky really is not the limit: the higher you go, the faster and the farther you can fly.
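Neither answer includes code, but the energy bookkeeping they both rely on is easy to script. A minimal sketch in Python (the constants are taken from the first answer; the function names are mine):

```python
MU_EARTH = 398600.0   # Earth's gravitational parameter, km^3/s^2
R_GEO = 42164.0       # geosynchronous radius, km
V_GEO = 3.075         # circular orbital speed at GEO, km/s

def release_energy(n):
    """Specific orbital energy (km^2/s^2) of a payload released from a
    space elevator at n times the geosynchronous radius.  The cable
    rotates rigidly with the Earth, so release speed scales linearly
    with radius: v = n * V_GEO."""
    r = n * R_GEO
    v = n * V_GEO
    return v * v / 2.0 - MU_EARTH / r

def c3(n):
    """Characteristic energy C3 = v_infinity^2 for an escaping release,
    or None if the payload stays bound to Earth."""
    eps = release_energy(n)
    return 2.0 * eps if eps > 0 else None

print(release_energy(2.0))  # ~14.18 km^2/s^2: positive, so escaped
print(c3(2.0))              # ~28.37 km^2/s^2
```

Release at GEO itself (`n = 1`) gives a negative energy, and the break-even radius comes out very close to the 53,100 km figure quoted in the second answer.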
nLab > Latest Changes: cut rule

CommentTimeJan 20th 2014 (edited Jan 20th 2014)

some bare minimum at _[[cut rule]]_

Author: TobyBartels

I put in a remark about the nullary analogue (which I called the 'identity rule', although it goes by various names).

Author: Todd_Trimble

In my experience with Gentzen-style sequent calculi, there is not typically an 'identity rule' for all formulas, but only the identity axiom for the variables. Instances of identities for more complicated formulas are derived; e.g., for implications $a \multimap b$ we have a derivation $$\frac{\frac{a \vdash a\; \; \; b \vdash b}{a \multimap b, a \; \vdash \; b}}{a \multimap b \; \vdash \; a \multimap b}$$ I can see why you call the identity rule a nullary analogue of the cut rule (inasmuch as identity arrows are nullary compositions). It is interesting that Girard views identities as _dual_ to cuts (roughly in the same way that an arrow $\top \to \neg A \wp A = A \multimap A$ that names the identity is dual to the arrow $\neg A \otimes A \to \bot$ that implements an evaluation). In the one-sided version of MLL, it looks something like this: $$\frac{}{\vdash \neg A, A} \; identity \qquad \frac{\vdash \Gamma, \neg A \;\;\; \vdash \Delta, A}{\vdash \Gamma, \Delta} \; cut$$

Author: TobyBartels

You only need an identity axiom for the variables precisely _because_ the more general identity rule may be proved. For the same reason, Gentzen's sequent calculi do not need a cut rule. But in both cases, it's vital that the missing rule can be proved, since after all people do use them!

Author: Todd_Trimble

Okay, so we agree it's a _derived_ (or derivable) rule in that set-up, not an axiom or a basic rule of inference. That's all I meant to say (and should have said). What you wrote is fine, just as long as everyone is clear on that point.
Author: Todd_Trimble

Actually, let me draw you out a little more here. Could you give more details what you mean where you wrote "Typically, a cut-elimination theorem will also eliminate the identity rule..."? I've never heard it put that way. I know how to eliminate cuts (by progressively "pushing them back" towards the axioms), and I know how to "eliminate" identities (again pushing them back, in favor of identities on subformulas), but it looks to me like these two schemes are treated separately. One has to do with rewriting binary compositions, and the other has to do with rewriting nullary compositions. So my question is: how do you understand "eliminating identities" as part of eliminating cuts?

Author: tonyjones

this is a little off topic, but Bruno Paleo has an interesting paper on applications of proof theory to physics (http://www.logic.at/staff/bruno/Papers/2010-PhysicsAndProofTheory-PC.pdf). He gives an example of formalizing energy conservation using the cut rule. He also discusses future perspectives like using cut-introduction ('reductionism in Science can generally be captured by the proof-theoretical notion of cut. Consequently, a significant part of the usual scientific activity can be formally described as cut-introduction' and 'potential benefits of using proofs to formalize Physics is the possibility of applying cut-introduction techniques in order to automatically discover useful physical concepts'), other thoughts on cut-elimination ('By using cut-elimination algorithms, it might be possible to automatically transform a solution that uses a derived principle (i.e. a cut) such as energy conservation into a solution that uses only the basic laws of a theory' and 'Cut-elimination corresponds to beta-reduction, which is the execution of the program. Cut-introduction corresponds to structuring of the program and possibly to code reuse. By extrapolating this isomorphism, theories of Physics formalized as collections of proofs can be seen as collections of programs. This kind of computation, which is implicit in the formalization of Physics, is yet another link between Physics and computation that might be the target of future work'), instrumentalism, theory evolution and algorithmic information theory ('Another indication that AIT and proof theory fit well together is the natural relation between cut-introduction and Kolmogorov complexity').

Author: TobyBartels

> Could you give more details what you mean where you wrote "Typically, a cut-elimination theorem will also eliminate the identity rule..."?

What I really mean is that one proves both results (cut elimination and identity elimination) in the same way, one can usually prove the latter whenever one can prove the former (and in fact I would be surprised to see an exception in any formal system not created ad hoc just to be an exception), and that one should view the two results together. It is much like saying that an associative algebra typically has a unit (although at least there are naturally occurring exceptions to that).
Author: Noam_Zeilberger

For what it's worth, I also found the wording here a bit strange, in part because I *have* seen descriptions of the cut-elimination algorithm which also eliminate identity axioms, and that always struck me as conflating two different ideas. I am used to thinking of these as two separate theorems (admissibility of the cut rule and admissibility of the identity rule), which indeed go hand-in-hand and are in a certain sense dual (elimination of cuts being analogous to $\beta$-reduction in natural deduction, and elimination of identities analogous to $\eta$-expansion).

Author: Todd_Trimble

Thanks for the reminder, Noam!
I've done a little editing at [[cut rule]], expanding the idea section, and taking the discussion above into account in other sections. Please see what you think.

Author: TobyBartels

Very nice, Todd! It's more than just an analogy between the identity rule and eta expansion, so I put in a bit about that. (The connection between cut and beta reduction is fuzzier in my mind.)

Author: Todd_Trimble

Thanks for that bit, Toby!

Author: Noam_Zeilberger

I added an example of reducing a cut on $A \multimap B$, to go along with the example of expanding an identity on $A \wedge B$. As for the analogy with eta/beta, I agree that it's a *strong* analogy, but it's not quite exact. (Typically, eta expansion is a principle which applies to *open* terms $t : \Gamma \to A$ or contexts $k : A \to \Delta$, depending on the polarity of $A$.) I hope the parenthetical remark after the examples is suggestive enough.

Author: Todd_Trimble

Thanks for the addition, Noam. I added just a few more words to the examples section.
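As an aside not from the thread itself: the beta/eta correspondence discussed above can be made concrete on untyped lambda terms. A small illustrative sketch in Python (the term representation and function names are my own):

```python
from dataclasses import dataclass

# A tiny untyped lambda-calculus term representation.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(t, x, u):
    """Substitution t[u/x]; for simplicity assumes bound names are distinct."""
    if isinstance(t, Var):
        return u if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(t.body, x, u))
    return App(subst(t.fn, x, u), subst(t.arg, x, u))

def beta_step(t):
    """One beta-reduction, (lam x. b) u ~> b[u/x]: the analogue of
    eliminating a cut (a composition)."""
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return subst(t.fn.body, t.fn.var, t.arg)
    return t

def eta_expand(t, fresh="x0"):
    """Eta-expansion, t ~> lam x. t x: the analogue of expanding an
    identity into identities on subformulas."""
    return Lam(fresh, App(t, Var(fresh)))

# (lam x. x) y beta-reduces to y
print(beta_step(App(Lam("x", Var("x")), Var("y"))))  # Var(name='y')
```

Here `beta_step` plays the role of cut elimination and `eta_expand` the role of identity expansion at a function type, in the sense of the analogy above; the sequent-calculus versions operate on proofs rather than terms.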
CommentTimeMay 5th 2016

I added the quote

> "A logic without cut elimination is like a car without an engine" – Jean-Yves Girard

to [[cut rule]]. I don't know where this quote is from, though I've seen it in a couple of places; does anyone know a citation?

Author: Thomas Holder (edited May 5th 2016)

For the Girard quote see page 13 of [this one](http://iml.univ-mrs.fr/~girard/Synsem.pdf.gz). Excuse me if I become paraconsistent here, but there is an _Outline of a paraconsistent category theory_ (2004) by da Costa, Bueno & Volkov (available e.g. from the [publisher](http://link.springer.com/chapter/10.1007%2F978-3-662-05679-0_8)).

Thanks for the reference! I've added it to the page. Was your second sentence meant to be in another thread?

Author: Thomas Holder

Well, the second sentence was meant for this thread, since I didn't want to flood either the polymath or the paraconsistent logics threads with just this reference, especially since I've never taken a closer look at the article. Thematically it relates to paraconsistent mathematics and could also qualify for [[ETCC]], where other references to formalizations of category theory are listed.
The fact their 'complete category of sets' is a category of _material_ sets is a little odd, but perhaps that goes back to the original 1960s idea of the construction, before ETCS really became a thing.

Well, this thread is about the cut rule, not about paraconsistent mathematics or ETCC. (-:

Sorry :-) if people want to discuss (I'm not itching to do so) then they can take it up elsewhere.

I know we often go off on tangents, but that tendency makes it hard to find a discussion again if you're looking for it, and the subject doesn't seem even tangentially related here. (-: I'd be happy to discuss it in another thread though.

nLab edit announcer (Dec 17th 2021)
Fix typo in the beta-reduction example.

Linas ([diff](https://ncatlab.org/nlab/revision/diff/cut+rule/14), [v14](https://ncatlab.org/nlab/revision/cut+rule/14), [current](https://ncatlab.org/nlab/show/cut+rule))
What is the probability that, given the smallest of 50 random integers (>0), it will be the smallest of 50 other random integers (one being itself)? More generally, if an array of random integers (size N), and another array of random integers (size M), "overlap" by R numbers (have them in common): what is the chance that the smallest of one is the smallest of the other? You can assume these integers are in a finite interval, all positive.

Edit: The numbers are uniformly distributed, bounded arbitrarily (say by 1000000), and chosen without replacement. And in the first case, with 50 random integers, you know that the second set contains the smallest element selected from the first. In the general case, you know the smallest in the first set is present in the "R" overlap between the two sets.

probability combinatorics

Jim

$\begingroup$ The question is slightly ambiguous because you don't specify a distribution for the integers and you don't say what happens when two of them are the same. From what I suspect the intention of your question to be, I'd suggest to specify that $N$ real numbers are drawn independently and uniformly from $[0,1]$. $\endgroup$ – joriki May 8 '13 at 17:06

$\begingroup$ Let us assume you draw a set $A$ of $N$ numbers and a set $B$ of $M$ numbers uniformly out of $\{1,2,3,\ldots P\}$, each without replacement. The chance that $1$ is in both of them (and therefore a matching smallest) is $\frac {NM}{P^2}$. The chance that $1$ is in neither and $2$ is in both is $(1-\frac NP)(1-\frac MP)\frac {NM}{(P-1)^2}$. You can continue in this vein, but the expressions get messier and messier. I don't know an easy way to sum them up. This ignores the constraint that $R$ of the numbers match between $A$ and $B$. $\endgroup$ – Ross Millikan May 8 '13 at 17:23

$\begingroup$ @Ross: For this sort of problem it's often useful to ignore the concrete numbers and focus only on their ranking (see my answer).
$\endgroup$ – joriki May 9 '13 at 9:18

You have $99$ numbers and two sets of $50$ of them that share exactly one number. Up to permutations of the non-shared numbers within the two sets, there are $99\binom{98}{49}$ different possible rankings of the numbers, and they're all equally likely. Here are two different ways in which to determine the probability of the shared number being the smallest of both sets under the condition that it's the smallest of a particular one of the sets:

$1)$ Start out with the ranking of the set in which the number is known to be the smallest, and successively insert the $49$ remaining numbers of the other set. In each insertion, every possible place of insertion is equally likely. The shared number is the smallest if none of the numbers is inserted below it. The probability for this is

$$ \frac{50}{51}\cdot\frac{51}{52}\cdots\frac{97}{98}\cdot\frac{98}{99}=\frac{50}{99}\approx\frac12\;. $$

$2)$ Count the rankings that fulfill the two sets of requirements and divide the two counts. The count of rankings in which the shared number is the smallest of both sets is easy: The shared number has to be the smallest of all the numbers, and the rest can be ranked arbitrarily, which makes $\binom{98}{49}$ possibilities. To find the count of rankings in which the shared number is the smallest of a particular one of the sets, we can sum over all possible ranks of the shared number: If the shared number is the $k$-th smallest, there are $\binom{99-k}{49}$ possible rankings, so the total is

$$ \sum_{k=1}^{50}\binom{99-k}{49}=\binom{99}{49}\;, $$

and the desired probability is the quotient

$$ \frac{\displaystyle\binom{98}{49}}{\displaystyle\binom{99}{49}}=\frac{50}{99}\;. $$

In the general case, you can just ignore the additional shared numbers. So assume you have $N$ numbers and $M$ numbers sharing one number, which is the smallest of the $N$ numbers, and you want to know the probability of it also being the smallest of the $M$ numbers.
Then method $1)$ above yields

$$ \frac{N}{N+1}\cdots\frac{N+M-2}{N+M-1}=\frac{N}{N+M-1}\;. $$

Note that the cases $N=M$ (above), $N=1$ and $M=1$ come out right.

joriki

$\begingroup$ Thank you very much! I realize now, however, that this is simply given by the definition of the probability of A given B: The probability that the number is the smallest of both: 1/99. The probability that it was the smallest of the first: 1/50. The ratio: 50/99. $\endgroup$ – Jim May 9 '13 at 16:27
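Not part of the original answer: the counting argument and the general closed form can be checked exactly with Python's integer and rational arithmetic.

```python
from math import comb
from fractions import Fraction

# Rankings where the shared number is the smallest of BOTH sets of 50 (99 numbers total):
both = comb(98, 49)
# Rankings where it is the smallest of one particular set: sum over its possible rank k.
# By the hockey-stick identity this equals C(99, 49).
one = sum(comb(99 - k, 49) for k in range(1, 51))

print(Fraction(both, one))  # 50/99

# General case from the answer: N numbers and M numbers sharing the smallest of the N.
def p_also_smallest(N, M):
    return Fraction(N, N + M - 1)

print(p_also_smallest(50, 50))  # 50/99 again
```

The `Fraction` type keeps the result exact, so the quotient of binomial coefficients reduces to precisely 50/99 rather than a floating-point approximation.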
Defining addition function through predicate logic

Let the universe be the natural numbers: $\mathbb{N}=\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, ...\}$

Let this be the Addition set, whose elements are 3-element arrays of the form $\langle x, y, x+y \rangle$: $\mathbb{A}=\{\langle 1, 1, 2 \rangle, \langle 1, 2, 3 \rangle, \langle 2, 1, 3 \rangle, \langle 2, 2, 4 \rangle, \langle 1, 3, 4 \rangle, \langle 3, 1, 4 \rangle, \langle 2, 3, 5 \rangle, ...\}$

Can the following be the definition of addition in the language of predicate logic (note, $a(x, y)=x+y$): $\forall x \forall y \exists z (a(x, y)=z \land \mathbb{A}xyz)$?

Is it a problem that this formula says that there exists $z$, but it does not say that there is only one $z$?

predicate-logic

Hanlon

$\begingroup$ To be a definition, you would have to know existence and uniqueness. However, if you want a predicate logic for addition, you introduce a 2-place function symbol a(,). This automatically has uniqueness. So your existence condition is essentially the introduction of a new predicate for your system. So essentially you are OK. The predicate paraphrased is really: for every x and y there is a z so that Axyz. $\endgroup$ – StuartMN Jan 28 '18 at 21:19

$\begingroup$ My previous comment is not adequate, as the answer of A. Blass indicates. But how does the predicate Axyz get into the predicate logic? $\endgroup$ – StuartMN Jan 28 '18 at 22:21

A definition of a $k$-place function symbol $f$ should have the form $(\forall x_1)\dots(\forall x_k)(\forall y)\,[f(x_1,\dots,x_k)=y\iff\phi]$ where $\phi$ is in the original vocabulary. In your situation, the definition should read (with your choice of variables)

$$ (\forall x)(\forall y)(\forall z)\,[a(x,y)=z\iff \mathbb{A}xyz]\;. $$

Andreas Blass
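The existence-and-uniqueness point behind the answer can be made concrete in a proof assistant. A minimal sketch in Lean 4 (the names `A` and `exists_unique_sum` are my own, not from the thread):

```lean
-- The "addition predicate": A x y z holds exactly when x + y = z.
def A (x y z : Nat) : Prop := x + y = z

-- What a definition of the function symbol a(x, y) requires:
-- for every x and y, a z with A x y z EXISTS, and that z is UNIQUE.
theorem exists_unique_sum (x y : Nat) : ∃! z, A x y z :=
  ⟨x + y, rfl, fun _ h => Eq.symm h⟩
```

The `∃!` statement packages exactly the two conditions a definition of a function symbol needs: a witness exists, and any two witnesses agree.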
Strata of the World
"You laugh at iterative statistical curve fitting, but it might not be too long before iterative statistical curve fitting is laughing at you"

Review: Energy and Civilization: A History (Vaclav Smil)
Book: Energy and Civilization: A History, by Vaclav Smil (2017)
3.8k words (≈13 minutes)

The broad picture of civilizational energy use is often considered to look something like this: Hunter-gatherers rely on muscle power for their energy needs, expending energy primarily on hunting and foraging. Agriculture is invented; humans switch from nomadic to sedentary lifestyles and labor (and therefore energy use) is switched from humans to domesticated animals. Renewable, animate energy drives civilization for millennia. The potential of coal and steam engines brings about the industrial revolution, giving rise to mass production, industrialization, and a rapid switch from renewable animate to nonrenewable inanimate power in the middle of the 19th century. In the 20th century, oil replaces coal as the main energy source.

In Energy and Civilization: A History (an ambitious title if there ever was one), Vaclav Smil shows that such a narrative is a simplification, and that transitions from one form of energy to another have typically been more complex. The work is in good company among other books that explain long-run historical trends from a certain perspective, like Yuval Noah Harari's famous Sapiens (cognition), Francis Fukuyama's The Origins of Political Order and Political Order and Political Decay (political organization), Jared Diamond's Guns, Germs, and Steel (geography), and Paul Kennedy's The Rise and Fall of the Great Powers (economic power). The lens of Energy and Civilization is, predictably, energy. However, while at least as broad in scope (and in contrast to, say, Sapiens), Smil's Energy and Civilization is also very ready to dive into details. What I mean by this is that Energy and Civilization is full of facts and statistics.
I opened a random page and counted 8 figures on that page and 21 on the next one. From the first to the last page, Smil will assault you with facts about everything from the efficiency of different waterwheel designs to the mass/power ratio of the Saturn V to the energy density of seal meat (15-18 MJ/kg, if you were wondering). And yes, this does lend a certain dryness to the book. But it's well worth it: the result is a comprehensive outline - if there is such a thing - of energy generation and use since prehistoric times.

What, then, is wrong with the simplified picture of energy transitions presented above? And what does it mean for the future transition to renewable energy?

When dealing with global energy supplies, the numbers get fairly large. Since the larger SI prefixes are not used very often, here is a complete list of SI prefixes for quick reference, including reference points for power and energy shamelessly stolen from the book's excellent addenda and some other sources:

Kilo- (k): 1 000 / $$10^3$$. 1 kW is the peak power of a strong horse.
Mega- (M): 1 000 000 / $$10^6$$. 0.9 MW is the maximum power of a steam locomotive; a wind turbine provides several megawatts of power; a Boeing 747 uses 60 MW.
Giga- (G): 1 000 000 000 / $$10^9$$. Nuclear power plants are typically in the several gigawatt range.
Tera- (T): 1 000 000 000 000 / $$10^{12}$$. The world's energy consumption averages about 17 TW. The Hiroshima bomb released 63 TJ of energy.
Peta- (P): $$10^{15}$$. 170 PW is the power of sunlight hitting the Earth's surface.
Exa- (E): $$10^{18}$$. 500 EJ is around the world's annual energy consumption.
Zetta- (Z): $$10^{21}$$. 15 ZJ is the total energy the Earth receives in sunlight in one day. About 40 ZJ of energy are estimated to be contained in the world's fossil fuel reserves. Fossil fuels in the 20th century provided about 10 ZJ of energy.
Yotta- (Y): $$10^{24}$$. 300 YW is the sun's power.
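Two of the reference points above can be cross-checked against each other: a steady 17 TW of global power draw accumulates to roughly the "around 500 EJ" annual energy figure (a quick sanity check, not from the book):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds

world_power_w = 17e12                      # 17 TW average global power draw
annual_energy_ej = world_power_w * SECONDS_PER_YEAR / 1e18
print(round(annual_energy_ej))             # ~536 EJ per year
```

The small mismatch (536 vs 500 EJ) just reflects that both reference points are round numbers.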
(Do not confuse power (measured in watts (W), which are joules per second) with energy (measured in joules (J), which are defined as force times distance).)

A central concept in all the discussions in the book is that of a prime mover. Thankfully, this does not refer to the philosophical concept of an unmoved mover, but instead to something that uses energy from a source to do work.

One surprise to the conventional narrative presented above is that animals never made up even 20% of prime movers. This is despite the fact that an animal like an ox or buffalo (both about 250-550 watts) or a horse (500-850 W) has a power output for manual labor much higher than a human's (70-150 W). While domesticated animals played an important role, the energy delivered by human muscle remained much greater. Further, the animal share of prime movers peaked fairly late. In the US, animal power capacity was overtaken by internal combustion engines only around 1910, and by electricity only around 1920. The number of American horses peaked in 1917.

Though inanimate power achieved primacy only in the 1900s, it was an important part of energy supply for centuries before then. Waterwheels and, later, windmills had played an important role since Roman times (especially in Europe in grain milling, and later iron metallurgy and cloth fulling), and again their capacities were surpassed fairly late - installed capacity of steam engines in the US in 1849 was 920 MW compared to 500 MW of waterwheels, but because waterwheels had less downtime the energy delivered by them was 2.4 PJ/year, compared to about 1 PJ/year from coal (energy delivered by coal surpassed waterwheels in the 1860s). As late as the 1920s, there were more than 30 000 operational waterwheels in Germany. Even the rise of European colonial empires was based on two sources of inanimate power: wind and gunpowder. And today, only a fifth of humanity has fully completed the transition to full reliance on inanimate power sources.
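The 1849 US figures above (920 MW of steam capacity versus 500 MW of waterwheels, yet more energy delivered by the waterwheels) show that installed capacity and delivered energy are different things. A minimal sketch computing the implied capacity factors (my framing, not Smil's term here):

```python
SECONDS_PER_YEAR = 3.156e7

def capacity_factor(delivered_j_per_year, installed_w):
    """Delivered energy as a fraction of what full-time operation would give."""
    return delivered_j_per_year / (installed_w * SECONDS_PER_YEAR)

# US in 1849: waterwheels delivered more energy from less installed capacity.
print(capacity_factor(2.4e15, 500e6))   # waterwheels: ~0.15
print(capacity_factor(1.0e15, 920e6))   # steam engines: ~0.03
```

The waterwheels' capacity factor comes out more than four times higher, which is exactly the "less downtime" point in the text.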
The second of the great energy dichotomies - renewable versus non-renewable - also turns out to have a more complex history. In many pre-industrial areas, logging was far from sustainable, as evidenced by extensive deforestation in the Mediterranean, northern China, and later England and the United States (other pre-industrial fuels include dried dung, crop residues, and - in northern China - coal). As cities grew, supplying them with wood - about 650 kg per capita per year in Rome in 200 CE, 1 750 kg per capita per year in medieval London, and 3 000-6 000 kg in 19th century European cities - became increasingly problematic, requiring more and more complex logistics chains and affecting many parts of life.

Perhaps the most serious effect was air pollution. Air pollution is often thought of as a modern or at least post-industrial problem, but air quality was often likely worse in pre-industrial rural environments than in industrialized cities. There are two main reasons for this: first, much of the combustion was done indoors (fireplaces, furnaces, and so on), and secondly wood is simply a bad fuel: it is dirty, and is typically not very efficient (completely dry coniferous wood can approach coal in energy density, but air-dried wood in dry climates typically contains 15% moisture, reducing its heating potential). Charcoal is an improvement over wood, providing an energy density of 28-30 MJ/kg (about 50% higher than completely dry wood) and burning more cleanly, though its production involves losing about 60% of the wood's energy potential.

How quickly did the world transition away from biomass (wood, dung, and crop residues) to coal? In 1800, the world's annual energy consumption from fuels was 20 EJ, of which 98% was wood. In 1900, energy consumption had doubled to 43 EJ, which was split about evenly between wood and fossil fuels. In other words: after a century of industrialization, the world used about as much wood as before!
Of course, many European and American countries were ahead of the curve, but not always by much, or even at all: French oil and coal power reached the 50% level in 1875, about 25 years before the world, the US in the 1880s, but Russia only around 1930.

What about the latest historical energy transition, from coal to oil? We have already seen that the 19th century, often considered the century of coal, was dominated by wood. From this you might already guess that the 20th century was not the century of oil: oil delivered 4 ZJ* from 1900 to 2000, compared to 5.2 ZJ* from coal (and coal remains ahead even after non-energy uses of oil are accounted for).

(*My copy of the book has these numbers as 4 YJ and 5.2 YJ, or 4 000 ZJ and 5 200 ZJ respectively, implying that from 1900-2000 the world consumed a total of $$9.2 \times 10^{24}$$ J from fossil fuels. However, the world's total annual energy consumption is on the order of only $$5 \times 10^{20}$$ J; a century at this level of consumption would bring the total to $$5 \times 10^{22}$$ J, two hundred times less than the amount that the book lists as being supplied by fossil fuels alone in the 20th century. This is obviously implausible. Since energy consumption today is several times higher than the 20th century average, the numbers are consistent with Smil having used yottajoules when he meant zettajoules. I assumed this is the case and changed the numbers.)

(Burning oil is far from good for the environment, but it is already a massive improvement over wood and coal. In a complete combustion reaction, every mole of carbon in the fuel results in another mole of carbon dioxide as a product, so minimizing the amount of carbon in the fuel directly reduces CO2 emissions. The hydrogen:carbon ratio of wood is about 0.5, compared to 1 for coal, 1.8 for gasoline and kerosene (though there is some variation because of differing concentrations of the constituent alkanes), and 4 for methane (CH4).
CO2 emissions per gigajoule are 30 kg for coal but can be under 15 kg for natural gas. Wood and coal also produce far more side products (such as sulfur dioxide for coal and various toxic components of woodsmoke for wood).)

Energy transitions are slow

The two great industrial energy transitions have been the ones from muscle and wood to coal, and then from coal to oil. The greatest challenge of 21st century civilization will be enacting another transition, this time from oil to renewables. But the history shows that both of the previous energy transitions have been slow:

"My reconstruction of global energy transitions shows coal (replacing wood) reaching 5% of the global market around 1840, 10% by 1855, 15% by 1865, 20% by 1870, 25% by 1875, 33% by 1885, 40% by 1895, and 50% by 1900 (Smil 2010a). The sequence of years needed to reach these milestones was 15–25–30–35–45–55–60. The intervals for oil replacing coal, with 5% of the global supply reached in 1915, were virtually identical: 15–20–35–40–50–60 (oil will never reach 50%, and its share has been declining). Natural gas reached 5% of the global primary supply by 1930 and 25% of it after 55 years, taking significantly longer to reach that share than coal or oil. The similar progress of three global transitions—it takes two or three generations, or 50–75, years for a new resource to capture a large share of the global energy market—is remarkable because the three fuels require different production, distribution, and conversion techniques and because the scales of substitutions have been so different: going from 10% to 20% for coal required increasing the fuel's annual output by less than 4 EJ, whereas going from 10% to 20% of natural gas needed roughly an additional 55 EJ/year (Smil 2010a). The two most important factors explaining the similarities in the pace of transitions are the prerequisites for enormous infrastructural investment and the inertia of massively embedded energy systems."
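The milestone years in the quoted passage convert directly into Smil's interval sequence; a quick restatement of the quote's arithmetic:

```python
# Years at which coal reached successive shares of the global energy market
# (5%, 10%, 15%, 20%, 25%, 33%, 40%, 50%), from the quoted passage.
coal_years = [1840, 1855, 1865, 1870, 1875, 1885, 1895, 1900]
intervals = [y - coal_years[0] for y in coal_years[1:]]
print(intervals)  # [15, 25, 30, 35, 45, 55, 60] — Smil's "15–25–30–35–45–55–60"
```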
Both of the past transitions have taken 55-75 years from the 5% to the 40% level. Compare this with the state of renewables today: in 2017 solar provided 1.7% and wind 4.4% of global energy consumption (hydropower is at 16%, but unlikely to grow too much since viable locations are limited).

Accelerating the next energy transition

The picture might look bleak. However, there is a pressing need for the next energy transition, meaning that significant resources will likely be devoted to accelerating it. So all we need is rapid, expansive international commitment, and … okay, it does look pretty bad. What advice does Smil have?

He is not a fan of biofuels, which currently supply 1.8% of the world's energy; Smil writes: "Scaling this industry to supply a significant share of the world's liquid biofuels is, bluntly put, delusionary (Giampietro and Mayumi 2009, Smil 2010a)". He does, however, have three main ideas for hastening the energy transition.

First, more nuclear power. This is not surprising. In my review of Enlightenment Now, I noted Pinker's strong support for it, as well as providing links to further statistics and articles on nuclear power that support its efficacy and safety. When all the knowledgeable and rigorous sources support something, I think it's time to listen.

The world - and particularly the West - is not listening. Nuclear provided only 10.7% of the world's energy in 2015 (though the share was 17% before the Chinese surge in coal energy). Of the 67 reactors under construction worldwide, 60% are Chinese, Russian or Indian (25, 9, and 6 reactors respectively), leading Smil to conclude: "The West has essentially given up on this clean, carbon-free way of electricity generation" (though countries like France, with 77% nuclear power, are a notable exception).

The second major step would be the invention of cheap, large-scale energy storage.
This would allow fluctuating renewables like solar and wind to take over a far larger share of electricity generation. However, while battery technology continues to advance, the search for this Holy Grail has so far yielded as many results as the expedition in Monty Python and the Holy Grail.

Efficiency? What efficiency?

The third major step is more rational energy use. Smil notes that energy's true cost is not reflected in its price, driving uneconomic trends in energy use. For example, the power of an average American car almost doubled from 90 kW in 1990 to 175 kW in 2015. It seems hard to imagine such an increase being driven by economic considerations - were the cars of 20 years ago really bottlenecked by their power output? Of the trend towards larger cars, particularly SUVs, Smil asks: "Where is the sport and what is the utility of driving these heavy minitrucks to a shopping center?"

But perhaps the clearest damnation of the economic value of cars is the following:

"After taking into account the time needed to earn monies for buying (or leasing) the car and to fuel it, maintain it, and insure it, the average speed of U.S. car travel amounted to less than 8 km/h in the early 1970s (Illich 1974)—and, with more congestion, by the early 2000s the speed was no higher than 5 km/h, comparable to speeds achieved before 1900 with horse-drawn omnibuses or by simply walking. In addition, with well-to-wheel efficiencies well below 10%, cars remain a leading source of environmental pollution; as already noted, they also exact a considerable death and injury toll (WHO 2015b)."

Smil's disdain is not limited to modern cars.
In the last chapter, he writes:

"On a more mundane level, tens of millions of people annually take intercontinental flights to generic beaches in order to acquire skin cancer faster; the shrinking cohort of classical music aficionados has more than 100 recordings of Vivaldi's Quattro Stagioni to choose from; there are more than 500 varieties of breakfast cereals and more than 700 models of passenger cars. Such excessive diversity results in a considerable misallocation of energies, but there appears to be no end to it: electronic access to the global selection of consumer goods has already multiplied the choice available for Internet orders, and the customized production of many consumer items (using individualized adjustments of computer designs and additive manufacturing) would raise it to yet another level of excess. The same is true of speed: do we really need a piece of ephemeral junk made in China delivered within a few hours after an order was placed on a computer? And (coming soon) by a drone, no less!"

Though Smil somewhat overstates his case (are classical music recordings and customized computers really egregious examples of misallocated resources?), I think he is correct in decrying the inefficiency of consumerism. Excess consumption of unnecessary goods is not just detrimental to the world, but also unlikely to serve the true interests of the consumers themselves; I'm not sure what the path to happiness and enlightenment is, but I will bet you it has little to do with designer clothes or 4K TVs.

However, keep in mind that such energy use is far from the global norm:

"[…] regardless of the indicators used, those kinds of wasteful, unproductive, and excessive final energy use are still in the global minority.
When looking at average per capita energy supply, then only about one-fifth of the world's 200 countries have accomplished the transition to mature, affluent industrial societies supported by the high consumption of energy (>120 GJ/capita), and the share is even lower in population terms, about 18% (1.3 billion among 7.3 billion in 2015)."

From an energy perspective, parts of the developed world's economies are wasteful. On the other hand, many countries remain constrained by energy considerations. How much energy is required for an industrialized welfare society? Here Smil provides a comprehensive scale:

Hunter-gatherer energy consumption is hard to estimate, but given a daily food intake of 10 MJ per capita (about 2400 kcal), about 3.6 GJ of food energy is needed per capita per year. In addition, Smil estimates that the wood for cooking meat might very roughly translate to another 2 GJ.
5 GJ per capita per year (120 kg oil equivalent) is required for even the most basic necessities. This is somewhere around the energy consumption of Ethiopia, Bangladesh, China in 1950, and Western Europe before 1800.
40 GJ/capita/year (1000 kg oil equivalent) is required for industrialization and basic well-being (around the level of 1980s China, 1930-1950s Japan, and late 1800s Western Europe and US).
80 GJ/capita/year (2000 kg oil equivalent) corresponds to a more affluent industrial society (1960s France, 1970s Japan, and 2012 China (though high industrial energy use in China means that its level is not directly comparable to the others)).
Over 110 GJ/capita/year (2.5 t oil equivalent) is the minimum level for highly affluent societies.

Note, however, that the approximately 100 GJ level is not a guarantee of welfare and affluence, but simply the minimum level.
It also seems to be a threshold level: above this, further energy use no longer correlates with wellbeing. Thus, countries like Japan, Germany, France, the UK, and Italy manage to sustain affluent industrialized societies with 100-175 GJ of annual per capita energy use, while other countries take much more energy to reach a similar level. In some cases this makes sense: looking at a list of countries by per capita energy consumption, many northern countries like Iceland, Canada, and Finland have fairly high consumptions (760, 300, and 255 GJ/capita/year respectively). Other countries don't have this excuse - many Middle-Eastern oil nations, like Qatar (800 GJ/capita/year), Bahrain (430), Kuwait (410), and UAE (320), have very high energy consumption. The United States, Russia, and Saudi Arabia also have disproportionate levels of energy consumption compared to their standards of living.

Therefore, it seems that there is a lot of room for cutting energy consumption in many countries without reducing quality of life. However, as Smil laments, this tends to be politically unfeasible.

Efficiency gains

The efficiencies of many processes have improved by an order of magnitude or more. The most dramatic example is light. The number of lumens (the unit of light) produced per watt has risen from 0.3 for candles to 2 for gas lights to 5 for incandescent light bulbs to 15 for modern light bulbs to 100 for fluorescent light bulbs and almost 150 for LEDs (this has been accompanied by a drop in real prices of four orders of magnitude, and a 200-600 fold decrease during just the 1900s!). Similarly, the efficiency of cooking has increased from a few percent for open fires, to 30% for wood stoves to 45% for coal stoves to 65% for gas furnaces and up to 97% in the newest models, like the one which the author has in his "super-efficient home" (somehow I'm not surprised that Smil knows the efficiencies of his household appliances).
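The lighting figures above amount to roughly a 500-fold efficacy improvement from candle to LED; a quick restatement of the text's numbers:

```python
# Luminous efficacy in lumens per watt, from the text.
efficacy = {
    "candle": 0.3,
    "gas light": 2,
    "incandescent bulb": 5,
    "modern bulb": 15,
    "fluorescent": 100,
    "LED": 150,
}
for source, lm_per_w in efficacy.items():
    print(f"{source}: {lm_per_w / efficacy['candle']:.0f}x a candle")
```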
On a larger level, Smil estimates that while energy use increased 14-fold during the 20th century, useful energy increased 30-fold due to an increase in weighted global energy efficiency from roughly 20% in 1900 to 35% in 1950 to 50% in 2015. A doubling of energy efficiency is no small thing. However, the issue with efficiency improvements is that they cannot be eternal: X joules of work can never be done with less than X joules of input (in fact, thermodynamics dictates it will always take at least a bit more than X joules). With many things - including light, heating, and power plant boilers - already operating near the theoretical limit, reductions in energy use in developed economies will increasingly require decreases in delivered useful energy as well.

As noted above, one industry where efficiency gains are still possible is cars, which currently have efficiencies of below 10% (though this figure includes all inefficiencies between oil in the ground and the kinetic energy of a car). There are two obvious ways to increase efficiency: switch from ICEs to electric motors, and - once autonomous vehicles are finally a thing - switch to a shared-ownership model; the production of a car takes about 100 GJ, which, as we saw earlier, is comparable to the total annual per capita energy use of an efficient welfare society.

Efficiency gains, however, are far from automatic, in large part due to the distorting effect of unpriced externalities on prices. Once again, the American car industry turns out to be far from a paragon of excellence in these matters: the fuel consumption of American cars rose from 13.4 to 17.7 liters per 100 km between the early 1930s and 1973 - that is, fuel efficiency fell. The gains from new technology were eaten up by cars becoming bigger and faster.

The cheapest and most important efficiency gains will, however, come from the developing world.
Smil points out that even something as simple as introducing modern stoves, with efficiencies of 25-30% compared to 10-15% for traditional ones, would cut the energy required for cooking in half, halving wood requirements and having a sizable impact on deforestation rates. Despite inefficiencies in some industries, it is important to remember that there is an overall downwards trend in the energy intensity of GDP in every industrialized nation: for Canada, the US, and Western Europe, the energy intensity of the economy peaked in the early 1900s and has been declining since then. The pattern has repeated in Japan's industrialization, and will likely repeat as more and more countries industrialize.

Man perisheth?

Smil ends his book imploring the world to commit to action with this cheery quote from Senancour: "Man perisheth. That may be, but let us struggle even though we perish; and if nothing is to be our portion, let it not come to us as a just reward." Indeed, the image painted by Smil's remorseless statistics is not promising when considering the enormous speed with which humanity must complete the next energy transition. Even assuming solar and wind grow at the same rate as coal use once did, they will be providing a majority of the world's power by 2070 at the earliest. Electricity production is also only part of the challenge; agriculture, industry, and transportation are all significant polluters. There is no greater task for a civilization than overhauling its energy basis. And yet, given the stakes, there is little choice.

The follow-up post has more facts and statistics from Energy and Civilization.
Autonomous Agents and Multi-Agent Systems, April 2020, 34:18

Fast core pricing algorithms for path auction

Hao Cheng, Wentao Zhang, Yi Zhang, Jun Wu, Chongjun Wang

A path auction is held in a graph, where each edge stands for a commodity and the weight of the edge represents its prime cost. Bidders own some of the edges and make bids for them. The auctioneer needs to purchase a sequence of edges forming a path between two specific vertices, so a path auction can be considered a kind of combinatorial reverse auction. Core-selecting mechanisms are prevalent for combinatorial auctions. However, pricing in a core-selecting combinatorial auction is computationally expensive; one important reason is the exponential number of core constraints. The same is true of path auctions. To solve this computational problem, we simplify the constraint set and obtain an optimal set with only polynomially many constraints. Based on this constraint set, we put forward two fast core pricing algorithms for computing a bidder-Pareto-optimal core outcome. Among all existing algorithms, our new algorithms have remarkable runtime performance. Finally, we validate our algorithms on real-world datasets and obtain excellent results.

Keywords: path auction, core, pricing algorithm, constraint set

Part of the contents of this paper were published in AAMAS '18 [6]. This paper is supported by the National Key Research and Development Program of China (Grant No. 2018YFB1403400), the National Natural Science Foundation of China (Grant No. 61876080), and the Collaborative Innovation Center of Novel Software Technology and Industrialization at Nanjing University.
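As background for the pricing problem, the classical VCG payment in a shortest-path auction pays each winning edge its cost plus its marginal contribution, p_e = c_e + (d_{G without e} - d_G). A minimal sketch (the three-edge graph and its costs are invented for illustration; this is the textbook VCG rule for path auctions, not the paper's CCG algorithms):

```python
import heapq

def shortest_path(edges, s, t):
    """Cheapest s->t path in a directed graph: returns (cost, path_edges).
    Edges are assumed distinct (u, v, cost) tuples."""
    adj = {}
    for u, v, c in edges:
        adj.setdefault(u, []).append((v, c))
    dist, prev = {s: 0.0}, {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, (u, v, c)
                heapq.heappush(pq, (d + c, v))
    if t not in dist:
        return float("inf"), []
    path, node = [], t
    while node != s:
        edge = prev[node]
        path.append(edge)
        node = edge[0]
    return dist[t], path[::-1]

def vcg_payments(edges, s, t):
    """Pay each winning edge its cost plus its marginal contribution:
    p_e = c_e + (cost of best path avoiding e - cost of best path)."""
    d_G, winners = shortest_path(edges, s, t)
    payments = {}
    for e in winners:
        d_without_e, _ = shortest_path([x for x in edges if x != e], s, t)
        payments[e] = e[2] + (d_without_e - d_G)
    return d_G, payments

# Invented example: winning path s -> a -> t (cost 3) against a direct
# competing edge s -> t of cost 5.
edges = [("s", "a", 1.0), ("a", "t", 2.0), ("s", "t", 5.0)]
d, pays = vcg_payments(edges, "s", "t")
print(d, pays)  # winning cost 3.0; payments 3.0 and 4.0
```

Note that the payments total 7.0, more than the losing path's cost of 5.0: this VCG overpayment is exactly why core-selecting payment rules of the kind studied here are of interest.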
Appendix A: Summary of main notation

\(G=(V,E)\): a directed graph consisting of an edge set E and a vertex set V
\(v_0\): the starting vertex
\(v_n\): the ending vertex
\(e_i= (v_{i-1},v_i)\): the edge that starts at vertex \(v_{i-1}\) and ends at \(v_{i}\); also denotes the bidder owning edge \(e_i\)
\(c_{e_i}\): the cost of the edge \(e_i\)
\(\pi _{e_i}\): the utility of bidder \(e_i\)
\(\Pi\): social welfare of the auction
N: the total set of players, including the bidders and the auctioneer
W(L): the social welfare of the subset L of N
The payment set of the auction
\(p_{e_i}\): the payment to bidder \(e_i\)
\(\text {core}\): the total set of core outcomes
\(W_G(v_i,v_j)\): a walk from \(v_i\) to \(v_j\) in graph G
\(P_G(v_i,v_j)\): the shortest path from \(v_i\) to \(v_j\) in graph G
\(V_G(v_i,v_j)\): the vertex set of \(P_G(v_i,v_j)\)
\(E_G(v_i,v_j)\): the edge set of \(P_G(v_i,v_j)\)
\(P_w(v_0,v_n)\): the path selected as the winner path in the auction
\(E_w(v_0,v_n)\) or \(E_w\): the edge set of \(P_w(v_0,v_n)\)
\(V_w(v_0,v_n)\) or \(V_w\): the vertex set of \(P_w(v_0,v_n)\)
\(P_w(v_i,v_j)\): the subpath of \(P_w(v_0,v_n)\) from \(v_i\) to \(v_j\)
\(E_w(v_i,v_j)\): the edge set of \(P_w(v_i,v_j)\)
\(V_w(v_i,v_j)\): the vertex set of \(P_w(v_i,v_j)\)
\(d_G(v_i,v_j)\): the cost of the shortest path from \(v_i\) to \(v_j\) in graph G

Appendix B: Proof of the correctness of the CCG-VCG algorithm for path auction

Theorem 15 The outcome of the CCG-VCG algorithm is a bidder-Pareto-optimal core outcome.

Consider the constraint added into CCG-SET $$\begin{aligned} \sum _{e_i \in E_w \backslash z}p_{e_i} \le \sum _{e_i \in E_{w'} \backslash z}c_{e_i} \end{aligned}$$ where \(E_{w'}\) is the new winner set in the new graph obtained from G by changing the cost of each edge in \(E_w\) from \(p^{t-1}_{e_i}\) to \(p^{t}_{e_i}\). Denote by \(G_1\) this graph and by \(P_{w'}\) the path corresponding to the edge set \(E_{w'}\); \(P_{w'}\) is the shortest path in \(G_1\).
We first prove that the constraint (65) is a standard constraint of (C1). In \(G_1\), we remove the edges in \(E_w \backslash z\) and change the costs of the edges in z from \(p^{t}_{e_i}\) to \(c_{e_i}\). Denote by \(G_2\) this graph. \(P_{w'}\) also exists in \(G_2\) because it doesn't include any edge in \(E_w \backslash z\). Compared with \(G_1\), the cost of \(P_{w'}\) is reduced by \(\sum _{e_i\in z}(p^t_{e_i} - c_{e_i})\) in \(G_2\). As for the other paths in \(G_2\), their costs are reduced by at most \(\sum _{e_i\in z}(p^t_{e_i} - c_{e_i})\), so \(P_{w'}\) is also the shortest path in \(G_2\). Note that \(G_2\) is just the graph \(G\backslash (E_w \backslash z)\); from the constraint (65), we have $$\begin{aligned} \sum _{e_i \in E_w \backslash z}p_{e_i}&\le \sum _{e_i \in E_{w'} \backslash z}c_{e_i}\\&= d_{G_2}(v_0,v_n) - \sum _{e_i\in z}c_{e_i}\\&= d_{G\backslash (E_w\backslash z)}(v_0,v_n)- \sum _{e_i\in z}c_{e_i} \end{aligned}$$ Recall the constraint in (C1): $$\begin{aligned} (C1):\sum _{e_i \in x}p_{e_i} \le d_{G\backslash x}(v_0,v_n) - (d_G(v_0,v_n) -\sum _{e_i\in x}c_{e_i}),\forall x\subseteq E_w \end{aligned}$$ Letting \(x= E_w\backslash z\), we have $$\begin{aligned} \sum _{e_i \in E_w\backslash z}p_{e_i}&\le d_{G\backslash (E_w\backslash z)}(v_0,v_n) - \left(d_G(v_0,v_n) -\sum _{e_i\in (E_w\backslash z)}c_{e_i}\right)\\&= d_{G\backslash (E_w\backslash z)}(v_0,v_n) - \sum _{e_i\in z}c_{e_i} \end{aligned}$$ The constraint (68) in (C1) is just the same as the constraint (66), so the constraint we add into CCG-SET during each iteration is a standard constraint of (C1), and the constraint set CCG-SET is therefore a subset of the constraint set (C1). In each iteration, the CCG-VCG algorithm adds a constraint of (C1). The number of constraints in (C1) is finite, so the algorithm must stop after a finite number of steps. To prove the theorem, we just need to show that the outcome of the CCG-VCG algorithm is in the core. Assume, for contradiction, that the outcome of the CCG-VCG algorithm is not in the core.
Thus, there is at least one constraint in (C1) which is not satisfied by this result. Without loss of generality, let \(x=E_w\backslash z'\) be the corresponding set; then the constraint becomes $$\begin{aligned} \sum _{e_i \in E_w\backslash z'}p_{e_i}&> d_{G\backslash (E_w\backslash z')}(v_0,v_n) - \left(d_G(v_0,v_n) -\sum _{e_i\in (E_w\backslash z')}c_{e_i}\right)\\&= d_{G\backslash (E_w\backslash z')}(v_0,v_n) - \sum _{e_i\in z'}c_{e_i} \end{aligned}$$ Then we have $$\begin{aligned} \sum _{e_i \in E_w}p_{e_i}&= \sum _{e_i \in E_w\backslash z'}p_{e_i} +\sum _{e_i \in z'}p_{e_i}\\&> d_{G\backslash (E_w\backslash z')}(v_0,v_n)+\sum _{e_i \in z'}p_{e_i}-\sum _{e_i \in z'}c_{e_i} \end{aligned}$$ where \(d_{G\backslash (E_w\backslash z')}(v_0,v_n)\) is the cost of the shortest path from \(v_0\) to \(v_n\) in graph \(G\backslash (E_w\backslash z')\). This path still exists in the graph obtained by changing the cost of the edges in \(E_w\) from \(c_{e_i}\) to \(p_{e_i}\). This change increases the cost of this path by at most \(\sum _{e_i \in z'}p_{e_i}-\sum _{e_i \in z'}c_{e_i}\). So the cost of this path is no more than \(d_{G\backslash (E_w\backslash z')}(v_0,v_n)+\sum _{e_i \in z'}p_{e_i}-\sum _{e_i \in z'}c_{e_i}\), which means it is shorter than the total payment in the outcome of the CCG-VCG algorithm. This contradicts the termination condition of the CCG-VCG algorithm, so the theorem is established. \(\square\)

Appendix C: Proof of Theorem 9

In \(G'\), each edge in \(E_w\) is converted into a reverse edge with the negated original cost. As we know, no edge is a cut edge for the connectivity from \(v_0\) to \(v_n\); that is, there still exists a path from \(v_0\) to \(v_n\) after removing any single edge from graph G. We use mathematical induction, proving the following two propositions.

There exists a path from \(v_0\) to \(v_1\) in \(G'\).

If there exists a path from \(v_0\) to \(v_i\) in \(G'\), then there exists a path from \(v_0\) to \(v_{i+1}\) in \(G'\) (\(0< i < n\)).
It is obvious that Theorem 9 is established if these two propositions are correct. Note that \(V_w(v_1,v_n)\) is the vertex set including \(v_1, v_2,\dots , v_n\). For proposition 1, since \((v_0,v_1)\) is not a cut edge, there must exist a path from \(v_0\) to some vertex of \(V_w(v_1,v_n)\) in the graph \(G\backslash E_w\). Otherwise, there would be no path from \(v_0\) to any vertex of \(V_w(v_1,v_n)\) in the graph \(G\backslash (v_0,v_1)\), because compared with \(G\backslash E_w\), the extra edges in \(G\backslash (v_0,v_1)\) are useless for the connectivity between \(\{v_0 \}\) and \(V_w(v_1,v_n)\). This would mean \((v_0,v_1)\) is a cut edge, which is a contradiction. Therefore, there exists a path from \(v_0\) to some vertex of \(V_w(v_1,v_n)\) in \(G\backslash E_w\). This path also exists in \(G'\), and once it arrives at a vertex of \(V_w(v_1,v_n)\) from \(v_0\), it can continue to \(v_1\) along the negative edges in \(G'\). Thus, proposition 1 is true.

For proposition 2, since there exists a path from \(v_0\) to \(v_i\) in \(G'\), we can reach any vertex of \(V_w(v_0,v_i)\) by simply extending this path along the negative edges. Since \((v_i,v_{i+1})\) is not a cut edge, we can similarly conclude that there exists a path from some vertex of \(V_w(v_0,v_i)\) to some vertex of \(V_w(v_{i+1},v_n)\), and this path also exists in \(G'\). Denote these two vertices by \(v_a\) and \(v_b\); then we have a path from \(v_0\) to \(v_{i+1}\) of the form \(v_0 \rightarrow v_i \rightarrow v_a \rightarrow v_b \rightarrow v_{i+1}\), as in Fig. 12. Therefore, proposition 2 is proved.

Above all, Theorem 9 is established. \(\square\)

References

Archer, A., & Tardos, É. (2007). Frugal path mechanisms. ACM Transactions on Algorithms (TALG), 3(1), 3.

Ausubel, L. M., & Milgrom, P. R.
(2002). Ascending auctions with package bidding. Advances in Theoretical Economics, 1(1), 1–42.

Ausubel, L. M., Milgrom, P., et al. (2006). The lovely but lonely Vickrey auction. Combinatorial Auctions, 17, 22–26.

Bünz, B., Seuken, S., & Lubin, B. (2015). A faster core constraint generation algorithm for combinatorial auctions. In Twenty-ninth AAAI conference on artificial intelligence (pp. 827–834).

Bünz, B., Lubin, B., & Seuken, S. (2018). Designing core-selecting payment rules: A computational search approach. In Proceedings of the 2018 ACM conference on economics and computation (pp. 109–109). ACM.

Cheng, H., Zhang, L., Zhang, Y., Wu, J., & Wang, C. (2018). Optimal constraint collection for core-selecting path mechanism. In Proceedings of the 17th international conference on autonomous agents and multiagent systems (pp. 41–49).

Clarke, E. H. (1971). Multipart pricing of public goods. Public Choice, 11(1), 17–33.

Cramton, P. (2013). Spectrum auction design. Review of Industrial Organization, 42(2), 161–190.

Day, R., & Milgrom, P. (2013). Optimal incentives in core-selecting auctions. In The handbook of market design (Chap. 11, pp. 282–298). OUP Oxford.

Day, R., & Milgrom, P. (2008). Core-selecting package auctions. International Journal of Game Theory, 36(3–4), 393–407.

Day, R. W., & Cramton, P. (2012). Quadratic core-selecting payment rules for combinatorial auctions. Operations Research, 60(3), 588–603.

Day, R. W., & Raghavan, S. (2007). Fair payments for efficient allocations in public sector combinatorial auctions. Management Science, 53(9), 1389–1406.

Du, Y., Sami, R., & Shi, Y. (2010). Path auctions with multiple edge ownership.
Theoretical Computer Science, 411(1), 293–300.

Elkind, E., Sahai, A., & Steiglitz, K. (2004). Frugality in path auctions. In Proceedings of the fifteenth annual ACM-SIAM symposium on discrete algorithms (pp. 701–709). Society for Industrial and Applied Mathematics.

Erdil, A., & Klemperer, P. (2010). A new payment rule for core-selecting package auctions. Journal of the European Economic Association, 8(2–3), 537–547.

Feigenbaum, J., Papadimitriou, C., Sami, R., & Shenker, S. (2005). A BGP-based mechanism for lowest-cost routing. Distributed Computing, 18(1), 61–72.

Grötschel, M., Lovász, L., & Schrijver, A. (1981). The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2), 169–197.

Groves, T., et al. (1973). Incentives in teams. Econometrica, 41(4), 617–631.

Hartline, J., Immorlica, N., Khani, M. R., Lucier, B., & Niazadeh, R. (2018). Fast core pricing for rich advertising auctions. In Proceedings of the 2018 ACM conference on economics and computation (pp. 111–112). ACM.

Hershberger, J., & Suri, S. (2001). Vickrey prices and shortest paths: What is an edge worth? In Proceedings 42nd IEEE symposium on foundations of computer science (pp. 252–259). IEEE.

Karger, D., & Nikolova, E. (2006). On the expected VCG overpayment in large networks. In Proceedings of the 45th IEEE conference on decision and control (pp. 2831–2836).

Karlin, A. R., Kempe, D., & Tamir, T. (2005). Beyond VCG: Frugality of truthful mechanisms. In 46th annual IEEE symposium on foundations of computer science (FOCS'05) (pp. 615–626). IEEE.

Lee, Y. T., Sidford, A., & Wong, S. C. W. (2015). A faster cutting plane method and its implications for combinatorial and convex optimization.
In 56th annual symposium on foundations of computer science (pp. 1049–1065). IEEE.

Lehmann, D., Oćallaghan, L. I., & Shoham, Y. (2002). Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM (JACM), 49(5), 577–602.

Leskovec, J., & Krevl, A. (2014, June). SNAP datasets: Stanford large network dataset collection. http://snap.stanford.edu/data/.

Nisan, N., & Ronen, A. (1999). Algorithmic mechanism design. In Proceedings of the thirty-first annual ACM symposium on theory of computing (pp. 129–140). ACM.

Polymenakos, L., & Bertsekas, D. P. (1994). Parallel shortest path auction algorithms. Parallel Computing, 20(9), 1221–1247.

Rothkopf, M. H., Pekeč, A., & Harstad, R. M. (1998). Computationally manageable combinational auctions. Management Science, 44(8), 1131–1147.

Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance, 16(1), 8–37.

Yokoo, M., Sakurai, Y., & Matsubara, S. (2004). The effect of false-name bids in combinatorial auctions: New fraud in internet auctions. Games and Economic Behavior, 46(1), 174–188.

Zhang, L., Chen, H., Wu, J., Wang, C. J., & Xie, J. (2016). False-name-proof mechanisms for path auctions in social networks. In ECAI (pp. 1485–1492).

Zhu, Y., Li, B., Fu, H., & Li, Z. (2014). Core-selecting secondary spectrum auctions. IEEE Journal on Selected Areas in Communications, 32(11), 2268–2279.

© Springer Science+Business Media, LLC, part of Springer Nature 2020

National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, China

Cheng, H., Zhang, W., Zhang, Y. et al. Auton Agent Multi-Agent Syst (2020) 34: 18.
https://doi.org/10.1007/s10458-019-09440-y. Publisher: Springer US.
Comparison Test Calculator

Compare it with 1/n: the sine function has this weird property that for very small values of x, sin(x) ≈ x. You can see this easily by plotting the graph for y = sin(x) and the graph for y = x over each other: when x -> 0, sin(x) -> x. This also means that for very small values of 1/n (that is, for large n), sin(1/n) ≈ 1/n. So the series sum of sin(1/n) behaves like the harmonic series sum of 1/n, which diverges, and by the limit comparison test sum of sin(1/n) diverges as well. The p-test: the improper integral of 1/x^p from 1 to infinity is convergent exactly when p > 1.
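The comparison of sin(1/n) against 1/n can be checked numerically: the ratio of the two terms should approach 1 as n grows (a small sketch; the sample values of n are arbitrary):

```python
import math

# Limit comparison of a_n = sin(1/n) against b_n = 1/n:
# the ratio a_n / b_n should approach 1 as n grows.
for n in (10, 1_000, 100_000):
    print(n, math.sin(1 / n) / (1 / n))

# The limit is 1 (positive and finite), and sum(1/n) diverges,
# so sum(sin(1/n)) diverges as well.
```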
Now that we've seen how to actually compute improper integrals, we need to address one more topic about them. The limit comparison test: let sum a_n and sum b_n be series with positive terms and suppose the limit of a_n/b_n is c with 0 < c < infinity; then the two series either both converge or both diverge. The nth-term test for divergence: if the sequence {a_n} does not converge to zero, then the series sum a_n diverges. Example (comparison test for improper integrals): the integral of (2 + e^-x)/x from 1 to infinity diverges, since (2 + e^-x)/x > 2/x > 0 and the integral of 2/x from 1 to infinity diverges.
For what values of p does the series sum over n >= 1 of n^p/(2 + n^3) converge? Answer: doing a limit comparison with the p-series sum 1/n^(3-p) shows convergence exactly when 3 - p > 1, that is, when p < 2.
Comparison Test for Improper Integrals. Suppose 0 <= A(x) <= B(x) on the interval of integration; then if the integral of B converges, so does the integral of A, and if the integral of A diverges, so does the integral of B. The same idea works for series: use the comparison test to confirm statements like the following: sum from n=4 of 1/n diverges, so sum from n=4 of 3/n diverges. For the limit comparison version: since the limit you calculated is 1, which is positive, the hypothesis of the test is satisfied, and the correct conclusion is that your two integrals either both converge or both diverge.
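The direct comparison above can be illustrated numerically with a made-up pair of terms (a sketch; pi^2/6 is the known value of sum 1/n^2):

```python
import math

# Direct comparison: 0 <= 1/(n^2 + n) <= 1/n^2 for n >= 1, and sum(1/n^2)
# converges (to pi^2/6), so the smaller series must converge too.
partial_small = sum(1 / (n * n + n) for n in range(1, 100_000))
partial_big = sum(1 / (n * n) for n in range(1, 100_000))
print(partial_small, partial_big, math.pi ** 2 / 6)
# The partial sums of the smaller series stay below those of the larger one,
# which in turn stay below pi^2/6.
```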
Infinite Series: Ratio Test for Convergence. The ratio test may be used to test for convergence of an infinite series. The idea behind the limit comparison test is that if you take a known convergent series and multiply each of its terms by some number, then that new series also converges. Let's try comparing with n^-2: the limit is positive, and sum n^-2 is a convergent p-series, so the series in question does converge.
While the integral test is a nice test, it does force us to do improper integrals, which aren't always easy and, in some cases, may be impossible to evaluate. Suppose we have two series instead: comparing them directly can avoid the integral altogether.
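The relation between a series and its improper integral can be sketched numerically for f(x) = 1/x^2, where the tail of the series is bracketed by the tail integral (a rough sketch; the sum is truncated at 10^6, which perturbs the tail by only about 10^-6):

```python
# Integral test bounds for f(x) = 1/x^2 (positive and decreasing):
#   integral from N to inf of f  <=  sum_{n=N}^inf f(n)  <=  f(N) + that integral
N = 10
tail_sum = sum(1 / (n * n) for n in range(N, 1_000_000))
tail_integral = 1 / N  # integral of x^-2 from N to infinity is 1/N
print(tail_integral, tail_sum, 1 / (N * N) + tail_integral)
```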
The Ratio Test: let sum a_n be a series with positive terms and let L be the limit of a_(n+1)/a_n. If L < 1 the series converges; if L > 1 it diverges; if L = 1 the test is inconclusive. To solve your example, take B_n = sqrt(n), that is, the quotient of n^3 and n^(5/2). Another limit comparison example: since sum 1/n^2.99 is convergent and the limit of (ln(n)/n^3)/(1/n^2.99) is 0, sum ln(n)/n^3 must converge.
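A quick numeric sketch of the ratio test on a made-up example, a_n = 2^n/n!, whose term ratio 2/(n+1) tends to 0 < 1:

```python
import math

# Ratio test for a_n = 2^n / n!: the ratio a_{n+1}/a_n = 2/(n+1) -> 0 < 1,
# so the series sum a_n converges.
def a(n):
    return 2.0 ** n / math.factorial(n)

for n in (1, 10, 100):
    print(n, a(n + 1) / a(n))
```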
In the previous section we saw how to relate a series to an improper integral to determine the convergence of a series. The key picture: if f(x) is larger than g(x), with both nonnegative, then the area under f(x) must also be larger than the area under g(x).
A significance value (P-value) and 95% Confidence Interval (CI) of the difference is reported. T-test online. The key to earning more money is understanding your true market value. You know how many years it's been since you were born, but what about your actual body age?. the signs of the test statistic are flipped). Recruiter's free Salary Comparison Calculator makes it easy for you to compare the average salaries and salary trends of over a thousand jobs. Finding the calculator that will suit your needs without breaking the bank and that is allowed to be used on the test. As in statistical inference for one population parameter, confidence intervals and tests of significance are useful statistical tools for the difference between two population parameters. This test applies when you have two samples that are dependent (paired or matched). Dareboost offers a powerful comparison tool, with unlimited use cases: you can compare your page with your competitors, compare together multiple pages of your website, test and compare the same page throught different contexts, etc. You can use a Z-test if you can do the following two assumptions: the probability of common success is approximate 0. This allows us to approximate the infinite sum by the nth partial sum if necessary, or allows us to compute various quantities of interest in probability. Using the Fisher r-to-z transformation, this page will calculate a value of z that can be applied to assess the significance of the difference between two correlation coefficients, r a and r b, found in two independent samples. Get salary and cost of living data around the world. This is the full calcSD calculator, made for those who really like numbers. Surge Watts 0 (0 kW). Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery. 
Your body's capacity to transport and use oxygen during exercise (VO2 max) is the most precise measure of overall cardiovascular fitness. It enables easy calculation of an appropriate lens focal length, camera viewing angle, IP camera bandwith, storage capacity for records archiving and of other camera system parameters. 547; single sided test). Different business groups and industries have different average rates. With it, you can get information on moving and relocation factors such as population density, weather, school reports, city demographics and more!. Calculator Information iSolutions International Pty Ltd does not warrant the cost data or calculations contained within this Equipment Life Cycle Cost Calculator Spreadsheet. If you're seeing this message, it means we're having trouble loading external resources on our website. This is the value of the test statistic obtained here. Search the world's information, including webpages, images, videos and more. Add your motorcycle specs for Free. The limit comparison test shows that the original series is divergent. Only battery-operated, handheld equipment can be used for testing. It calculates scale and proficiency scores for Listening, Speaking, Reading and Writing, as well as composite scores. While excellent for many things, MRI and Ultrasound are often not the best test and CT scans or x-rays are preferred. Please add [email protected] Dependent data set (paired) A clinical trial tests for the effect of a cholesterol lowering drug gives the following results:. The limit comparison test is an easy way to compare the limit of the terms of one series with the limit of terms of a known series to check for convergence or divergence. You can't use your calculator to communicate with other calculators. A word to the wise. Linerboard constitutes the bulk of the bursting strength of a corrugated sheet. Don't know your pace? Use the pace calculator to find out. 
Online since 1999, we publish thousands of articles, guides, analysis and expert commentary together with our. Learn more about the Statistics Calculator. In order to deal exclusively with the right tail of the distribution, when taking ratios of sample variances from the theorem we should put the larger variance in the numerator of. This interactive calculator yields the result of a test of the equality of two correlation coefficients obtained from the same sample, with the two correlations sharing one variable in common. To enter a career, please start typing in the box and select from the list. Plan your route, estimate fuel costs, and compare vehicles!. The number of degrees of freedom will be smaller as in the student's t-test. The comparison test provides a way to use the convergence of a series we know to help us determine the convergence of a new series. It's designed to help you estimate and compare possible future benefits using a hypothetical illustration and is based on the information you provide. It also takes into account something known as opportunity cost — for example, the return you could have. The BAC calculator and information generated from it is not intended to replace the medical advice of your doctor or health care provider and should not be relied upon; nor do the BAC calculator or information generated from it constitute legal advice. Take me to the Weighted Average Grade Calculator. Find out what your expected return is depending on your hash rate and electricity cost. Numbeo is the world's largest database of user contributed data about cities and countries worldwide. Retirement Calculator. The means are from two independent sample or from two groups in the same sample. Please supply as much information as possible. Like the integral test, the comparison test can be used to show both convergence and divergence. 
Numbeo provides current and timely information on world living conditions including cost of living, housing indicators, health care, traffic, crime and pollution. Limit Comparison Test. Albert's AP US History score calculator uses the official scoring worksheets of previously released exams by the College Board, making our score calculators the most accurate and up-to-date. Text Compare! is an online diff tool that can find the difference between two texts. There is no one right way to do this, but one possible answer is the following:. Therefore, there is a 33. The Limit Comparison Test (LCT) is used to find out if an infinite series of numbers converges (Settles on a certain number) or diverges. FINRA Exams - Test Comparison Prometric. Just input any two tires, metric or standard, and click the button. It's an indispensable tool for report writers who need a quick test to compare means or percents. When to Use a Tukey Quick Test. So, in general case it cannot be used to test a one sided alternative that subdiagonal frequencies are larger/smaller than superdiagonal frequencies. Add this widget to your Web site to let anyone calculate their BMI. Your body's capacity to transport and use oxygen during exercise (VO2 max) is the most precise measure of overall cardiovascular fitness. Please select the null and alternative hypotheses, type the sample data and the significance level, and the results of the t-test for two dependent samples will be displayed for you: Ho:. "Use the string length calculator to for your convenience & to save time!" Feel free to test the string length calculator with this string of text! Where can a character count tool be used? In various professions, it can be helpful to analyze the number of characters in a string of text or words. There is no one right way to do this, but one possible answer is the following:. Bitcoin Mining Calculator and Hardware Comparison. 
For example, if your independent variable was "brand of coffee" your levels might be Starbucks, Peets and Trader Joe's. If you're seeing this message, it means we're having trouble loading external resources on our website. The Z-test is especially useful in the case of time series data where you might want to assess a "before and after" comparison of some system to see if there has been effect. The wind chill calculator only works for temperatures at or below 50 ° F and wind speeds above 3 mph. Without them it would have been almost impossible to decide on the convergence of this integral. This RAID calculator computes array characteristics given the disk capacity, the number of disks, and the array type. Hash rate:. Computational notes. MedCalc uses the "N-1" Chi-squared test as recommended by Campbell (2007) and Richardson (2011). An online exposure calculator. Z-test of proportions: Tests the difference between two proportions. Take me to the Weighted Average Grade Calculator. Free improper integral calculator - solve improper integrals with all the steps. Let and be two series with positive terms and suppose If is finite and , then the two series both converge or Online Integral Calculator ». Use the direct comparison test to determine whether series converge or diverge. Try our easy-to-use refinance calculator and see if you could save by refinancing. A weight vector is called efficient if no other weight vector is at least as good in approximating the elements of the pairwise comparison matrix, and strictly better in at least one position. Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery. The Ratio Test: Let be a positive series such that for any. Calculation for the Sobel test: An interactive calculation tool for mediation tests Kristopher J. 
A1C levels explanation and A1C calculator Your A1C test result (also known as HbA1c or glycated hemoglobin) can be a good general gauge of your diabetes control, because it provides an average blood glucose level over the past few months. The degrees of freedom is equal to the total number of observations minus the number of means. While excellent for many things, MRI and Ultrasound are often not the best test and CT scans or x-rays are preferred. It may be one of the most useful tests for convergence. Fine tune the data based on location, education, experience, and many more filters to find the average salary for your criteria. Compare loans, calculate costs, and more. BONE CALCULATORS, APPLETS, ANIMATIONS & SIMULATIONS BONE FRACTURE CALCULATORS, APPLETS, ANIMATIONS & SIMULATIONS FRACTURE RISK CALCULATOR - S. A worked example of carrying out the Paired Student's t-test can be found here. If #0 leq a_n leq b_n# and #sum b_n# converges, then #sum a_n# also converges. Pregnancy tests on the market today can vary in how early they can detect the pregnancy hormone, hCG (Human Chorionic Gonadotropin). 8% chance that the average breaking strength in the test will be no more than 19,800 pounds. The limit comparison test is an easy way to compare the limit of the terms of one series with the limit of terms of a known series to check for convergence or divergence. Sale Discount Calculator - Percent Off Mortgage Loan Calculator - Finance Fraction Calculator - Simplify Reduce Engine Motor Horsepower Calculator Earned Value Project Management Present Worth Calculator - Finance Constant Acceleration Motion Physics Statistics Equations Formulas Weight Loss Diet Calculator Body Mass Index BMI Calculator Light. /* Spss Code for T-test of indpendent groups */ data list / group 1-1 scr 3-4. If you are applying for permanent residency in Canada, it is likely that you will be required to present your test result as part of your application. 
This p-value calculator calculates the p-value based on the test statistic, the sample size, the type of hypothesis testing (left tail, right tail, or two-tail), and the significance level. One-Sample t-test: Tests whether the mean of a single variable differs from a specified constant. Add your motorcycle specs for Free. Knowing the minimum service tier will allow you to get the performance you need while minimizing your costs. It calculates scale and proficiency scores for Listening, Speaking, Reading and Writing, as well as composite scores. However unlike other numerious love calculators, we put high emphasis on the quality and accuracy of our results. It can convert seconds to minutes, seconds to hours, hours to minutes, or virtually anything else. Without them it would have been almost impossible to decide on the convergence of this integral. test=contrast - Designates the type of test for which the power will be computed. t test calculator A t test compares the means of two groups. Hi Saronda, Thanks for 1 last update 2019/11/05 getting in touch with finder. ) Test colors against the WCAG 2. Select a rectangular area around a face when there are more than one face in the uploaded image. Calculate and compare protein, energy and other components for different mixes of livestock feeds. The nested model is the more restrictive model with more degrees of freedom than the comparison model. Adult BMI (Body Mass Index) Calculator. How to Compare Fractions. Theorem 1 of F Distribution can be used to test whether the variances of two populations are equal, using the Excel functions and tools which follows. Plan your route, estimate fuel costs, and compare vehicles!. Know your value Get paid more money. View a breakdown of different pregnancy test results for each brand by day past ovulation. Instead we might only be interested in whether the integral is convergent or divergent. Guidance on this can be found in your calculator's instruction booklet. 
Linerboard constitutes the bulk of the bursting strength of a corrugated sheet. Full Calculator. For home buyers and real estate professionals, we have mortgage costs comparison guides and a mortgage payment calculator to help compare costs associated with purchasing a new home. 99 is convergent and the limit is 0, ln(n)/n^3 must diverge). 11:47 per mile is the average pace found in our sample data. College admissions calculator. If you can run Windows applications you might prefer a nicer and more compact version of these calculators that ships with my bike gear calculator or the additional functionality of my power calculator. What is Dunnett's Test? Dunnett's Test (also called Dunnett's Method or Dunnett's Multiple Comparison) compares means from several experimental groups against a control group mean to see is there is a difference. Do you have special connections? Is there a chance to build a long-lasting relationship? Are you in out-of-box relationships? How prosperous your relationship are? Is it a sexual attraction or hidden obsession?. BYJU'S online infinite series calculator tool makes the calculations faster and easier where it displays the value in a fraction of seconds. News: Release of Bayes Factor Package We have recently released the BayesFactor package for R. Compare the crime rates of any two cities by entering the city names into the boxes above. Running Calculator. With The Love Calculator you can calculate the probability of a successful relationship between two people. The follow-up post-hoc Tukey HSD multiple comparison part of this calculator is based on the formulae and procedures at the NIST Engineering Statistics Handbook page on Tukey's method. If you're behind a web filter, please make sure that the domains *. I'm writing about this because as a fertility counselor, I know that many women try to become pregnant using an ovulation calculator. Use the limit comparison test to determine whether converges or diverges. 
Comparison of proportions free online statistical calculator Comparison test calculator. We're sorry but client doesn't work properly without JavaScript enabled. View the results. then get out the calculator as we fill. List your cycle hp torque 1/4 mile time, quick cycle hp reference. To use the comparison test we must first have a good idea as to convergence or divergence and pick the sequence for comparison accordingly. In a points-based grading system, the student receives points for completed assignments, and the course grade is determined by the number of points earned out of the number of points possible. Thus in 21 pairings, a P=. com is an independent, advertising-supported publisher and comparison service. View the results. Assume that. Find more Mathematics widgets in Wolfram|Alpha. If several correlations have been retrieved from the same sample, this dependence within the data can be used to increase the power of the significance test. It should not be relied upon to calculate exact taxes, payroll or other financial data. It is slightly less powerful than Student's t–test when the standard deviations are equal, but it can be much more accurate when the standard deviations are very unequal. Section 1-9 : Comparison Test for Improper Integrals. Note: Mensa considers that scores from after January 31, 1994, "No longer correlate with an IQ test. If #0 leq a_n leq b_n# and #sum b_n# converges, then #sum a_n# also converges. Assuming that the data in mtcars follows the normal distribution, find the 95% confidence interval estimate of the difference between the mean gas mileage of manual and automatic transmissions. The calculated appliance heat input will be displayed in both Gross and Net. With only 6 subjects the statistical power of the test won't be very high 4. It is very important that you use the correct values for a reliable estimate. 
A weight vector is called efficient if no other weight vector is at least as good in approximating the elements of the pairwise comparison matrix, and strictly better in at least one position. Box Performance. The personalized report will suggest optimum performance computer components that are hardware compatible, at the budget you set. Hence the Comparison test implies that the improper integral is convergent. Even without a built-in option, is is so easy to set up a spreadsheet to do a paired t test that it may not be worth the expense and effort to buy and learn a dedicated statistics software program, unless more complicated statistics are needed. Compare reviews & prices then book online!. (Note that BCT is not stacking strength, which is the maximum load a box can handle throughout the distribution cycle. A significance value (P-value) and 95% Confidence Interval (CI) of the difference is reported. (Direct Comparison) Let and , be series with positive terms. The percentage of compatibility is calculated without taking into account the complementarity of stars of the Chinese horoscope (see wheel charts) and energies (the pentagon), sometimes taking this into account compatibility varies greatly, you can look at it yourself, it's simply to see if what is missing from one the other has it. You know how many years it's been since you were born, but what about your actual body age?. The two-sample t-test is a hypothesis test for answering questions about the mean where the data are collected from two random samples of independent observations, each from an underlying normal distribution: The steps of conducting a two-sample t-test are quite similar to those of the one-sample test. The means are from two independent sample or from two groups in the same sample. In the previous section we saw how to relate a series to an improper integral to determine the convergence of a series. The output is confidence intervals. 
IME TNT Equivalence Calculator : The term "TNT equivalence" is a normalization technique for equating properties of an explosive to TNT, the standard. HPLC Method Transfer Calculator Calculates conditions for transfer of an isocratic or gradient method from one HPLC column to another. My love calculator, just like any other love calculator, tries to give you a score on your love compatibility with another person. Learn more about the Statistics Calculator. This calculator will determine body fat percentage and whether the calculated body fat percentage is in compliance with the army recruitment standard, the standard after entry into the army, or the most stringent standard of being in compliance with the. Estimate your profits with MinerGate's cryptocurrency mining calculator for Ethash, Equihash, Cryptonote, CryptoNight and Scrypt algorithms. It is also possible to perform bit shift operations on integral types. How much will college cost and what schools are affordable? Our tuition rankings, student loan calculators and college saving planning tools provide you with the answers. IELTS - General Training is accepted by Citizenship and Immigration Canada (CIC) as evidence of English-language proficiency. Enter your Age, Sport, Experience and then select the "Calculate" button. Located 45 miles south of Salt Lake City, Utah, Brigham Young University is home to more than 33,000 full-time undergraduate and graduate students and is sponsored by the Church of Jesus Christ of Latter-day Saints. Ensure that the appliance will run for the duration of the test. Provide concentration and contamination test results; You can use our CBD dosage calculator to compare products purely based on price per milligram (mg) of CBD, allowing you to make a more informed and the most economical choice. Before we state the theorem, let's do a straight forward example.
CommonCrawl
Using the theory of planned behavior to explain birth in health facility intention among expecting couples in a rural setting, Rukwa, Tanzania: a cross-sectional survey

Fabiola V. Moshi (ORCID: orcid.org/0000-0001-8829-2746)1, Stephen M. Kibusi2 & Flora Fabian3

Reproductive Health volume 17, Article number: 2 (2020)

According to the theory of planned behavior, an intention to carry out a certain behavior facilitates action. In the context of birth in health facility, the intention to use a health facility for childbirth may help ensure better maternal and neonatal survival. Little is known about the influence of the domains of the theory of planned behavior on birth in health facility intention. The study aimed to determine the influence of the domains of the theory of planned behavior on birth in health facility intention among expecting couples in the rural Southern Highlands of Tanzania.

A community-based cross-sectional study targeting pregnant women and their partners was performed from June until October 2017. A three-stage probability sampling technique was employed to obtain a sample of 546 couples (a total of 1092 study participants). A structured questionnaire based upon the theory of planned behavior was used. The questionnaire explored three main domains of birth in health facility intention: 1) attitudes towards maternal services utilization, 2) perceived subjective norms towards maternal services utilization, and 3) perceived behavior control towards maternal services utilization.

The vast majority of study participants had birth in health facility intention: 499 (91.2%) of the pregnant women and 488 (89.7%) of their male partners. Only perceived subjective norms showed a significantly higher mean score among pregnant women (M = 30.21, SD = 3.928) compared to their male partners (M = 29.72, SD = 4.349), t(1090) = −1.965, 95% CI: −0.985 to −0.002; p < 0.049.
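The reported comparison can be re-derived from the summary statistics alone using a pooled-variance two-sample t statistic; the small discrepancy from the published t = −1.965 comes from rounding in the published means and standard deviations. A minimal sketch (not the authors' code):

```python
import math

def two_sample_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Pooled-variance (equal-variance) two-sample t statistic
    computed from summary statistics only."""
    df = n1 + n2 - 2
    # pooled variance weights each group's variance by its degrees of freedom
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Male partners (M = 29.72, SD = 4.349) vs pregnant women (M = 30.21, SD = 3.928),
# 546 participants per group as reported in the abstract
t, df = two_sample_t_from_stats(29.72, 4.349, 546, 30.21, 3.928, 546)
print(round(t, 3), df)  # close to the published t(1090) = -1.965
```

With the rounded inputs this yields t ≈ −1.95 on 1090 degrees of freedom, consistent with the published value.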
After adjusting for confounders, having no intention to use a health facility for childbirth decreased as the attitude [pregnant women (B = −0.091; p = 0.453); male partners (B = −0.084; p = 0.489)] and perceived behavior control [pregnant women (B = −0.138; p = 0.244); male partners (B = −0.155; p = 0.205)] scores increased, among both pregnant women and their male partners.

Although the majority of study respondents had birth in health facility intention, the likelihood of this intention translating into practice is weak, because none of the domains of the theory of planned behavior showed a significant influence. Innovative interventional strategies geared towards improving the domains of intention are highly recommended in order to elicit a strong intention to use health facilities for childbirth.

According to the theory of planned behavior, an individual's intention to engage in a certain behavior facilitates the practice of that behavior. Individuals are much more likely to intend to adopt healthy behaviors (such as use of a health facility for childbirth) if they have positive attitudes about the behaviors, believe that perceived subjective norms (social pressure) are favorable towards those behaviors, and believe they are able to perform those behaviors correctly. A person's intentions will also be stronger when all three of the above hold than when only one does. Research demonstrates that intentions matter: the stronger a person's intention to use a health facility for childbirth, the more likely that person will actually perform that behavior. However, it is important to remember that many outside factors and restrictions can prevent an individual from performing a behavior, even when they intend to do so. This study used the theory of planned behavior to explain birth in health facility intention among expecting couples.
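Assuming the B values above are log-odds coefficients from a logistic regression (the outcome "no intention" is binary; the excerpt does not state the model type, so this is an interpretive sketch rather than the authors' analysis), each coefficient can be converted to an odds ratio with exp(B); values below 1 correspond to decreasing odds of having no intention as the score rises:

```python
import math

# Adjusted coefficients reported above; outcome = no intention to use a
# health facility for childbirth (assumed logistic-regression log-odds)
coefs = {
    "attitude (women)": -0.091,
    "attitude (men)": -0.084,
    "perceived behavior control (women)": -0.138,
    "perceived behavior control (men)": -0.155,
}

# OR = exp(B): the multiplicative change in the odds of "no intention"
# per one-point increase in the predictor score
odds_ratios = {name: math.exp(b) for name, b in coefs.items()}
for name, odds_ratio in odds_ratios.items():
    print(f"{name}: OR = {odds_ratio:.3f} per one-point score increase")
```

All four odds ratios fall below 1, matching the reported direction of effect, although none of the associated p-values reach significance.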
The study tested the association between the predictors of intention (attitude, perceived subjective norms, and perceived behavior control), as postulated in the theory of planned behavior, and the intention itself. The majority of study respondents had the intention to use a health facility for childbirth, and this intention was higher among pregnant women than among their male partners. The reason for the difference could be rooted in traditional gender roles and responsibilities: male partners are responsible for providing financial support, and they may find it more expensive for their female partners to use a health facility for childbirth than a home birth, where they are not required to pay for transport and a staying allowance. When other factors were controlled, only perceived social pressure (perceived subjective norms) significantly influenced the intention to use a health facility for childbirth among pregnant women; among male partners, only perceived behavior control showed a significant influence on birth in health facility intention. In terms of the theory of planned behavior, birth in health facility intention was weak among both pregnant women and their male partners, because only one predictor of intention was significant for each.

Most life-threatening complications which occur during childbirth are unpredictable, which necessitates the use of skilled birth attendants [1]. The use of skilled birth attendants in developing countries has increased from 55% in 1990 to 65% in 2010. However, it remains as low as 45% in Sub-Saharan Africa and 49% in South Asia [2].
The average use of skilled birth attendance in Tanzania in the period 2010–2015 was 64%, which is the same as in Rukwa Region [3]. It is estimated that 293,300 maternal deaths occurred worldwide in 2013 [4]. The major causes of these deaths were maternal hemorrhage (44,200 deaths), complications of abortion (43,700 deaths), maternal hypertensive disorders (29,300 deaths), maternal sepsis and other maternal infections (23,800 deaths), and obstructed labor (18,800 deaths) [4]. The majority of maternal deaths occurred in developing countries, which accounted for 230,000 maternal deaths in 2013. Most of these deaths occurred in Sub-Saharan Africa (62%) and South Asia (24%), which together account for 86% of maternal mortality worldwide [5]. The estimated maternal mortality ratio in Tanzania is 556/100,000 [3], meaning that for every 1000 live births in Tanzania about 5 women die due to pregnancy-related causes, which amounts to 8000 maternal deaths per year. The maternal mortality ratio varies within Tanzania, with the highest, 860 deaths per 100,000 live births [6], in Rukwa Region, where only 31.1% of deliveries were assisted by skilled birth attendants [7]. The risk of a woman dying due to maternal causes in developing countries is high: one woman in every 76 deliveries [8]. Comparing the risk in Tanzania, where one woman dies in every 44 deliveries, with the risk in Poland, where one woman dies in every 22,100 deliveries [9], Tanzania ranks among the countries with the highest maternal mortality rates worldwide [10]. However, maternal health is more than the survival of pregnant women and mothers. Studies have found that for each woman who loses her life in the course of bringing another life, there are 20 others who suffer pregnancy-related illness or experience other severe consequences [11]. Such pregnancy-related illnesses include severe acute maternal illnesses and chronic illnesses.
The severe acute illnesses are life-threatening complications, such as organ failure or conditions requiring lifesaving surgery, which necessitate timely emergency obstetric care in a hospital for survival [12]. The other type of maternal illness is chronic illness: conditions caused by the birthing process which, while not life-threatening, greatly impair the quality of life, such as fistula, uterine prolapse, and dyspareunia. Other disabilities, also called postpartum maternal morbidities, include urinary incontinence, hernias, hemorrhoids, breast problems, and postpartum depression [12]. Similar to maternal survival, the survival of neonates depends very much on investment in maternal care, especially access to skilled antenatal care, delivery, and early postnatal services [13]. This is because 36% of all newborn deaths are due to severe infections, which necessitates identification and treatment of infections during pregnancy as well as clean delivery practices [13]. Also, asphyxia (difficulty in breathing after birth) causes 23% of newborn deaths and can largely be prevented by improved care during labor and delivery [13]. The burden of maternal and neonatal mortality in these low-resource settings is strongly linked to the use of health facilities where there are skilled birth attendants, and the practice (use of a health facility for childbirth) is strongly shaped by the intention, formed during pregnancy, to use a health facility. According to the theory of planned behavior, an individual will have the intention to perform a behavior when they evaluate it positively, believe that important others think they should perform it, and perceive it to be within their own control [14]. The intention to use a health facility for childbirth is influenced by the way an individual evaluates birth in a health facility.
If they evaluate it positively, believe that important others think it is worth doing, and perceive that they can do it, then they will have the intention to use a health facility for childbirth. An attitude toward a behavior refers to the degree to which a person has positive or negative feelings about the behavior of interest; it entails a consideration of the outcomes of performing the behavior [14]. A subjective norm refers to the belief about whether important others think he or she should perform the behavior; it relates to a person's perception of the social environment surrounding the behavior [14]. Perceived behavior control refers to the individual's perception of the extent to which performance of the behavior is easy or difficult [14] (see Fig. 1).

Fig. 1 Theoretical model of birth in health facility intention

Previous studies have pointed out that among the causes of low use of health facilities for childbirth are sociodemographic characteristics of expecting mothers (level of education, place of residence, parity, etc.) [15], a low level of birth preparedness, and low male involvement in birth preparedness [16,17,18]. Countries with good maternal and infant mortality indicators have pregnancy-related complications identified and managed early. Little was known about the influence of the three domains of the theory of planned behavior on birth in health facility intention. Therefore, this study reports the findings on the influence of the three domains of the theory of planned behavior on the intention to use a health facility for childbirth.

Study design and setting

A community-based cross-sectional study was conducted in Rukwa Region from June 1st to October 30th, 2017, among expecting couples from 45 villages in Rukwa Region in the Southern Highlands of Tanzania. The region had a population of 1,004,539 people: 487,311 males and 517,228 females. The forecast for 2014 was 1,076,087 persons, with a growth rate of 3.5%.
The region has the lowest mean age at marriage, with males marrying at 23.3 years and females at 19.9 years, and a fertility rate of 7.3 [19]. Sampling method and sample size Sampling technique Two districts with the lowest rates of facility delivery in the Rukwa Region (Sumbawanga Rural District and Kalambo District) were purposively selected from the four districts of the region. A three-stage cluster sampling technique was used to obtain study participants. In the first stage, all wards in each district (12 wards of Sumbawanga Rural District and 17 wards of Kalambo District) were listed, and using the lottery method of simple random sampling, five wards from Sumbawanga Rural District and 10 from Kalambo District were picked. In the second stage, all villages in the selected wards were listed and another simple random sampling was conducted to select 15 villages from Sumbawanga Rural District and 30 villages from Kalambo District. The third stage was a systematic sampling used to obtain households with pregnant women of 24 weeks gestation or less living with a male partner. At each visited household, the female partner was interviewed about the signs and symptoms of pregnancy. A female partner who had missed her period for 2 months was requested to complete a pregnancy test. Those with positive tests who gave consent to participate were enrolled in the study. If a selected household had no eligible participants, the household was skipped and the researchers moved to the next household. The sample size for the couples involved in the study was calculated using the following formula [20]: $$ n=\frac{\left\{Z_{\alpha}\sqrt{\pi_{0}\left(1-\pi_{0}\right)}+Z_{\beta}\sqrt{\pi_{1}\left(1-\pi_{1}\right)}\right\}^{2}}{\left(\pi_{1}-\pi_{0}\right)^{2}} $$ n = maximum sample size. Z_α = standard normal deviate (1.96) at the 95% confidence level for this study.
Z_β = standard normal deviate (0.84) for the power of demonstrating a statistically significant difference between the two groups at 90%. π_0 = proportion at pre-intervention (use of skilled delivery in Rukwa Region, 30.1%) [7]. π_1 = proportion after intervention (proportion of families which would access a skilled birth attendant, 51%) [7]. $$ n=\frac{\left\{1.96\sqrt{0.301\left(1-0.301\right)}+0.84\sqrt{0.51\left(1-0.51\right)}\right\}^{2}}{\left(0.51-0.301\right)^{2}} $$ n = 162 couples + 10% = 180. A total of 546 couples were included in this study. Data collection procedure Data were collected using interviewer-administered questionnaires. Four trained research assistants (two from each district) were recruited, trained, and participated in data collection. A questionnaire about the domains of the Theory of Planned Behavior on birth in health facility intention was developed using the Theory of Planned Behavior. A pilot survey was done among ten (10) expecting couples from a village not selected for the main study to test the reliability of the research tool. The questionnaire had two parts: i) socio-demographic characteristics; ii) a Likert scale on which respondents could answer strongly agree, agree, neutral, disagree, or strongly disagree. The Likert responses were scored as strongly agree = 5, agree = 4, neutral = 3, disagree = 2, and strongly disagree = 1. The Likert scale statements had three subparts, which focused on understanding of: i) attitudes towards maternal services utilization, ii) perceived subjective norms towards maternal services utilization, and iii) perceived behavior control towards maternal services utilization. Likert scale statements in the questionnaire were worded differently for male and female respondents.
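The Likert scoring scheme above (strongly agree = 5 through strongly disagree = 1, summed over the items of each subpart to give a domain score) can be sketched as follows. This is an illustrative sketch only: the `domain_score` helper and the example responses are hypothetical, not taken from the study data.

```python
# Scoring scheme stated in the text: strongly agree = 5 ... strongly disagree = 1.
LIKERT_SCORES = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def domain_score(responses):
    """Sum item scores to give a domain score (e.g. attitude over 7 items)."""
    return sum(LIKERT_SCORES[r.lower()] for r in responses)

# A hypothetical respondent answering the 7 attitude items:
responses = ["agree", "strongly agree", "agree", "neutral",
             "agree", "strongly agree", "agree"]
attitude = domain_score(responses)  # 4+5+4+3+4+5+4 = 29
```

Treating each domain as such a summed score is what allows the later analysis to handle the domains as continuous variables.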
Attitudes towards maternal services utilization among pregnant women were assessed using the following statements: "If I attend antenatal clinic four or more times, I am doing a good thing"; "If I get vaccinated against tetanus toxoid, I am doing a good thing"; "If I test for HIV during antenatal visits, I am doing a good thing"; "If I test for syphilis during pregnancy, I am doing a good thing"; "If I am attended by a skilled birth attendant, I am doing a good thing"; and "If I attend for skilled postnatal services, I am doing a good thing". Attitudes towards maternal services utilization among male respondents were assessed using the following Likert statements: "If my wife attends antenatal clinic four or more times, she is doing a good thing"; "If she gets vaccinated against tetanus toxoid, she is doing a good thing"; "If she tests for HIV during antenatal visits, she is doing a good thing"; "If she tests for syphilis during pregnancy, she is doing a good thing"; "If she is attended by a skilled birth attendant, she is doing a good thing"; "If she attends for skilled postnatal services seven days after delivery, she is doing a good thing"; and "If she utilizes the available maternal services, she will ensure a good birth outcome". Perceived subjective norms towards maternal services utilization among pregnant women were assessed using the following Likert scale statements: "Important people to me think I should attend four or more antenatal visits"; "Important people to me think I should get vaccinated against tetanus"; "Important people to me think I should test for HIV during pregnancy"; "Important people to me think I should get tested for syphilis during pregnancy"; "Important people to me think I should use skilled birth attendants during childbirth"; "Important people to me think I should attend postnatal care seven days after delivery"; and "When it comes to maternal services utilization, I will do what the health care provider advises me to do".
Perceived subjective norms among male partners were assessed using the following Likert scale statements: "Important people to me think my wife should attend four or more antenatal visits"; "Important people to me think she should get vaccinated against tetanus"; "Important people to me think she should test for HIV during pregnancy"; "Important people to me think she should get tested for syphilis during pregnancy"; "Important people to me think she should use skilled birth attendants during childbirth"; "Important people to me think she should attend postnatal care seven days after delivery"; and "When it comes to maternal services utilization, she will do what the health care provider advises her to do". Perceived behavior control towards maternal services utilization among pregnant women was assessed using the following Likert statements: "For me to attend four or more antenatal clinics is simple and I can do it"; "For me to get vaccinated against tetanus is simple and I can do it"; "For me to be tested for HIV is trouble free and I can do it"; "For me to be screened for STI such as syphilis is trouble free and I can do it"; "For me to use skilled services for delivery is simple and I can do it"; "For me to attend for postnatal checkups seven days after delivery is trouble free and I can do it"; and "For me to use available maternal health services is simple and I can do so". Statements used among male partners were: "For my wife to attend four or more antenatal clinics is simple and she can do it"; "For my wife to get vaccinated against tetanus is simple and she can do it"; "For my wife to be tested for HIV is trouble free and she can do it"; "For my wife to be screened for STI such as syphilis is trouble free and she can do it"; "For my wife to use skilled services for delivery is simple and she can do it"; "For my wife to attend for postnatal checkups seven days after delivery is trouble free and she can do it"; and "For my wife to use the available maternal
health services is simple and she can do so". The data were checked for completeness and consistency and entered into a computer using IBM SPSS version 23. The analysis was done for each group (pregnant women and their male partners) separately. Scores on the domains of the Theory of Planned Behavior were treated as continuous scores. Descriptive statistics on the continuous variables were used to generate mean scores and standard deviations. An independent t-test was used to compare mean scores between pregnant women and their male partners. For categorical variables, descriptive analysis was used to generate frequency distributions, and cross tabulation was used to describe the characteristics of the study participants. A chi-square test was used to test the relationship between socio-demographic characteristics and intention. All variables with a p-value of 0.2 and below were entered into bivariate and multivariate logistic regression, and a p-value below 0.05 was termed a significant association. Socio-demographic characteristics A total of 546 couples were included in the study, with a response rate of 100%. The sample included 546 pregnant women (with gestational age of 24 weeks and below) and their partners. The mean age among the pregnant women was 25.57 years (SD = 6.810) and the mean age of their partners was 30.65 years (SD = 7.726). The majority of the couples were married (390, 71.4%), monogamous (469, 85.9%), lived on less than 1 dollar per day (382, 70.0%), and received their basic obstetric care services from dispensaries (452, 82.8%). Ninety-five percent of the cohort had completed primary school or less (Table 1).
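The chi-square step of the analysis described above (testing the relationship between a categorical socio-demographic characteristic and intention) amounts to a Pearson chi-square test on a contingency table. A minimal sketch, using a hypothetical 2x2 table (the counts below are invented for illustration; they are not the study's data):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (e.g. mobile-phone ownership vs. intention), no continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = owns a phone (yes/no), cols = intention (yes/no).
chi2 = chi_square_2x2([(30, 20), (20, 30)])
```

For a 2x2 table the statistic has 1 degree of freedom, so a value above the 3.84 critical value corresponds to p < 0.05; here chi2 = 4.0, which would be significant at that level.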
Table 1 Socio-demographic characteristics of respondents Mean scores of domains of theory of planned behavior compared between pregnant women and their male partners When attitude mean scores were compared, pregnant women had a slightly higher attitudinal mean score (M = 26.16, SD = 3.142) than their male partners (M = 26.09, SD = 3.135), but the difference was not statistically significant, t(1090) = 0.366, 95% CI = -0.303 to 0.442, p > 0.05. On perceived subjective norms mean scores, pregnant women had a significantly higher mean score (M = 30.21, SD = 3.928) compared to their male partners (M = 29.72, SD = 4.349), t(1090) = -1.965, 95% CI = -0.985 to -0.002, p = 0.049. On perceived behavior control, male partners had a higher mean score (M = 30.47, SD = 3.668) compared to pregnant women (M = 30.45, SD = 3.771), t(1090) = 0.073, 95% CI = -0.425 to 0.225, p = 0.942. Birth in health facility intention among pregnant women and their male partners The majority of study participants, 499 (91.2%) of pregnant women and 488 (89.7%) of their partners, had the intention to use a health facility for childbirth. The relationship between socio-demographic characteristics and intention to use a health facility for childbirth among pregnant women and their male partners Among pregnant women, the characteristics of the nearby health facility (p = 0.024) and owning a mobile phone (p = 0.018) were the variables which significantly influenced birth in health facility intention (Tables 2 and 3).
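The independent t-tests above can be checked from the reported summary statistics alone. A sketch of the pooled-variance t statistic, applied to the perceived-subjective-norms comparison (means 30.21 vs. 29.72, SDs 3.928 and 4.349, n = 546 per group); up to rounding of the reported SDs, it reproduces the magnitude of the reported t(1090) = -1.965:

```python
import math

def pooled_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic (pooled variance) from summary stats.
    Returns (t, degrees of freedom)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))               # standard error of the difference
    return (m1 - m2) / se, df

# Perceived subjective norms: pregnant women vs. male partners (values from the text).
t, df = pooled_t_from_stats(30.21, 3.928, 546, 29.72, 4.349, 546)
```

The sign of t depends only on which group is listed first; the reported negative value corresponds to subtracting the women's mean from the men's.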
Table 2 Distribution of participants by birth in health facility intention and factors affecting their intention (chi-square) among pregnant women Table 3 Distribution of participants by birth in health facility intention and factors affecting their intention (chi-square) among male partners The association between domains of theory of planned behavior and no intention to use health facility for childbirth In crude odds ratios among pregnant women, two domains (attitude and perceived behavior control) showed a significant association with no intention to use a health facility for childbirth. After controlling for confounders (owning a mobile phone, characteristics of the nearby health facility, and health insurance coverage), no intention to use a health facility for childbirth decreased as the attitude (B = -0.091; p = 0.453) and perceived behavior control (B = -0.138; p = 0.244) scores increased among pregnant women; nevertheless, the relationship was not statistically significant (Table 4). Table 4 The association between domains of theory of planned behavior and no intention to use health facility for childbirth among pregnant women Among male respondents, there was no significant association between the domains and no intention to use a health facility for childbirth. After controlling for confounders among male partners (age group, ethnic group, education level, and health insurance coverage), only attitude (B = -0.084; p = 0.489) and perceived behavior control (B = -0.155; p = 0.205) showed decreasing odds of no intention to use a health facility for childbirth as the scores increased (Table 5).
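The direction of the associations above follows from how logistic-regression coefficients map to odds ratios: OR = exp(B), so a negative B means the odds of the outcome (here, no intention) fall as the score rises. A short sketch using the coefficients reported for pregnant women:

```python
import math

def odds_ratio(b):
    """Convert a logistic-regression coefficient B to an odds ratio, OR = exp(B)."""
    return math.exp(b)

# Coefficients reported in the adjusted model for pregnant women (no-intention outcome):
or_attitude = odds_ratio(-0.091)  # each one-point rise in attitude score scales the odds by ~0.91
or_pbc = odds_ratio(-0.138)       # each one-point rise in perceived behavior control scales them by ~0.87
```

An OR below 1 for each extra scale point indicates decreasing odds of having no intention, matching the interpretation in the text, although neither coefficient reached statistical significance.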
Table 5 The association between domains of theory of planned behavior and no intention to use health facility for childbirth among male partners An individual's intention to perform a certain behavior is influenced by the individual's attitude towards the behavior, the perceived subjective norms, and the perceived behavior control. In this study the behavior of interest was birth in a health facility. According to the Theory of Planned Behavior, birth in health facility intention is influenced by the attitude the individual has about birth in a health facility, the perceived subjective norms this particular individual holds, and the perceived control over performing the behavior [14]. This study found that the majority of study respondents (91.2% of pregnant women and 89.7% of their male partners) had birth in health facility intention. Similar findings have been reported by Creanga et al. [21]. The intention among male partners was lower than that of their female partners. The reason could be that male partners avoid the financial implications associated with health facility childbirth. Avoidance of financial responsibility may be attributed to gender norms which influence men not to prioritize access to skilled birth attendance, as pregnancy and childbirth are perceived to be women's affairs [22]. Access to maternal and child health services in Tanzania is free [23], but there are some hidden costs associated with the decision to choose a health facility for childbirth. These include transport costs, the cost of procuring birthing items, and the cost of staying in a health facility. Male partners may opt for home childbirth assisted by unskilled attendants to avoid these financial implications, hence the lower proportion with intention to use a health facility for childbirth. Low risk perception towards pregnancy and childbirth could be another reason why some male partners prefer home childbirth over health facility childbirth [22].
When they perceive pregnancy and childbirth as a normal process not associated with risks, their intention to use a health facility for childbirth may be lower. Another reason could be low knowledge about the risks associated with pregnancy and childbirth among expecting couples in this community [24]. Men have less knowledge of the risks associated with pregnancy and childbirth compared to women [24]. Their low knowledge of these risks may lower their intention to use a health facility for childbirth. Choosing a health facility for childbirth ensures timely intervention in case of any complication which may occur during childbirth. Despite the fact that the majority of study participants had the intention to use a health facility for childbirth, their intention was weak. According to the Theory of Planned Behavior, the intention to engage in a behavior is predicted by three variables: attitudes, perceived subjective norms (social pressure), and perceived behavior control. A person's intentions will be stronger when all domains have a significant influence on the intention [25]. Research demonstrates that intentions matter: the stronger a person's intention to perform a healthy behavior, the more likely that person will actually perform that behavior [25]. The study found that, after adjusting for confounders, the odds of no intention to use a health facility for childbirth decreased as the attitudinal and perceived behavior control scores increased among both pregnant women and their male partners. Although the relationship was not statistically significant, the study showed decreasing odds as the attitudinal and perceived behavior control scores increased. This means that when both pregnant women and male partners have a more positive attitude towards maternal services utilization, their likelihood of intending to use a health facility for childbirth increases.
According to the Theory of Planned Behavior, attitude is influenced by the evaluation of the benefits of the behavior [14]. Also, there were decreased odds of no intention to use a health facility for childbirth as the perceived behavior control scores increased among both pregnant women and their male partners. This means that when couples believe they are capable of engaging in a behavior, the likelihood of not intending to engage in it decreases. On the contrary, the odds of no intention increased with an increase in perceived subjective norms scores. This means that the perception of social pressure to support the use of maternal services increased the intention not to use a health facility for childbirth. In other words, social pressure did not increase the intention to use a health facility for childbirth. This is contrary to what is postulated by the Theory of Planned Behavior, where perceived subjective norms are among the domains which increase the intention to engage in a behavior [14]. This study found that none of the domains of the Theory of Planned Behavior significantly influenced birth in health facility intention among either pregnant women or their male partners. This means that the intention to engage in the behavior is weak and may not be translated into action. This is reflected in the actual practice of low use of health facilities for childbirth in this community, where more than 69% of births occur outside health facilities, assisted by unskilled personnel [7]. The finding is in contrast with a similar study done in Ethiopia, where all predictors of intention significantly influenced the intention [26]. The difference in findings is due to differences in the place of receiving skilled maternal care: while this study was about birth in health facility intention, the study in Ethiopia was about the intention to use a maternity waiting home.
In addition to existing interventions in Tanzania, such as increasing the number of facilities and removing financial barriers to accessing maternal services, behavior-theory-integrated interventions to address deep-seated predictors of health-seeking behaviors are highly recommended. Despite the fact that the majority of study respondents had birth in health facility intention, the likelihood of this intention resulting in practice is weak, because none of the domains statistically influenced the intention to use a health facility for childbirth. Innovative interventional strategies geared towards improving the domains of the Theory of Planned Behavior among expecting couples are highly recommended in order to improve birth in health facility intention. The data set is available and can be shared on request. AOR: adjusted odds ratio; HIV: human immunodeficiency virus; SPSS: Statistical Package for the Social Sciences; UNICEF: United Nations Children's Fund. Moran AC, Sangli G, Dineen R, Rawlins B, Yaméogo M, Baya B. Birth preparedness for maternal health: findings from Koupéla District, Burkina Faso. J Heal Popul Nutr. 2006;24(4):489–97. United Nations. The millennium development goals report [Internet]. New York: United Nations; 2015. Available from: http://www.un.org/millenniumgoals/2015_MDG_Report/pdf/MDG2015rev(July1).pdf Ministry of Health, Community Development, Gender, Elderly and Children (MoHCDGEC) [Tanzania Mainland], Ministry of Health (MoH) [Zanzibar], National Bureau of Statistics (NBS), Office of the Chief Government Statistician (OCGS) and I. Tanzania Demographic and Health Survey and Malaria Indicator Survey 2015–2016 [Internet]. Tanzania Demographic and Health Survey and Malaria Indicator Survey (TDHS-MIS) 2015–16. Dar es Salaam, Tanzania, and Rockville, Maryland, USA; 2016. Available from: https://www.dhsprogram.com/pubs/pdf/FR321/FR321.pdf Naghavi M, Wang H, Lozano R, Davis A, Liang X, Zhou M, et al.
Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990-2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet. 2015;385(9963):117–71. https://doi.org/10.1016/S0140-6736(14)61682-2. United Nations. The millennium development goals report 2014 [Internet]. New York: United Nations; 2014. Available from: http://www.un.org/millenniumgoals/reports.shtml National Bureau of Statistics. Mortality and health [Internet]. Dar es Salaam: National Bureau of Statistics; 2015. Available from: http://www.mamaye.or.tz/sites/default/files/Mortality_and_Health_Monograph-TanzaniaCensusdata(1).pdf National Bureau of Statistics. Tanzania demographic and health survey [Internet]. Dar es Salaam: National Bureau of Statistics; 2010. Available from: http://www.measuredhs.com WHO. Trends in maternal mortality: 1990–2013. Estimates by WHO, UNICEF, UNFPA, The World Bank and the United Nations Population Division [Internet]. Geneva: World Health Organisation; 2014. Available from: http://apps.who.int/iris/bitstream/10665/112682/2/9789241507226_eng.pdf?ua=1 Khanal V, Adhikari M, Karkee R, Gavidia T. Factors associated with the utilisation of postnatal care services among the mothers of Nepal: analysis of Nepal Demographic and Health Survey 2011. BMC Womens Health. 2014;14(1):19. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3911793&tool=pmcentrez&rendertype=abstract. Kinney MV, Kerber KJ, Black RE, Cohen B, Nkrumah F, Coovadia H, et al. Sub-Saharan Africa's mothers, newborns, and children: where and why do they die? PLoS Med. 2010;7(6):1–9. UNICEF. Progress for children. A report card on maternal mortality. A Rep Card Matern Mortal. 2008;(7):48. https://www.unicef.org/progressforchildren/files/Progress_for_Children-No._7_Lo-Res_082008.pdf. Koblinsky M, Chowdhury ME, Moran A, Ronsmans C. Maternal morbidity and disability and their consequences: neglected agenda in maternal health.
J Health Popul Nutr. 2012;30(2):124–30. UNICEF. The state of the world's children 2009. 2009. 168 p. https://www.unicef.org/protection/SOWC09-FullReport-EN.pdf. Netemeyer R, Van Ryn M, Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211. Available from: http://linkinghub.elsevier.com/retrieve/pii/074959789190020T. Ogunjimi LO, Ibe TR, Ikorok MM. Curbing maternal and child mortality: the Nigerian experience. Int J Nurs Midwifery. 2012;4(3):33–9. August F, Pembe AB, Mpembeni R, Axemo P, Darj E. Men's knowledge of obstetric danger signs, birth preparedness and complication readiness in rural Tanzania. PLoS One. 2015;10(5):1–12. Helleve A. Men's involvement in care and support during pregnancy and childbirth; 2010. Iliyasu Z, Abubakar IS, Galadanci HS, Aliyu MH. Birth preparedness, complication readiness and fathers' participation in maternity care in a northern Nigerian community. Afr J Reprod Health. 2010;14(1):21–32. National Bureau of Statistics. Fertility and nuptiality report 2015 [Internet]. Vol. IV. 2015. Available from: http://www.nbs.go.tz/nbs/takwimu/census2012/FertilityandNuptialityMonograph.pdf West CIT, Briggs NCT. Effectiveness of trained community volunteers in improving knowledge and management of childhood malaria in a rural area of Rivers State, Nigeria. Niger J Clin Prac. 2015;18(5):651–8. Creanga AA, Odhiambo GA, Odera B, Odhiambo O, Desai M, Goodwin M, et al. Pregnant women's intentions and subsequent behaviors regarding maternal and neonatal service utilization: results from a cohort study in Nyanza Province, Kenya. PLoS One. 2016;11:1–17. Moshi F, Nyamhanga T. Understanding the preference for homebirth; an exploration of key barriers to facility delivery in rural Tanzania. Reprod Health. 2017;14(1):132. Available from: http://reproductive-health-journal.biomedcentral.com/articles/10.1186/s12978-017-0397-z. The United Republic of Tanzania Ministry of Health and Social Welfare. National Health Policy.
Minist Heal. 2003;1–37. Moshi FV, Ernest A, Fabian F, Kibusi SM. Knowledge on birth preparedness and complication readiness among expecting couples in rural Tanzania: differences by sex, a cross-sectional study. PLoS One. 2018;13:1–15. Sommestad T, Karlzén H, Hallberg J. The sufficiency of the theory of planned behavior for explaining information security policy compliance. Inf Comput Secur. 2015;23(2):200–17. Endalew GB, Gebretsadik LA, Gizaw AT. Intention to use maternity waiting home among pregnant women in Jimma District, Southwest Ethiopia. Global Journal of Medical Research; 2017. The authors thank the University of Dodoma for providing ethical clearance for this study. We also gratefully acknowledge the University of Dodoma sponsorship for their financial support. We also thank the administration of Rukwa Region for allowing us to conduct the study and the study participants for volunteering to participate in this study. The study was not funded. Department of Nursing and Midwifery, the University of Dodoma, P.O. Box 259, Dodoma, Tanzania: Fabiola V. Moshi. Department of Public Health, the University of Dodoma, P.O. Box 259, Dodoma, Tanzania: Stephen M. Kibusi. Department of Biomedical Sciences, the University of Dodoma, P.O. Box 259, Dodoma, Tanzania: Flora Fabian. FM led the conception, design, acquisition of data, analysis, interpretation of data, and drafting of the manuscript. SK and FF guided the conception, design, acquisition of data, analysis, and interpretation, critically revised the manuscript for intellectual content, and have given final approval of the version to be published. All authors read and approved the final manuscript. Correspondence to Fabiola V. Moshi. The proposal was approved by the Ethical Review Committee of the University of Dodoma. A letter of permission was obtained from the Rukwa Regional Administration.
Both written and verbal consent were sought from study participants after explaining the study objectives and procedures. Their right to refuse to participate in the study at any time was assured. The manuscript does not contain individual-specific data. Moshi, F.V., Kibusi, S.M. & Fabian, F. Using the theory of planned behavior to explain birth in health facility intention among expecting couples in a rural setting Rukwa Tanzania: a cross-sectional survey. Reprod Health 17, 2 (2020). doi:10.1186/s12978-020-0851-1 Accepted: 06 January 2020
May 2014, 13(3): 977-990. doi: 10.3934/cpaa.2014.13.977 Liouville type theorems for Schrödinger system with Navier boundary conditions in a half space Ran Zhuo 1, Fengquan Li 2, and Boqiang Lv 3. 1. Department of Mathematical Sciences, Yeshiva University, New York, NY 10033, United States 2. School of Mathematical Sciences, Dalian University of Technology, Dalian, 116024 3. College of Mathematics and Information Science, Nanchang Hangkong University, Nanchang, Jiangxi 330063, China Received November 2012 Revised August 2013 Published December 2013 In this paper, we study the positive solutions for the following integral system: \begin{eqnarray} u(x)=\int_{R^n_+}(\frac{1}{|x-y|^{n-\alpha}}-\frac{1}{|x^*-y|^{n-\alpha}})u^{\beta_1}(y)v^{\gamma_1}(y)dy ,\\ v(x)=\int_{R^n_+}(\frac{1}{|x-y|^{n-\alpha}}-\frac{1}{|x^*-y|^{n-\alpha}})u^{\beta_2}(y)v^{\gamma_2}(y)dy, \end{eqnarray} where $0 < \alpha < n$ and $x^*=(x_1,\cdots,x_{n-1},-x_n)$ is the reflection of the point $x$ about the plane $R^{n-1}$, and $\beta_1, \gamma_1, \beta_2, \gamma_2$ satisfy the condition $(f_1)$: \begin{eqnarray} 1 \leq \beta_1,\gamma_1,\beta_2,\gamma_2 \leq \frac{n+\alpha}{n-\alpha}\ \mbox{with}\ \beta_1+\gamma_1= \frac{n+\alpha}{n-\alpha}=\beta_2+\gamma_2, \beta_1\neq \beta_2, \gamma_1 \neq \gamma_2. \end{eqnarray} This integral system is closely related to the PDE system with Navier boundary conditions, when $\alpha$ is an even number between $0$ and $n$, \begin{eqnarray} (- \Delta)^{\frac{\alpha}{2}}u(x)=u^{\beta_1}(x)v^{\gamma_1}(x), \mbox{in}\ R^n_+,\\ (- \Delta)^{\frac{\alpha}{2}}v(x)=u^{\beta_2}(x)v^{\gamma_2}(x), \mbox{in}\ R^n_+,\\ u(x)=-\Delta u(x)=\cdots =(-\Delta)^{\frac{\alpha}{2}-1} u(x)=0,\mbox{on}\ \partial{R^n_+},\\ v(x)=-\Delta v(x)=\cdots =(-\Delta)^{\frac{\alpha}{2}-1} v(x)=0,\mbox{on}\ \partial{R^n_+}.
\end{eqnarray} More precisely, any solution of (1) multiplied by a constant satisfies (2). We use the method of moving planes in integral forms introduced by Chen-Li-Ou to derive rotational symmetry, monotonicity, and non-existence of the positive solutions of (1) on the half space $R^n_+$. Keywords: system of integral equations, non-existence, monotonicity, method of moving planes in integral forms, Kelvin transforms, Navier boundary conditions, symmetry. Mathematics Subject Classification: Primary: 31A10, 35B45; Secondary: 35B53, 35J9. Citation: Ran Zhuo, Fengquan Li, Boqiang Lv. Liouville type theorems for Schrödinger system with Navier boundary conditions in a half space. Communications on Pure & Applied Analysis, 2014, 13 (3): 977-990. doi: 10.3934/cpaa.2014.13.977 H. Berestycki and L. Nirenberg, On the method of moving planes and the sliding method, Bol. Soc. Brazil. Mat. (N.S.), 22 (1991), 1. L. Cao and Z. Dai, A Liouville-type theorem for an integral equations system on a half-space $R^n_+$, Journal of Mathematical Analysis and Applications, 389 (2012), 1365. doi: 10.1016/j.jmaa.2012.01.015. L. Caffarelli, B. Gidas and J. Spruck, Asymptotic symmetry and local behavior of semilinear elliptic equations with critical Sobolev growth, Comm. Pure Appl. Math., 42 (1989), 271. doi: 10.1002/cpa.3160420304. C. Jin and C. Li, Symmetry of solutions to some integral equations, Proc. Amer. Math. Soc., 134 (2006), 1661. doi: 10.1090/S0002-9939-05-08411-X. W. Chen and C. Li, A priori estimates for prescribing scalar curvature equations, Annals of Math., 145 (1997), 547. doi: 10.2307/2951844. W. Chen and C. Li, The best constant in some weighted Hardy-Littlewood-Sobolev inequality, Proc. Amer. Math. Soc., 136 (2008), 955. doi: 10.1090/S0002-9939-07-09232-5. W. Chen and C.
Li, Methods on Nonlinear Elliptic Equations, AIMS Book Series on Diff. Equa. Dyn. Sys., 4 (2010). W. Chen and C. Li, A sup + inf inequality near $R=0$, Advances in Math., 220 (2009), 219. doi: 10.1016/j.aim.2008.09.005. W. Chen and C. Li, An integral system and the Lane-Emden conjecture, Disc. Cont. Dyn. Sys., 4 (2009), 1167. doi: 10.3934/dcds.2009.24.1167. W. Chen, C. Li and Y. Fang, Super-polyharmonic property for a system with Navier conditions on $R^n_+$, submitted to Comm. PDEs, (2012). W. Chen, C. Li and B. Ou, Classification of solutions for an integral equation, Comm. Pure Appl. Math., LVIII (2005), 1. doi: 10.1002/cpa.20116. W. Chen, C. Li and B. Ou, Qualitative properties of solutions for an integral equation, Disc. Cont. Dyn. Sys., 12 (2005), 347. W. Chen, C. Li and B. Ou, Classification of solutions for a system of integral equations, Comm. PDEs., 30 (2005), 59. doi: 10.1081/PDE-200044445. A. Chang and P. Yang, On uniqueness of an n-th order differential equation in conformal geometry, Math. Res. Letters, 4 (1997), 1. L. Fraenkel, An Introduction to Maximum Principles and Symmetry in Elliptic Problems, Cambridge University Press, (2000). doi: 10.1017/CBO9780511569203. Y. Fang and W. Chen, A Liouville type theorem for poly-harmonic Dirichlet problem in a half space, Advances in Math., 229 (2012), 2835. doi: 10.1016/j.aim.2012.01.018. B. Gidas, W. Ni and L. Nirenberg, Symmetry of positive solutions of nonlinear elliptic equations in $R^n$, Mathematical Analysis and Applications, (1981). E. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities, Ann. of Math., 118 (1983), 349. doi: 10.2307/2007032. C.
Li, Local asymptotic symmetry of singular solutions to nonlinear elliptic equations,, \emph{Invent. Math.}, 123 (1996), 221. doi: 10.1007/s002220050023. Google Scholar C. Li and L. Ma, Uniqueness of positive bound states to Shrodinger systems with critical exponents,, \emph{SIAM J. Math. Analysis}, 40 (2008), 1049. doi: 10.1137/080712301. Google Scholar C. Liu and S. Qiao, Symmetry and monotonicity for a system of integal equations,, \emph{Comm. Pure Appl. Anal.}, 6 (2009), 1925. doi: 10.3934/cpaa.2009.8.1925. Google Scholar D. Li and R. Zhuo, An integral equation on half space,, \emph{Proc. Amer. Math. Soc.}, 138 (2010), 2779. doi: 10.1090/S0002-9939-10-10368-2. Google Scholar L. Ma and D. Chen, A Liouville type theorem for an integral system,, \emph{Comm. Pure Appl. Anal.}, 5 (2006), 855. doi: 10.3934/cpaa.2006.5.855. Google Scholar B. Ou, A remark on a singular integral equation,, \emph{Houston J. of Math.}, 25 (1999), 181. Google Scholar J. Serrin, A symmetry problem in potential theory,, \emph{Arch. Rational Mech. Anal.}, 43 (1971), 304. Google Scholar R. Zhuo and D. Li, A system of integral equations on half space,, \emph{Journal of Mathematical Analysis and Applications}, 381 (2011), 392. doi: 10.1016/j.jmaa.2011.02.060. Google Scholar Changlu Liu, Shuangli Qiao. Symmetry and monotonicity for a system of integral equations. Communications on Pure & Applied Analysis, 2009, 8 (6) : 1925-1932. doi: 10.3934/cpaa.2009.8.1925 Yingshu Lü. Symmetry and non-existence of solutions to an integral system. Communications on Pure & Applied Analysis, 2018, 17 (3) : 807-821. doi: 10.3934/cpaa.2018041 Yingshu Lü, Chunqin Zhou. Symmetry for an integral system with general nonlinearity. Discrete & Continuous Dynamical Systems - A, 2019, 39 (3) : 1533-1543. doi: 10.3934/dcds.2018121 Baiyu Liu. Direct method of moving planes for logarithmic Laplacian system in bounded domains. Discrete & Continuous Dynamical Systems - A, 2018, 38 (10) : 5339-5349. 
doi: 10.3934/dcds.2018235 Pengyan Wang, Pengcheng Niu. A direct method of moving planes for a fully nonlinear nonlocal system. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1707-1718. doi: 10.3934/cpaa.2017082 Abdelkader Boucherif. Positive Solutions of second order differential equations with integral boundary conditions. Conference Publications, 2007, 2007 (Special) : 155-159. doi: 10.3934/proc.2007.2007.155 Meixia Dou. A direct method of moving planes for fractional Laplacian equations in the unit ball. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1797-1807. doi: 10.3934/cpaa.2016015 Patricia J.Y. Wong. Existence of solutions to singular integral equations. Conference Publications, 2009, 2009 (Special) : 818-827. doi: 10.3934/proc.2009.2009.818 Dorina Mitrea and Marius Mitrea. Boundary integral methods for harmonic differential forms in Lipschitz domains. Electronic Research Announcements, 1996, 2: 92-97. Wenxiong Chen, Congming Li. Regularity of solutions for a system of integral equations. Communications on Pure & Applied Analysis, 2005, 4 (1) : 1-8. doi: 10.3934/cpaa.2005.4.1 Johnny Henderson, Rodica Luca. Existence of positive solutions for a system of nonlinear second-order integral boundary value problems. Conference Publications, 2015, 2015 (special) : 596-604. doi: 10.3934/proc.2015.0596 Gennaro Infante. Eigenvalues and positive solutions of odes involving integral boundary conditions. Conference Publications, 2005, 2005 (Special) : 436-442. doi: 10.3934/proc.2005.2005.436 Carlos Fresneda-Portillo, Sergey E. Mikhailov. Analysis of Boundary-Domain Integral Equations to the mixed BVP for a compressible stokes system with variable viscosity. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3059-3088. doi: 10.3934/cpaa.2019137 Olusola Kolebaje, Ebenezer Bonyah, Lateef Mustapha. The first integral method for two fractional non-linear biological models. Discrete & Continuous Dynamical Systems - S, 2019, 12 (3) : 487-502. 
doi: 10.3934/dcdss.2019032 Thomas Y. Hou, Pingwen Zhang. Convergence of a boundary integral method for 3-D water waves. Discrete & Continuous Dynamical Systems - B, 2002, 2 (1) : 1-34. doi: 10.3934/dcdsb.2002.2.1 Wu Chen, Zhongxue Lu. Existence and nonexistence of positive solutions to an integral system involving Wolff potential. Communications on Pure & Applied Analysis, 2016, 15 (2) : 385-398. doi: 10.3934/cpaa.2016.15.385 Z. K. Eshkuvatov, M. Kammuji, Bachok M. Taib, N. M. A. Nik Long. Effective approximation method for solving linear Fredholm-Volterra integral equations. Numerical Algebra, Control & Optimization, 2017, 7 (1) : 77-88. doi: 10.3934/naco.2017004 Stanisław Migórski, Shengda Zeng. The Rothe method for multi-term time fractional integral diffusion equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 719-735. doi: 10.3934/dcdsb.2018204 Franck Boyer, Pierre Fabrie. Outflow boundary conditions for the incompressible non-homogeneous Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2007, 7 (2) : 219-250. doi: 10.3934/dcdsb.2007.7.219 Natalia Skripnik. Averaging of fuzzy integral equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1999-2010. doi: 10.3934/dcdsb.2017118 Ran Zhuo Fengquan Li Boqiang Lv
CommonCrawl
Lightweight image classifier using dilated and depthwise separable convolutions Wei Sun1,2 (ORCID: orcid.org/0000-0002-4870-2971), Xiaorui Zhang2,3 & Xiaozheng He4 Journal of Cloud Computing volume 9, Article number: 55 (2020) Image classification based on cloud computing suffers from increasingly difficult deployment as network depth and data volume grow. Because a deep model's convolution operations at every layer generate a great amount of computation, such models place extreme demands on GPU and storage performance, and the GPUs and storage available on embedded and mobile terminals cannot support large models. The model must therefore be compressed before it can be deployed on these devices. Meanwhile, traditional compression-based methods often discard many global features during compression, resulting in low classification accuracy. To solve this problem, this paper proposes a twenty-nine-layer lightweight neural network model based on dilated convolution and depthwise separable convolution for image classification. The proposed model employs dilated convolution to expand the receptive field during convolution while keeping the number of convolution parameters unchanged, which extracts more high-level global semantic features and improves classification accuracy. In addition, depthwise separable convolution reduces the network parameters and the computational complexity of the convolution operations, which shrinks the network. The proposed model introduces three hyperparameters: the width multiplier, the image resolution, and the dilated rate, to compress the network while preserving accuracy. Experimental results show that, compared with GoogleNet, the proposed network improves classification accuracy by nearly 1% and reduces the number of parameters by 3.7 million.
In recent years, deep networks have made significant progress in many fields, such as image processing, object detection, and semantic segmentation. Krizhevsky et al. [1] first adopted a deep learning algorithm, AlexNet, to win the ImageNet Large Scale Visual Recognition Challenge in 2012, improving recognition accuracy by 10% over traditional machine learning algorithms. Since then, various convolutional neural network models have been proposed in the computer vision community, including VGGNet, proposed by the Visual Geometry Group at the University of Oxford [2] in 2014, GoogLeNet [3, 4] by Google researchers, and ResNet by He et al. [5, 6] in 2015. These networks all outperform AlexNet [7, 8], and the trend is to use deeper and more complex networks for higher accuracy. As precision on computer vision tasks rises, model depth and parameter counts also grow exponentially, making these models increasingly dependent on computationally powerful GPUs [4, 9]. As a consequence, existing deep neural network models cannot be deployed on resource-constrained devices [10, 11], such as smartphones and in-vehicle devices, because of their limited computing power. Emerging cloud computing has the potential to address this challenge [12]. Cloud computing technology, which combines the characteristics of distributed, parallel, and grid computing, provides users with scalable computing resources and storage space through massive computing clusters built from ordinary servers and storage clusters built from large numbers of low-cost devices. At present, many enterprises operate enterprise-level cloud computing platforms, including Amazon, Alibaba, and Baidu.
Compared with traditional application platforms, a cloud computing platform has many attractive characteristics, such as strong computing capacity, virtually unlimited storage, and convenient, fast virtual services. However, renting cloud computing servers incurs extra cost for individuals and small companies. For example, the model training in this article can be run on an NVIDIA P4 cloud server with 8 GB of memory; this most basic server costs $335 per month. Although the cost is not prohibitive for a company, it is a large expenditure for students without a salary. Therefore, there is a need to design a lightweight network that reduces the model's dependence on high-performance devices [13, 14]. To reduce networks' dependence on high-performance servers and lower the cost of cloud computing, various new lightweight networks have been proposed for object detection. By compressing the model, the size of the neural network is reduced [15, 16]. Typical strategies include avoiding fully connected layers, reducing the number of channels and the size of convolution kernels, and optimizing down-sampling, weight pruning, weight discretization, model representation, and coding [17, 18]. For example, GoogleNet [3, 19] increased the width of the network to reduce network complexity; the subsequent Xception network extended depthwise separable filters to overcome shortcomings of the InceptionV3 network [5, 20]. MobileNet [21] proposed depthwise separable convolution, which shows great potential for factorizing networks. However, the classification accuracy of these models cannot be guaranteed during compression, because the simplified convolution operations omit too many image features [22, 23]. To address these issues, this paper proposes a lightweight neural network combining dilated convolution and depthwise separable convolution.
Inspired by MobileNet, this paper adopts a depthwise separable convolution architecture together with two hyperparameters, the width multiplier and the resolution multiplier, to obtain a small network model that can be applied to resource-constrained devices such as smartphones [24, 25]. Depthwise separable convolution splits the convolution into two steps to reduce network computation. Because depthwise separable convolution alone cannot guarantee the classification accuracy of the model [26], the proposed model integrates dilated convolution into the depthwise separable convolution architecture. Dilated convolution increases the receptive field of the network during convolution without increasing the number of convolution parameters, which extracts more global and higher-level semantic features and thus improves the classification accuracy of the network [27, 28]. Finally, the proposed model is further compressed by reducing the number of input channels and the resolution of the input image through the hyperparameter strategy. Compared with other networks, the network proposed in this paper ensures higher classification accuracy while using fewer resources. In addition, the joint dilated convolution and depthwise separable convolution method proposed in this paper effectively resolves the trade-off between model size and classification accuracy. In the current state of the art, deep neural network compression follows two approaches: i) compressing trained models by optimizing the network parameters, and ii) designing and training small network models directly [29]. For the first approach, Han introduced compression methods such as cropping, weight sharing, quantization, and coding for deep network models in 2015. In general, a complex network performs well, but its parameters may also be redundant [20].
Therefore, for a network that has already been trained, unimportant hierarchical connections or filters can be pruned to reduce model parameters and redundancy. During training, a weight update strategy is introduced to make the network sparser, but the resulting sparse matrix operations are not efficient on common hardware platforms and are sensitive to the hardware devices used [30]. The second approach has become popular with the introduction of lightweight models such as SqueezeNet [31], ShuffleNet [32], and MobileNet [21]. SqueezeNet, proposed by Iandola et al., applies 1×1 convolution kernels to reduce the dimensionality of the preceding features and then stacks features through convolution, which greatly reduces the number of convolution-layer parameters; this bottleneck design yields a small network that greatly reduces parameters and computational complexity while maintaining accuracy [19]. Zhang et al. [32] proposed ShuffleNet, which groups multi-channel features and then performs convolution, using channel shuffling to avoid blocked information flow between groups; ShuffleNet reduces the amount of network computation through channel shuffling and pointwise group convolution. Howard et al. [21] proposed the depthwise separable convolution model MobileNet, in which the features of each channel are convolved separately and 1×1 convolutions then splice the features of the different channels. These lightweight models reduce the number of network parameters and the computational cost. However, their classification accuracy cannot be guaranteed during compression, because only local information of the image is utilized [33–35]. To achieve a lightweight model while ensuring classification accuracy, this paper combines the two approaches. First, a small network model is designed and trained directly by combining depthwise separable convolution and dilated convolution.
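The two-step factorization used by MobileNet and by the model in this paper can be illustrated with a minimal pure-Python reference. This is a toy sketch with 'valid' padding and the standard spacing convention for dilation, not the authors' implementation; all tensor shapes and values are illustrative.

```python
def depthwise_separable_conv(x, dw, pw, dilation=1):
    """Toy reference: dilated depthwise conv followed by 1x1 pointwise conv.

    x  : input feature map as nested lists, shape [H][W][C]
    dw : one k x k filter per input channel, shape [C][k][k]
    pw : pointwise weights, shape [N][C] (N output channels)
    Uses 'valid' padding; taps are spaced `dilation` pixels apart.
    """
    H, W, C = len(x), len(x[0]), len(x[0][0])
    k = len(dw[0])
    span = (k - 1) * dilation            # pixels spanned by the dilated kernel, minus one
    Ho, Wo = H - span, W - span
    # Depthwise step: each channel is filtered independently by its own kernel.
    mid = [[[sum(dw[c][u][v] * x[i + u * dilation][j + v * dilation][c]
                 for u in range(k) for v in range(k))
             for c in range(C)]
            for j in range(Wo)]
           for i in range(Ho)]
    # Pointwise step: a 1x1 convolution mixes the C channels into N outputs.
    return [[[sum(pw[n][c] * mid[i][j][c] for c in range(C))
              for n in range(len(pw))]
             for j in range(Wo)]
            for i in range(Ho)]

# All-ones 5x5 single-channel input, all-ones 3x3 depthwise filter,
# identity pointwise filter; with dilation 2 only one output position remains.
x = [[[1.0] for _ in range(5)] for _ in range(5)]
y = depthwise_separable_conv(x, [[[1.0] * 3 for _ in range(3)]], [[1.0]],
                             dilation=2)
```

Because the per-channel filtering and the cross-channel mixing are separate loops, the two cost terms of the factorization are visible directly in the code.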
The depthwise separable convolution reduces the parameter count and computational burden, and the dilated convolution improves the accuracy of the model. Second, inspired by MobileNet, the proposed model applies hyperparameters to further compress the trained model, adapting it to resource-constrained devices. This paper uses dilated convolution as the filter for extracting image features. Compared with traditional filters, dilated convolution captures more whole-image information without increasing the number of network parameters, where the dilated rate δ controls the size of each convolution dilation. We then apply depthwise separable convolution instead of traditional convolution to reduce the computational complexity. To compress the model further, we introduce the two hyperparameters proposed in MobileNet, the width multiplier α and the resolution multiplier ρ, to evenly reduce the computational burden of each layer of the network [30, 36]. By combining dilated convolution with depthwise separable convolution and adjusting these hyperparameters, the model remains lightweight while its classification accuracy is preserved. This section first presents the joint module of dilated convolution and depthwise separable convolution, which is then used to build the deep convolutional network. Joint module: As shown in Fig. 1, the proposed model dilates each filter to obtain more image information without increasing the computational burden or the number of channels. The dilated filter is then used to convolve each input channel, and a final 1×1 filter combines the outputs of the different convolution channels. Figure 2 illustrates the dilation of the 3×3 filter used in the dilated convolution of Fig. 1. In Fig. 2, positions without a dot mark carry zero weight, and positions with a dot mark carry non-zero weight.
Figures 2a, b, and c show filters with different dilated rates. The parameters of the convolution layer remain the same, so the amount of computation in the convolution also remains the same. The receptive fields of filters (a), (b), and (c) are 3×3=9, 7×7=49, and 11×11=121, respectively. Filter (c) has the largest receptive field, meaning that each node on the feature map corresponds to more feature information. A larger receptive field means that each node encodes higher-level semantic features, which can improve the classification accuracy of the network. To account for the influence of different dilations on model accuracy, we use the hyperparameter δ to control the size of each dilated convolution. As illustrated by Fig. 2, the relationship between the receptive field and the original filter size is $$C={(S\times\delta+(\delta-1))^{2}}, $$ where C denotes the size of the receptive field, S the size of the initial filter, and δ the dilated rate (Fig. 1: Dilated convolution process). The separable convolution operation is then carried out with the dilated filter, whose size is Lk×Lk with \(L_{k}= \sqrt {C}\). Figure 3 shows the process of mapping a Li×Li×H feature map to a Li×Li×N feature map, illustrating how the number of parameters in the model is reduced. Figures 3a, b, and c show the traditional convolution filter, the depthwise convolution filter, and the pointwise convolution filter, respectively; Figs. 3b and c together represent a separable convolution, where Li×Li is the width and height of the input feature map, N is the number of filters, Lk×Lk is the width and height of the dilated filter, and H is the number of channels. For example, a single dilated filter of size Lk×Lk first performs the convolution operation on each channel.
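The receptive-field relation C = (S×δ + (δ−1))² above can be checked numerically with a short helper. This is a sketch following the paper's own counting convention, under which filters (a), (b), and (c) of Fig. 2 correspond to dilated rates 1, 2, and 3:

```python
def receptive_field(s, delta):
    """C = (s*delta + (delta - 1))**2 for an s x s filter dilated at
    rate delta (the paper's counting convention)."""
    return (s * delta + (delta - 1)) ** 2

def dilated_size(s, delta):
    """Side length L_k = sqrt(C) of the dilated filter."""
    return s * delta + (delta - 1)

# The three 3x3 filters of Fig. 2:
fields = [receptive_field(3, d) for d in (1, 2, 3)]
```

Here `fields` evaluates to [9, 49, 121], matching the values quoted for filters (a), (b), and (c), and `dilated_size(3, 3)` gives the 11×11 filter of case (c).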
If the number of feature-map channels is H, then H filters of the same size participate in the convolution, each with a single channel. The image is then convolved by N filters of size 1×1 with H channels. Figure 3 shows that the traditional convolution layer takes a Li×Li×H feature map as input and produces a Li×Li×N feature map, where Li×Li is the width and height of the input feature map, H is the number of input channels, N is the number of output channels, and Lk×Lk is the width and height of the dilated filter. Gt denotes the computational cost of the traditional convolution process: $$ G_{t}=L_{k} \times L_{k} \times H \times N \times L_{i} \times L_{i}. $$ Gd is the computational cost of the depthwise separable convolution process: $$ G_{d}=L_{k} \times L_{k} \times H \times L_{i} \times L_{i} + H \times N \times L_{i} \times L_{i}. $$ Therefore, the ratio of separable convolution to traditional convolution is $$ \frac{G_{d}}{G_{t}} =\frac{1}{ N} + \frac{1}{L_{k}^{2}}. $$ Equation (4) shows that the computation is reduced to a fraction \(\frac {1}{N}+\frac {1}{L_{k}^{2}}\) of the conventional convolution, which lowers the computational complexity. To avoid the vanishing-gradient problem and speed up network training, we add a BN (Batch Normalization) layer and a ReLU layer after the joint module introduced above to keep the gradients large [37, 38]. We call the resulting unit the basic network structure, shown in Fig. 4 (Basic structure). Using only one basic structure is not enough to form a usable neural network, because a network that is too shallow cannot capture deep information about the image. Therefore, a lightweight neural network must be constructed from the unit in Fig. 4. As shown in Fig.
5, several basic network structures are combined with an average pooling layer, a fully connected layer, and a Softmax layer to form the overall network (Structural flow chart). Table 1 (Overall architecture) details the composition of this lightweight neural network; Class in the table denotes the category count of the dataset. In total, the model includes one average pooling layer, one fully connected layer, nine dilated convolution layers, nine depthwise separable convolution layers, and nine BN layers. The model dilates the 3×3 convolution kernel before each depthwise separable convolution, obtaining a kernel with a larger receptive field through the dilated rate. The resulting 3×3 dilated convolution is applied to each channel of the feature map, and a 1×1 convolution then combines the outputs of the channel convolutions. Adding a BN layer and a ReLU activation after each 1×1 convolution accelerates training and improves the generalization capability of the network [39, 40]. Hyperparameters: This study adjusts the dilated rate δ to change the size of the dilated convolution; the experimental results are presented in the next section. Different devices require smaller and faster models, so this paper uses two hyperparameters, the width multiplier α and the resolution multiplier ρ, to obtain a smaller model. The two hyperparameters reduce the computational complexity of the entire network by reducing the complexity of the depthwise separable convolution. The width multiplier thins the network uniformly at each layer: the number of input channels changes from H to αH, and the number of output channels from N to αN.
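The layer-cost formulas for Gt and Gd above, the ratio Gd/Gt, and the effect of the width multiplier just described can be checked with a small script. This is a sketch; the feature-map and filter sizes are arbitrary example values, not taken from the paper's architecture.

```python
def cost_traditional(lk, h, n, li):
    """Cost of a traditional convolution layer: Lk*Lk*H*N*Li*Li."""
    return lk * lk * h * n * li * li

def cost_separable(lk, h, n, li, alpha=1.0):
    """Cost of the depthwise separable layer, thinned by the width
    multiplier alpha (H -> alpha*H, N -> alpha*N); alpha = 1 recovers
    the unthinned cost Gd."""
    h, n = alpha * h, alpha * n
    return lk * lk * h * li * li + h * n * li * li

# Example layer: 3x3 dilated filter, 32 -> 64 channels, 112x112 feature map.
lk, h, n, li = 3, 32, 64, 112
ratio = cost_separable(lk, h, n, li) / cost_traditional(lk, h, n, li)
# The analysis above predicts the ratio 1/N + 1/Lk^2.
```

For these example sizes the ratio is 1/64 + 1/9, i.e. the separable layer costs roughly 13% of the traditional one, and any alpha < 1 shrinks it further.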
As a result, the cost of the depthwise separable convolution becomes $$ G_{\alpha}=L_{k} \times L_{k} \times \alpha H \times L_{i} \times L_{i} + \alpha H \times \alpha N \times L_{i} \times L_{i}, $$ where Gα indicates the amount of computation and α∈(0,1], with typical values of 1, 0.75, or 0.5 [23]; it acts as a compression factor, and α<1 yields a narrowed network. The second hyperparameter, ρ, is the resolution multiplier. Applying it to the input image subsequently reduces the internal representation of every layer; the feature map of each convolution layer shrinks to ρ² of its original size. The cost of the depthwise separable convolution then becomes $$ G_{\rho}=L_{k} \times L_{k} \times H \times \rho L_{i} \times \rho L_{i} + H \times N \times \rho L_{i} \times \rho L_{i}, $$ where ρ∈(0,1] is set implicitly so that the input resolution of the network is 224, 192, or 160 [23]; it scales the size of the input images, and a network with ρ<1 is called a reduced-computation network. We use ρ to further compress the trained model. With both hyperparameters, the cost is $$ G_{\alpha\rho}=L_{k} \times L_{k} \times \alpha H \times \rho L_{i} \times \rho L_{i} + \alpha H \times \alpha N \times \rho L_{i} \times \rho L_{i}. $$ Adopting these two hyperparameters reduces the computational complexity of the model so that it can run on various resource-constrained devices. Meanwhile, to ensure classification accuracy, the hyperparameters α, ρ, and δ must be traded off to obtain the best model, as explored in the experiments section. Loss function and optimization: We adopt cross-entropy as the loss function of the neural network and Adam as the optimizer [41].
The formula for cross-entropy is $$ W(p,q)=\sum_{i} p(i)\ast \log\left(\frac{1}{q(i)}\right), $$ where W(p,q) is the cross-entropy, p the true label distribution, and q the label distribution predicted by the trained model; the cross-entropy loss measures the similarity between p and q. Adam is considered robust in the choice of hyperparameters [11], so this paper adopts the adaptive Adam learning rate to optimize the proposed model. In Adam, momentum is incorporated directly as an estimate of the first-order moment (with exponential weighting) of the gradient. Adam also includes bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments to account for their initialization at the origin [41]. The optimization steps are presented in Table 2 (Optimization algorithm). To verify the effectiveness of the proposed method, we constructed an experimental platform and selected typical datasets. The proposed network model was compared with other models to verify its effectiveness; we also investigated the influence of the dilated convolution size on classification accuracy and verified the compression effect and accuracy obtained through the hyperparameters. All experiments were carried out on a computer with an Intel Core i7-7700K CPU at 4.20 GHz ×8 and a GTX 1080Ti graphics card, with CUDA 9.0 and cuDNN 7.3.1 installed. The proposed model and algorithm were implemented and run on TensorFlow 1.12.2. Among the many datasets available, we selected CIFAR-10 to verify the proposed model because it covers recognition of ubiquitous objects, suits multi-class classification, and its size is suitable for training most classifiers.
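The cross-entropy loss defined above can be written directly in a few lines. This is a minimal sketch; for a one-hot true distribution p, the loss reduces to −log q at the true class.

```python
import math

def cross_entropy(p, q):
    """W(p, q) = sum_i p(i) * log(1 / q(i)).

    p is the true label distribution and q the predicted one (both sum
    to 1, with q(i) > 0); smaller values mean q is closer to p."""
    return sum(pi * math.log(1.0 / qi) for pi, qi in zip(p, q))

# One-hot truth: the loss is just -log of the probability assigned
# to the correct class.
loss = cross_entropy([1.0, 0.0, 0.0], [0.5, 0.25, 0.25])
```

Here `loss` equals −log 0.5 ≈ 0.693; as the predicted probability of the true class approaches 1, the loss approaches 0.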
In addition, the experiments require images of different resolutions. The CIFAR-10 dataset contains 60,000 color images of 32×32 pixels, divided into 10 categories of 6,000 images each. We selected 50,000 images as the training set, organized into five training batches of 10,000 images each; the remaining 10,000 images form a separate test batch. In the test batch, 1,000 images are randomly selected from each of the 10 categories, and the rest are randomly arranged to form the training batches again, so the number of images per category in each training batch is not necessarily equal. Meanwhile, the Tiny ImageNet dataset is used to verify the generalization capability of the proposed network. This dataset spans 200 image classes with 500 training examples per class, plus 50 validation and 50 test examples per class; the images are down-sampled to 64×64 pixels. Training results and optimal selection: With the complete network structure set up and the datasets selected, we train the model, selecting the best training result by observing the change of the loss function and then testing the classification accuracy. The change of the loss function on the two datasets is shown in Fig. 6 (Loss function change on different datasets). The abscissa in Fig. 6 is the epoch and the ordinate is the cross-entropy loss; the figure shows the change in cross-entropy after each training epoch. Figure 6a shows that on CIFAR-10 the loss decreases continuously as training progresses, stabilizing and reaching a minimum at epoch 13.
Beyond epoch 13, the value of the loss function increases and no longer decreases, likely because the model is overfitting. To obtain better accuracy, this paper uses the model parameters at epoch 12 for testing. On the Tiny ImageNet dataset, Fig. 6b shows that the loss decreased steadily in the first few epochs; despite slight fluctuations, it still converged towards the optimal solution. After epoch 10 the loss is nearly unchanged and does not increase, indicating that the model has reached the optimum. We chose the training results at epoch 15 as the model parameters on Tiny ImageNet. Comparison of the proposed network with other networks: To demonstrate the performance of the proposed model in network compression while preserving classification accuracy, we compare it with other mainstream networks on CIFAR-10, with parameters set to dilated rate δ=3, width multiplier α=1.0, and resolution multiplier ρ=1. The results are shown in Table 3 (The proposed network vs. popular networks on CIFAR-10). Table 3 shows that the proposed network is more accurate on CIFAR-10 than the mainstream networks. With the same width factor and input image resolution as MobileNet, the proposed network retains high accuracy while using fewer parameters than MobileNet and GoogleNet. SqueezeNet has the fewest parameters, but at the cost of low accuracy; although the proposed network uses more parameters than SqueezeNet, it is much better in classification accuracy. Because SqueezeNet sacrifices classification accuracy, it is unsuitable for practical applications requiring high accuracy.
Therefore, in the trade-off between classification accuracy and model size, the proposed network is superior to SqueezeNet. By contrast, although VGG16 achieves slightly higher classification accuracy than the proposed network, its model is dozens of times larger, making computation difficult when computing power is limited. With fewer network parameters, the proposed network can easily be ported to mobile devices with limited storage while delivering better classification accuracy. Different dilated rates: This study uses the dilated rate to control the size of the dilated convolution, which determines the receptive field, and the receptive field in turn affects classification accuracy. We therefore compared the network's classification accuracy under different dilated rates, as summarized in Table 4 (Classification accuracy of different dilated rates). Table 4 shows how the classification accuracy changes with the dilated rate for width multiplier α=1.0 and resolution multiplier ρ=1. Joining dilated convolution with depthwise separable convolution improves classification accuracy on CIFAR-10 by two percentage points compared with the network without the joint convolutions, and the maximum accuracy is achieved at a dilated rate of 3. Note that accuracy decreases slightly as the dilated rate increases further: a larger dilated rate enlarges the receptive field, which may capture more global and semantic features, but blindly expanding the receptive field loses much local detail during convolution, hurting the classification accuracy for small targets and distant objects.
Accuracy after hyperparameter compression: This section verifies the classification accuracy when the width multiplier and the input resolution are applied to compress the model after adding the dilated rate. Figure 7 compares the classification accuracy of the proposed model under different width multipliers and input image resolutions, after further compression with the hyperparameters. The triangle markers show the accuracy for α=1.0, ρ=1; the square markers for α=1.0, ρ=0.8571; and the diamond markers for α=0.75, ρ=1. Figure 7 shows that the proposed network's classification accuracy improves as the dilated rate increases, and that further compressing the model with these parameters does not negate its effectiveness. Comparing results at different input resolutions with the width multiplier fixed, the increasing trend of classification accuracy is unaffected by the input image resolution. Comparing the square-marker and diamond-marker curves as the dilated rate increases from 1 to 3, the network reaches its maximum classification accuracy at a dilated rate of 3: the accuracy for width multiplier α=1.0 rises from 82.04% to 84.25%, and for α=0.75 from 78.75% to 80.35%. When the dilated rate exceeds 3, classification accuracy decreases slightly but remains better than that of the original network. Therefore, to make the network most effective, a dilated rate of 3 is used in subsequent experiments.
In summary, even when the model is further compressed by the width multiplier and the input image resolution, the proposed method still improves classification accuracy.

Results on different datasets

The results in the previous sections show that the proposed network performs well on the CIFAR-10 dataset. To investigate the transferability of the proposed model, we also trained and tested it on the Tiny ImageNet dataset. Table 5 shows that the proposed network achieves good accuracy on Tiny ImageNet. Compared to MobileNet with width multiplier α=1.0 and input size 224×224, the proposed network improves accuracy on both datasets. Compared to GoogleNet, the proposed network raises the accuracy on Tiny ImageNet from 82.94% to 85.01%. These comparisons demonstrate that the proposed network consistently improves classification accuracy, indicating good generalization ability. The proposed model also reduces the model size while preserving accuracy, which makes better classification accuracy achievable on mobile devices.

Table 5 Comparison of the proposed network with popular networks on the Tiny ImageNet dataset

Table 6 shows the influence of different dilated rates on the classification accuracy of the model on the Tiny ImageNet dataset. As the dilated rate increases, model accuracy rises from 81.73% to 85.01%; that is, the proposed network improves classification accuracy by close to four percentage points over the variant without dilated convolution on Tiny ImageNet. The best classification accuracy is again obtained at a dilated rate of 3, matching the behavior on CIFAR-10. Therefore, when using the proposed network for testing or training, the dilated rate should be set to 3 for the best classification accuracy. Moreover, Fig. 7 shows that the dilated rate can effectively increase the robustness of the model.
The proposed network thus improves classification accuracy across different datasets, demonstrating good generalization ability and accuracy.

Table 6 Classification accuracy of different dilated rates on the Tiny ImageNet dataset

The proposed model is mainly intended for image classification, aiming to balance network size and classification accuracy in a lightweight and efficient model. The experimental results on different datasets demonstrate that the proposed model has good generalization ability and classification accuracy. In addition, the network proposed in this paper can serve as the backbone of SSD or YOLO models for pedestrian detection, or it can be ported to different devices to realize real-time pedestrian detection on portable devices [42, 43]. Applications developed on the basis of this model can deliver additional practical value. This paper proposes a lightweight neural network model combining dilated convolution and depthwise separable convolution. The joint module reduces the computational burden through depthwise separable convolution, making it possible to apply the network model to resource- or computation-constrained devices. Meanwhile, the dilated convolution is used to enlarge the receptive field during convolution without increasing the number of convolution parameters; it extracts global and higher-level semantic features during convolution, which improves classification accuracy. The hyperparameters (i.e., the width multiplier and the resolution multiplier) are used to further compress the model so that it can be applied to devices with limited computational power.
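The joint module summarized above can be sketched in plain NumPy: a dilated depthwise convolution that filters each channel independently, followed by a 1×1 pointwise convolution that mixes channels. This is an illustrative re-implementation, not the authors' code; the shapes are arbitrary, and the dilated rate d=3 follows the paper's chosen setting.

```python
import numpy as np

def dilated_depthwise_separable(x, dw, pw, d=3):
    """Joint module sketch: dilated depthwise convolution followed by a
    1x1 pointwise convolution; zero padding keeps the spatial size.
    x: (C, H, W); dw: (C, k, k), one filter per channel; pw: (N, C)."""
    C, H, W = x.shape
    k = dw.shape[1]
    pad = d * (k - 1) // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):                      # depthwise: each channel separately
        for i in range(k):
            for j in range(k):
                # taps are spaced d pixels apart: the dilated kernel
                out[c] += dw[c, i, j] * xp[c, i * d:i * d + H, j * d:j * d + W]
    # pointwise: mix channels with 1x1 kernels
    return np.einsum('nc,chw->nhw', pw, out)

x = np.random.rand(4, 8, 8)
dw = np.random.rand(4, 3, 3)
pw = np.random.rand(6, 4)
y = dilated_depthwise_separable(x, dw, pw, d=3)
```

The depthwise stage touches only C·k² weights and the pointwise stage N·C weights, which is the factorization that makes the module lightweight.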
Compared with previous networks, this paper combines dilated convolution and depthwise separable convolution, which not only solves the problem that the computational cost is too large for resource-constrained equipment, but also resolves the tension between model size and classification accuracy. Experimental results demonstrate that the proposed model strikes a good compromise between classification accuracy and model size, maintaining classification accuracy even when the network is compressed. Moreover, the hyperparameters and the dilated rate can be used to further compress the trained model effectively. The proposed network greatly reduces the size and computation of the network, making it easier to port to devices; for example, it can be deployed on Android mobile devices or on embedded devices such as MCUs or FPGAs [44, 45]. In addition, companies or individuals using the network proposed in this paper can lower the required capacity of cloud computing servers and thus reduce the cost of renting them. The experiments also show that the computation and parameter counts of the proposed lightweight network are quite small, which allows companies to train on in-house servers, offering better security.

Abbreviations: CUDA: Compute Unified Device Architecture; GPU: Graphics Processing Unit; 2×1.0 224 (this paper): width multiplier α=2, resolution multiplier ρ=1, input picture size 224×224.

Krizhevsky A, Sutskever I, Hinton G (2012) ImageNet classification with deep convolutional neural networks In: Advances in Neural Information Processing Systems, 1097–1105. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition In: International Conference on Learning Representations, 1–14.. IEEE, USA.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–12.. IEEE, USA. Xu X, He C, Xu Z, Qi L, Wan S, Bhuiyan M (2020) Joint optimization of offloading utility and privacy for edge computing enabled iot. IEEE Internet Things J 7(4):2622–2629. https://doi.org/10.1109/JIOT.2019.2944007. He K, Zhang X, Ren S, Sun J (2016) Identity mappings in deep residual networks In: European Conference on Computer Vision, 630–645.. Springer, Germany. Xie S, Girshick R, Dollár P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1492–1500.. IEEE, USA. Zhou J, Hu X, Ma Y, Sun J, Wei T, Hu S (2019) Improving availability of multicore real-time systems suffering both permanent and transient faults. IEEE Trans Comput 68(12):1785–1801. Iandola F, Han S, Moskewicz M, Ashraf K, Dally W, Keutzer K (2017) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size In: International Conference on Learning Representations, 1–13.. IEEE, USA. Zhou J, Sun J, Zhou X, Wei T, Chen M, Hu S, Hu X (2018) IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 38(12):2215–2228. Xu X, Cai Q, Zhang G, Zhang J, Tian W, Zhang X, Liu A (2018) An incentive mechanism for crowdsourcing markets with social welfare maximization in cloud-edge computing. Concurrency Comput: Pract Experience:4961. https://doi.org/10.1002/cpe.4961. Li J, Cai T, Deng K, Wang X, Sellis T, Xia F (2020) Community-diversified influence maximization in social networks. Information Systems 92:1–12. Zhou J, Sun J, Cong P, Liu Z, Zhou X, Wei T, Hu S (2020) Security-critical energy-aware task scheduling for heterogeneous real-time mpsocs in iot. IEEE Trans Serv Comput 13(4):745–758.
https://doi.org/10.1109/TSC.2019.2963301. Guo Y, Wang J, Peeta S, Anastasopoulos P (2020) Personal and societal impacts of motorcycle ban policy on motorcyclists' home-to-work morning commute in China. Travel Behav Soc 19:137–150. Guo Y, Peeta S (2020) Impacts of personalized accessibility information on residential location choice and travel behavior. Travel Behav Soc 19:99–111. Ramlatchan A, Yang M, Liu Q, Li M, Wang J, Li Y (2018) A survey of matrix completion methods for recommendation systems. Big Data Mining and Analytics 1(4):308–323. Zhang C, Yang M, Lv J, Yang W (2018) An improved hybrid collaborative filtering algorithm based on tags and time factor. Big Data Mining and Analytics 1(2):128–136. Han S, Mao H, Dally W (2016) Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding In: International Conference on Learning Representations, 1–14.. IEEE, USA. Han S, Pool J, Tran J, Dally W (2015) Learning both weights and connections for efficient neural network In: Advances in Neural Information Processing Systems, 1135–1143.. Springer, Germany. Ghemawat S, Gobioff H, Leung S-T (2003) The Google file system In: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, 29–43.. IEEE, USA. Chollet F (2017) Xception: Deep learning with depthwise separable convolutions In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1251–1258.. IEEE, USA. Howard A, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: Efficient convolutional neural networks for mobile vision applications In: International Conference on Learning Representations, 1–9.. IEEE, USA. Kumar S, Singh M (2018) Big data analytics for healthcare industry: impact, applications, and tools. Big Data Mining and Analytics 2(1):48–57.
Chang F, Dean J, Ghemawat S, Hsieh W, Wallach D, Burrows M, Chandra T, Fikes A, Gruber R (2008) Bigtable: A distributed storage system for structured data. ACM Trans Comput Syst (TOCS) 26(2):1–26. Liu Y, Wang S, Khan M, He J (2018) A novel deep hybrid recommender system based on auto-encoder with neural collaborative filtering. Big Data Mining and Analytics 1(3):211–221. Xu X, Mo R, Dai F, Lin W, Wan S, Dou W (2019) Dynamic resource provisioning with fault tolerance for data-intensive meteorological workflows in cloud. IEEE Trans Ind Inform. https://doi.org/10.1109/TII.2019.2959258. Dean J, Ghemawat S (2008) Mapreduce: simplified data processing on large clusters. Commun ACM 51(1):107–113. Xu X, Liu X, Xu Z, Wang C, Wan S, Yang X (2019) Joint optimization of resource utilization and load balance with privacy preservation for edge services in 5g networks. Mobile Netw Appl:1–12. https://doi.org/10.1007/s11036-019-01448-8. Yu F, Koltun V (2016) Multi-scale context aggregation by dilated convolutions In: International Conference on Learning Representations, 1–13.. IEEE, USA. Wang L, Zhang X, Wang R, Yan C, Kou H, Qi L (2020) Diversified service recommendation with high accuracy and efficiency. Knowledge-Based Systems:106196. https://doi.org/10.1016/j.knosys.2020.106196. Xu X, Zhang X, Khan M, Dou W, Xue S, Yu S (2020) A balanced virtual machine scheduling method for energy-performance trade-offs in cyber-physical cloud systems. Futur Gener Comput Syst 105:789–799. Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7132–7141.. IEEE, USA. Zhang X, Zhou X, Lin M, Sun J (2018) Shufflenet: An extremely efficient convolutional neural network for mobile devices In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6848–6856.. IEEE, USA. 
Guo Y, Wang J, Peeta S, Anastasopoulos P (2018) Impacts of internal migration, household registration system, and family planning policy on travel mode choice in china. Travel Behav Soc 13:128–143. Chen Y, Zhang N, Zhang Y, Chen X, Wu W, Shen X (2019) Energy efficient dynamic offloading in mobile edge computing for internet of things. Trans Cloud Comput. https://doi.org/10.1109/TCC.2019.2898657. Zhong W, Yin X, Zhang X, Li S, Dou W, Wang R, Qi L (2020) Multi-dimensional quality-driven service recommendation with privacy-preservation in mobile edge environment. Comput Commun 157:116–123. https://doi.org/10.1016/j.comcom.2020.04.018. Qi L, He Q, Chen F, Zhang X, Dou W, Ni Q (2020) Data-driven web apis recommendation for building web applications[j]. IEEE Trans Big Data. https://doi.org/10.1109/TBDATA.2020.2975587. Han S, Mao H, Dally W (2015) Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift In: Proceedings of the 32nd International Conference on Machine Learning, 448–456.. IEEE, USA. Gu J, Wang Z, Kuen J, Ma L, Shahroudy A, Shuai B, Liu T, Wang X, Wang G, Cai J, et al (2018) Recent advances in convolutional neural networks. Pattern Recog 77:354–377. Liu H, Kou H, Yan C, Qi L (2020) Keywords-driven and popularity-aware paper recommendation based on undirected paper citation graph[j]. Complexity:1–15. https://doi.org/10.1155/2020/2085638. Kingma D, Ba J (2015) Adam: A method for stochastic optimization In: International Conference on Learning Representations, 1–15.. IEEE, USA. Liu H, Kou H, Yan C, Qi L (2019) Link prediction in paper citation network to construct paper correlation graph. EURASIP J Wirel Commun Netw 233:1–12. https://doi.org/10.1186/s13638-019-1561-7. 
Hu H, Peng R, Tai Y-W, Tang C-K (2016) Network trimming: A data-driven neuron pruning approach towards efficient deep architectures In: International Conference on Learning Representations, 1–9.. IEEE, USA. Qiu J, Wang J, Yao S, Guo K, Li B, Zhou E, Yu J, Tang T, Xu N, Song S, et al. (2016) Going deeper with embedded fpga platform for convolutional neural network In: Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 26–35.. IEEE, USA. Wu J, Leng C, Wang Y, Hu Q, Cheng J (2016) Quantized convolutional neural networks for mobile devices In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4820–4828.. IEEE, USA. The authors thank the editor and reviewers for their time and consideration. This work is supported in part by the National Natural Science Foundation of China (No. 61304205, 61502240), the Natural Science Foundation of Jiangsu Province (BK20191401), and the Innovation and Entrepreneurship Training Project of College Students (201910300050Z, 201910300222). School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China: Wei Sun. Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing, 210044, China: Wei Sun & Xiaorui Zhang. Jiangsu Engineering Center of Network Monitoring, Nanjing, 210044, China: Xiaorui Zhang. Department of Civil and Environmental Engineering, Rensselaer Polytechnic Institute, Troy, 12180, USA: Xiaozheng He. All authors participated in the conception and drafting of the article or revised it critically for important intellectual content, and approved the final version. Correspondence to Wei Sun. This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue. The authors declare that they have no competing interests. Sun, W., Zhang, X. & He, X. Lightweight image classifier using dilated and depthwise separable convolutions.
J Cloud Comp 9, 55 (2020). https://doi.org/10.1186/s13677-020-00203-9
Accepted: 11 September 2020
Keywords: Classification accuracy; Depthwise separable convolution; Dilated convolution; Lightweight neural network
Phase-field model of pitting corrosion kinetics in metallic materials
npj Computational Materials, Jul 2018
Talha Qasim Ansari, Zhihua Xiao, Shenyang Hu, Yulan Li, Jing-Li Luo, San-Qiang Shi
Pitting corrosion is one of the most destructive forms of corrosion that can lead to catastrophic failure of structures.
This study presents a thermodynamically consistent phase field model for the quantitative prediction of the pitting corrosion kinetics in metallic materials. An order parameter is introduced to represent the local physical state of the metal within a metal-electrolyte system. The free energy of the system is described in terms of its metal ion concentration and the order parameter. Both the ion transport in the electrolyte and the electrochemical reactions at the electrolyte/metal interface are explicitly taken into consideration. The temporal evolution of ion concentration profile and the order parameter field is driven by the reduction in the total free energy of the system and is obtained by numerically solving the governing equations. A calibration study is performed to couple the kinetic interface parameter with the corrosion current density to obtain a direct relationship between overpotential and the kinetic interface parameter. The phase field model is validated against the experimental results, and several examples are presented for applications of the phase-field model to understand the corrosion behavior of closely located pits, stressed material, ceramic particles-reinforced steel, and their crystallographic orientation dependence.
Introduction

Corrosion is the gradual destruction of materials (usually metallic materials) via chemical and/or electrochemical reaction with their environment. It costs about 3.1% of the gross domestic product (GDP) in the United States, which is much more than the cost of all natural disasters combined. Localized corrosion, such as pitting corrosion, is one of the most destructive forms of corrosion; it leads to the catastrophic failure of structures and raises both human safety and financial concerns.1,2,3 Pitting corrosion of stainless steel usually occurs in two different stages: (1) pit initiation from passive film breakage4,5,6 and (2) pit growth.2,3,7,8,9,10,11,12 In this study, we focus on the development of a phase-field modeling capability to study pit growth by considering both anodic and cathodic reactions. In the past few decades, great efforts have been made to develop numerical models for pitting corrosion. The moving interface and the electrical double layer at the metal/electrolyte interface are the two challenging problems faced by most of these numerical models. These numerical models can be divided according to the way in which a moving interface is incorporated.
Several steady-state9,10,13,14,15,16,17,18 and transient-state19,20,21,22,23,24,25,26,27,28 models have been developed over the years that do not allow for changes in the shape and dimensions of the pits/crevices as corrosion proceeds. Recent advances in numerical techniques, such as the finite element method, the finite volume method, the arbitrary Lagrangian–Eulerian method, the mesh-free method, and the level set method, have encouraged researchers to model the evolving morphology of pits with a moving interface. Most of these modeling efforts have used a sharp interface to represent the corroding surface, which requires a matching mesh at each time step,11,29 thus increasing the errors associated with the violation of mass conservation laws and increasing the computational cost. Finite volume models overcome this problem by creating a matching mesh as a function of the ion concentration, but they are still unable to model complex microstructures.12,30,31 A mesh-free method, the peridynamic model, has been applied to pitting corrosion, but it only considered electrochemical reactions without the ionic transport in the electrolyte.7 Over the past three decades, the phase field (PF) method has emerged as a powerful simulation tool for modeling microstructure evolution. PF models study phase transformations by defining the system's free energy, and the system's microstructure evolution is predicted by free energy minimization. The PF approach has been extensively applied to many materials processes, such as solidification, dendrite growth, solute diffusion and segregation, phase transformation, electrochemical deposition, dislocation dynamics, crack propagation, void formation and migration, gas bubble evolution, and electrochemical processes.32 PF models assume a diffuse interface at phase boundaries rather than a sharp one, which makes the mathematical functions continuous at the interface.
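The contrast between a sharp and a diffuse interface can be seen directly in the hyperbolic-tangent profile that phase-field models relax to. The snippet below is a generic illustration, not taken from this paper; δ is an assumed interface-width parameter.

```python
import numpy as np

# Generic diffuse-interface profile: the order parameter goes smoothly
# from 0 (one phase) to 1 (the other) over a width ~delta, so the field
# and its derivatives stay continuous across the interface.
delta = 1.0                          # assumed interface-width parameter
x = np.linspace(-10.0, 10.0, 2001)
eta = 0.5 * (1.0 + np.tanh(x / (2.0 * delta)))

# Far from the interface the profile saturates at the two phase values...
assert abs(eta[0]) < 1e-3 and abs(eta[-1] - 1.0) < 1e-3

# ...and the gradient is finite everywhere, unlike the infinite jump of a
# sharp (step-function) interface.
grad = np.gradient(eta, x)
assert np.isfinite(grad).all()
```

This continuity is exactly what lets PF models avoid tracking a moving boundary with a matching mesh.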
A few recent attempts have been made to use the PF method to model pitting corrosion and stress corrosion cracking without considering cathodic reactions, ionic transport, and in particular the dependence of overpotential on the metal ion concentration in the electrolyte.33,34 In reality, the transport of ionic species in the electrolyte often plays a very important role in diffusion-controlled corrosion processes, and the effects of these ionic species must be incorporated to model the process adequately. In this study, a PF method is used to model pitting corrosion by considering both anodic and cathodic reactions, the transport of ionic species, and the dependence of overpotential on the metal ion concentration in the electrolyte. This paper is organized as follows. Firstly, we describe the system and the electrochemical reactions considered in this work, followed by the construction of the PF model. The total free energy of this PF model consists of three parts: bulk free energy, gradient energy, and electrostatic energy. We use the KKS model35 to construct the bulk free energy and the interfacial energy. Secondly, we develop the governing equations, which comprise mass diffusion, electromigration, and chemical reaction terms, whereas the interface conditions are incorporated by introducing an order parameter that defines the system's physical state at each material point. Thirdly, a study is included to couple the kinetic interface parameter and the system's total overpotential. Fourthly, the PF model is validated against experimental results and previous numerical models. Lastly, several case studies are presented to demonstrate the strength of the proposed PF model.

Results and discussion

The system and electrochemical reactions considered

The system studied consists of stainless steel 304 (SS304) in dilute salt water (Fig. 1). It is assumed that new passive film will not form in this system.
We will consider the effects of passive film formation in a future study. In this model, the following electrochemical reactions and kinetics are considered.

Fig. 1 Schematic of the chemical reactions that occur during the pitting corrosion process

For the oxidation of the main metal alloy elements in SS304, $$Fe\,\rightarrow\,Fe^{2 + } + 2e^ -$$ $$Ni\,\rightarrow\,Ni^{2 + } + 2e^ -$$ $$Cr\,\rightarrow\,Cr^{3 + } + 3e^ -$$ In the following, Me is used to represent the effective metal in SS304 with an average charge number of z1. The material properties of SS304, such as the molar concentration in the solid phase (csolid = 143 mol/L),12 the saturation concentration in the electrolyte (csat = 5.1 mol/L),12 the effective diffusion coefficient (D1 = 8.5 × 10−10 m2/s),12 and the average charge number (z1 = 2.19) based on Fe, Ni, Cr, and their mole fractions within the alloy, are taken from Ref. 12. The above reactions can then be simplified to $$M_e\,\rightarrow\,M_e^{z_1} + z_1e^ -$$ (1) The anodic dissolution of the metal is assumed to follow the Butler–Volmer equation, $$i_a = i_0\left[ {\exp \left( {\frac{{\alpha _az_1F\varphi _{s,o}}}{{R_gT}}} \right) - \exp \left( { - \frac{{\alpha _cz_1F\varphi _{s,o}}}{{R_gT}}} \right)} \right]$$ (2) where F is the Faraday constant, Rg is the gas constant, T is the temperature, φs,o is the polarization overpotential, io is the exchange current density, αa is the anodic charge transfer coefficient, and αc is the cathodic charge transfer coefficient (αc = 1 − αa). The values of the above-mentioned parameters are reported in Table 1s (supplementary material). For the hydrogen discharge reaction in Eq. (3), the corresponding rate is given in Eq. (4): $$H^ + + e^ - \to \frac{1}{2}H_2$$ (3) $$J_5 = J_{50}\left[ {H^ + } \right]exp\left( {\frac{{\alpha _5F}}{{R_gT}}\varphi _{s,o}} \right)$$ (4) For the reduction of water (Eq. (5)), the corresponding rate is given in Eq.
(6): $$H_2O + e^ - \to H + OH^ -$$ (5) $$J_6 = J_{60}exp\left( {\frac{{\alpha _6F}}{{R_gT}}\varphi _{s,o}} \right)$$ (6) Experimental values of i0, J50, J60, αa, α5, and α6 are given in Table 1s. In this work, the following two reactions in the electrolyte are considered: $$M_e^{z_1} + H_2O\rightleftharpoons\,M_eOH^{z_1 - 1} + H^ +$$ (7) $$H_2O\rightleftharpoons\,OH^ - + H^ +$$ (8) The equilibrium constants of the reactions in Eqs. (7) and (8) are defined as K1 and K2, respectively: $$K_1 = \frac{{k_{1f}}}{{k_{1b}}},\,K_2 = \frac{{k_{2f}}}{{k_{2b}}}$$ where k1f, k1b, k2f, and k2b are the forward and backward reaction rates. Therefore, a total of six ion species are considered in this model, $$M_e^{z_1} = c_1;M_eOH^{z_2} = c_2;Cl^{z_3} = c_3;Na^{z_4} = c_4;H^{z_5} = c_5;OH^{z_6} = c_6$$ where zi (i = 1, 2, …, 6) are the charge numbers of the respective species (their values are given in Table 2s), and ci (i = 1, 2, …, 6) are the normalized concentrations of the respective species, determined by ci = Ci/Csolid for i = 1, 2, …, 6, where Ci represents the molar concentration of the ionic species. The constants K1 and K2 can also be expressed as functions of Ci: $$K_1 = C_2C_5/C_1,\,K_2 = C_5C_6$$

The phase field model for corrosion

The surface of the metal is normally covered with the passive film; however, a partial breakdown in the film can occur, which may initiate pits like the one illustrated in Fig. 1. The model consists of two phases: the solid phase Me (i.e., the metal part) and the liquid phase (i.e., the electrolyte part).
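The anodic Butler–Volmer rate of Eq. (2) and the equilibrium relation K1 = C2C5/C1 from Eq. (7) can be evaluated directly. In the sketch below, i0 and αa are placeholder values; the actual parameter values live in the paper's Table 1s.

```python
import math

F = 96485.33    # Faraday constant, C/mol
Rg = 8.314      # gas constant, J/(mol K)

def butler_volmer(phi, i0=1.0, alpha_a=0.5, z1=2.19, T=298.15):
    """Anodic dissolution current density, Eq. (2). i0 and alpha_a are
    placeholders (the paper takes them from its Table 1s); z1 = 2.19 is
    the paper's average charge number for SS304."""
    alpha_c = 1.0 - alpha_a
    a = alpha_a * z1 * F * phi / (Rg * T)
    c = alpha_c * z1 * F * phi / (Rg * T)
    return i0 * (math.exp(a) - math.exp(-c))

# Zero overpotential gives zero net current; a positive (anodic)
# overpotential drives net metal dissolution.
assert butler_volmer(0.0) == 0.0
assert butler_volmer(0.05) > 0.0 > butler_volmer(-0.05)

def k1_equilibrium(C1, C2, C5):
    """K1 = C2*C5/C1 for the hydrolysis reaction in Eq. (7)."""
    return C2 * C5 / C1
```

The exponential dependence on φs,o is why small changes in overpotential translate into large changes in dissolution rate.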
The driving force for metal corrosion and microstructure evolution is the minimization of the system's total free energy, which usually consists of the bulk free energy Eb, the interface energy Ei, and long-range interaction energies such as the elastic strain energy Es and the electrostatic energy Ee.36,37 The system's total energy can be expressed as $$E = E_b + E_i + E_s + E_e$$ (9) The inclusion of elastic and/or plastic deformation in the model is completely feasible, as it has been done in other systems.38,39,40 It may even be necessary to include the strain energy term if a volumetrically non-compatible passive film develops during corrosion. Because the formation of a passive film is not considered in this first stage of the work, for simplicity, the elastic strain energy is omitted here. In a later section, the effect of applied or residual stress on pitting is studied using the concept of overpotential rather than strain energy. We now have $$E = E_b + E_i + E_e$$ (10) $$E = {\int} {\left[ {f_b\left( {c_1,\eta } \right) + \frac{{\alpha _u}}{2}\left( {\nabla \eta } \right)^2 + z_1FC_1\varphi } \right]} dV$$ (11) where fb(c1, η), derived in the next section, is the local bulk free energy density, which is a function of the normalized concentration of the ionic species c1 and the order parameter η; the second term in Eq. (11) is the gradient energy density that contributes to the interfacial energy, in which αu is the gradient energy coefficient, related to physical parameters in a later section; and the third term in Eq. (11) is the electrostatic energy density, where C1 is the molar concentration of the metal ion and φ is the electrostatic potential.

Bulk free energy density

To determine the bulk free energy density fb(c1, η), we use the model proposed by Kim et al.
for binary alloys (the KKS model).35 We chose the KKS model because it places fewer limitations on the interface thickness than some other models, such as that of Wheeler et al.41 The detailed derivations of the functions in the KKS model are skipped here; readers are referred to the original paper.35 In the KKS model, the model parameters can be analytically determined from material properties and experimental conditions for the system of interest. In the KKS model, at each point the material is regarded as a mixture of two coexisting phases, and a local equilibrium between the two phases is always assumed: $$c_1 = h\left( \eta \right)c_s + \left[ {1 - h\left( \eta \right)} \right]c_l$$ (12) $$\partial f_s\left( {c_s} \right)/\partial c_s = \partial f_l\left( {c_l} \right)/\partial c_l$$ (13) where cs and cl represent the normalized concentrations of the solid and liquid phases, respectively, and h(η) is a monotonically varying function with h(0) = 0 and h(1) = 1. In this study, it is assumed that h(η) = η2(3 − 2η). In Eq. (13), the free energy densities of the solid and liquid phases are expressed as fs(cs) and fl(cl), respectively. Because the material at each point is considered a mixture of solid and liquid phases, by the same argument the bulk free energy density is expressed in a similar manner, $$f_b(c_1,\eta ) = h\left( \eta \right)f_s(c_s) + \left[ {1 - h\left( \eta \right)} \right]f_l(c_l) + wg(\eta )$$ (14) This is a double-well potential in the energy space. The height of the double-well barrier is w, and g(η) = η2(1 − η)2. This expression has two minima, at η = 0 and η = 1, which represent the electrolyte phase and the solid phase, respectively.
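The stated properties of the interpolation function h(η) and the double-well potential g(η) are easy to verify numerically; this is a pure illustration of the functional forms above, with no fitted parameters.

```python
def h(eta):
    """KKS interpolation function h(eta) = eta^2 (3 - 2 eta)."""
    return eta**2 * (3.0 - 2.0 * eta)

def g(eta):
    """Double-well potential g(eta) = eta^2 (1 - eta)^2."""
    return eta**2 * (1.0 - eta)**2

# h runs from 0 to 1, so the mixture rule of Eq. (12) recovers the pure
# liquid concentration at eta = 0 and the pure solid one at eta = 1.
assert h(0.0) == 0.0 and h(1.0) == 1.0

# g has its two minima at the bulk phases (eta = 0 and eta = 1) with an
# energy barrier of height w*g(1/2) in between.
assert g(0.0) == 0.0 and g(1.0) == 0.0 and g(0.5) > 0.0
```

Because both h and g have zero slope at η = 0 and η = 1, the bulk phases are true minima of fb and the interface stays localized.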
For this system, fs(cs) and fl(cl) can reasonably be taken as parabolic functions:42

$$f_s(c_s) = A(c_s - c_{eq,s})^2$$ (15)

$$f_l(c_l) = A(c_l - c_{eq,l})^2$$ (16)

where ceq,s = 1 and ceq,l = Csat/Csolid are the normalized equilibrium concentrations of the solid and liquid phases, respectively. The temperature-dependent free energy density proportionality constant A is taken to be equal for the liquid and solid phases; its value is chosen such that the driving force for metal corrosion in the approximated system is close to that of the original thermodynamic system.42 The evolution of the phase order parameter η and the metal ion concentration c1 in time and space is assumed to obey the Ginzburg–Landau (also known as Allen–Cahn)43 and Cahn–Hilliard44 equations, respectively:

$$\frac{\partial \eta}{\partial t} = -L\frac{\delta E}{\delta \eta} = L\left[\nabla\alpha_u\nabla\eta + h'(\eta)\left\{f_l(c_l) - f_s(c_s) - (c_l - c_s)\frac{\partial f_l(c_l)}{\partial c_l}\right\} - wg'(\eta)\right]$$ (17)

$$\frac{\partial c_1}{\partial t} = \nabla M\nabla\frac{\delta E}{\delta c_1} + R_1 = \nabla\left[D_1(\eta)\nabla c_1\right] + \nabla\left[D_1(\eta)h'(\eta)(c_l - c_s)\nabla\eta\right] + \nabla\left[\frac{z_1 F c_1 D_1(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta) + R_1$$ (18)

where L is the kinetic parameter representing the solid–liquid interface mobility, and M is the mobility of the metal ions, expressed as \(M = D_1(\eta)/(\partial^2 f_b/\partial c_1^2)\). In Eq.
(18), R1 is the source and/or sink term for metal ions due to the reaction in Eq. (7), and it takes the form (−k1fc1 + k1bc2c5)y1(η). The function y1(η) defined below ensures that the reaction in Eq. (7) occurs only in the electrolyte phase:

$$y_1(\eta) = \begin{cases} 1, & \eta \le 0 \\ \text{linear from } 1 \text{ to } 0, & 0 < \eta < 0.1 \\ 0, & \eta \ge 0.1 \end{cases}$$ (19)

Conservation of charge

In this study, we follow the approach of Pongsaksawad et al.45 rather than Guyer's model,46 which removes the need to discretize the double layer at the metal–electrolyte interface. This allows our PF model to simulate the corrosion process from meso- to macro-length scales, whereas Guyer's model is limited to the nanoscale. It also makes it possible to incorporate the effect of laminar/turbulent flow on the metal–electrolyte interface in the case of a moving electrolyte.47 The conservation of charge can be expressed as

$$\frac{\partial \rho_e}{\partial t} = \nabla\left\{\sigma_e\left[1 - y_1(\eta)\right]\nabla\varphi\right\} + y_1(\eta)FC_{solid}\sum z_i \frac{\partial c_i}{\partial t}$$ (20)

where ρe is the charge density and σe is the electrical conductivity of the metal in the solid phase. The function [1 − y1(η)] interpolates the electrical conductivity σe in the solid phase to zero in the electrolyte phase, with y1(η) as defined in Eq. (19). The time needed for ionic diffusion is much longer than that required to reach steady-state charge accumulation across the interface, so the conservation of charge in this system can be treated at steady state. The relation given in Eq.
(20) then reduces to

$$0 = \nabla\left\{\sigma_e\left[1 - y_1(\eta)\right]\nabla\varphi\right\} + y_1(\eta)FC_{solid}\sum z_i \frac{\partial c_i}{\partial t}$$ (21)

Gradient energy coefficient

The height of the double-well potential w and the gradient energy coefficient αu can be related to the interface energy ϱ and the interface thickness l:33

$$\varrho = 4\sqrt{w\alpha_u}$$ (22)

$$l = \sqrt{2}\,\alpha'\sqrt{\frac{\alpha_u}{w}}$$ (23)

where α′ is a constant determined by the range of the order parameter chosen to define the interface; if the interface region is defined as 0.05 < η < 0.95, the value of α′ is 2.94.35

Transport equations for other ionic species in the electrolyte

The governing equations for the other five ionic species are Nernst–Planck equations with chemical reaction terms:

$$\frac{\partial c_2(\boldsymbol{x},t)}{\partial t} = \nabla\left[D_2(\eta)\nabla c_2\right] + \nabla\left[\frac{z_2 F c_2 D_2(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta) + R_2$$ (24)

$$\frac{\partial c_3(\boldsymbol{x},t)}{\partial t} = \nabla\left[D_3(\eta)\nabla c_3\right] + \nabla\left[\frac{z_3 F c_3 D_3(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta)$$ (25)

$$\frac{\partial c_4(\boldsymbol{x},t)}{\partial t} = \nabla\left[D_4(\eta)\nabla c_4\right] + \nabla\left[\frac{z_4 F c_4 D_4(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta)$$ (26)

$$\frac{\partial c_5(\boldsymbol{x},t)}{\partial t} = \nabla\left[D_5(\eta)\nabla c_5\right] + \nabla\left[\frac{z_5 F c_5 D_5(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta) + R_5$$ (27)

$$\frac{\partial c_6(\boldsymbol{x},t)}{\partial t} = \nabla\left[D_6(\eta)\nabla c_6\right] + \nabla\left[\frac{z_6 F c_6 D_6(\eta)}{R_g T}\nabla\varphi\right]y_1(\eta) + R_6$$ (28)

where R2 is the source/sink term originating from the electrochemical reaction in Eq. (7), which takes the form [k1fc1 − k1bc2c5]y1(η). The rates of the forward and backward reactions are k1f and k1b, respectively. It is assumed that no electrochemical reaction occurs inside the metal; this is ensured by the y1(η) defined in Eq. (19). R5 and R6 are the source/sink terms originating from the electrochemical reactions in Eqs. (7) and (8) and take the forms \(\left[k_{1f}c_1 - k_{1b}c_2c_5 + k_{2f} - k_{2b}c_5c_6\right]y_1(\eta) - \left(\frac{J_5}{z_5FC_{solid}}\right)y_2(\eta)\) and \(\left[k_{2f} - k_{2b}c_5c_6\right]y_1(\eta) - \left(\frac{J_6}{z_6FC_{solid}}\right)y_2(\eta)\), respectively. The rates of the forward and backward reactions for the hydrolysis of water are k2f and k2b, respectively. Note that R5 and R6 have an additional term near the metal–electrolyte interface due to the cathodic reactions in Eqs. (3) and (5), where J5 and J6 are defined in Eqs. (4) and (6), respectively. These reaction terms are multiplied by a step function y2(η) to ensure that the reactions occur only in a small region near the metal surface:

$$y_2(\eta) = \begin{cases} 1, & 0.01 \le \eta < 0.05 \\ 0, & \text{otherwise} \end{cases}$$ (29)

Note also that Eqs. (25) and (26) contain no source/sink terms because c3 (Cl−) and c4 (Na+) are assumed not to take part in any reaction. This no longer holds if a salt film forms; the effect of salt film formation will be addressed in a future study.
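The interpolation functions used above are simple piecewise-linear switches in η. A minimal helper sketch (the function names are ours, introduced only for illustration):

```python
def ramp(eta, lo, hi, v_lo, v_hi):
    """Piecewise-linear switch: v_lo for eta <= lo, v_hi for eta >= hi,
    linear interpolation on the open band (lo, hi)."""
    if eta <= lo:
        return v_lo
    if eta >= hi:
        return v_hi
    return v_lo + (eta - lo) / (hi - lo) * (v_hi - v_lo)

def y1(eta):
    # Eq. (19): 1 in the electrolyte (eta <= 0), ramping to 0 for eta >= 0.1.
    return ramp(eta, 0.0, 0.1, 1.0, 0.0)

def y2(eta):
    # Eq. (29): sharp window, 1 only for 0.01 <= eta < 0.05.
    return 1.0 if 0.01 <= eta < 0.05 else 0.0
```

The diffusivity interpolations of Eqs. (30)–(31) below follow the same ramp pattern, e.g. `ramp(eta, 0.90, 0.95, D1, D1 / gamma)` for the metal ion.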
The electrostatic potential φ is governed by Eq. (21), coupled with the governing Eqs. (18) and (24)–(28). The diffusivity Di is a function of the order parameter η, because the diffusivity of each ionic species differs between the metal and electrolyte phases. The diffusivities of all ions are defined using step functions of η. For the metal ion c1, the step function in Eq. (30) is used, in which the diffusivity in the metal is assumed to be a factor of γ smaller than in the electrolyte; the step function in Eq. (31) is used for all other ionic species (c2, c3, c4, c5, and c6):

$$D_1(\eta) = \begin{cases} D_1, & \eta < 0.90 \\ \text{linear from } D_1 \text{ to } D_1/\gamma, & 0.90 \le \eta \le 0.95 \\ D_1/\gamma, & \eta > 0.95 \end{cases}$$ (30)

$$D_i(\eta) = \begin{cases} D_i, & \eta < 0.90 \\ \text{linear from } D_i \text{ to } 0, & 0.90 \le \eta \le 0.95 \\ 0, & \eta > 0.95 \end{cases}$$ (31)

for i = 2, 3, …, 6.

Overpotential

The overpotential is expressed as

$$\varphi_{s,o} = \varphi_m - \varphi_{m,se} - \varphi_c - \varphi_l$$ (32)

where φm is the potential in the metal phase, also known as the applied potential; φm,se is the standard electrode potential of the metal; and φc is the concentration overpotential expressed in (33).
$$\varphi_c = \frac{R_g T}{F z_1}\ln\frac{c_{1b}}{c_{eq,l}}$$ (33)

The concentration of \(M_e^{z_1}\) near the interface is

$$c_{1b} = \begin{cases} c_1, & \eta = 0.05 \\ 0, & \text{otherwise} \end{cases}$$ (34)

and the electrostatic potential near the interface is

$$\varphi_l = \begin{cases} \varphi, & \eta = 0.05 \\ 0, & \text{otherwise} \end{cases}$$ (35)

Kinetic interface parameter and overpotential relation

In this model, metal corrosion is described by the order parameter η, and the corrosion rate is controlled by the kinetic interface parameter L. The shift in corrosion mode from activation-controlled to diffusion-controlled can be modeled by continuous variation of L; in the activation-controlled mode, the relationship between L and the corrosion rate is linear.33 Following the Butler–Volmer equation in (2), the kinetic interface parameter is assumed to depend on the overpotential in the same way as the current density, as expressed in (36). A similar technique is implemented in a peridynamic model, in which the interface diffusivity is directly related to the current density through the Tafel relation.7

$$L = L_o\left[\exp\left(\frac{\alpha_a z_1 F \varphi_{s,o}}{R_g T}\right) - \exp\left(-\frac{\alpha_c z_1 F \varphi_{s,o}}{R_g T}\right)\right]$$ (36)

Following the method developed in Refs. 7 and 33 and using the experimental values for SS304 reported in Table 1s, i0 = 1.0 × 10−6 A/cm2 and αa = 0.26, we calculated L0 = 1.94 × 10−13 m3/(J s).

1D PF model

We implemented the PF model to simulate the corrosion evolution in 1D.
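Equations (33) and (36) can be chained into a small numeric sketch. L0 and αa follow the SS304 calibration quoted above, while z1 (mean dissolved-ion charge), αc, and T are illustrative assumptions of ours:

```python
import math

F_CONST = 96485.33  # Faraday constant, C/mol
R_GAS = 8.314       # gas constant, J/(mol K)

def concentration_overpotential(c1b, c_eq_l, z1=2.19, T=293.15):
    # Eq. (33): phi_c = (Rg*T)/(F*z1) * ln(c1b / c_eq_l)
    return R_GAS * T / (F_CONST * z1) * math.log(c1b / c_eq_l)

def kinetic_parameter(phi_so, L0=1.94e-13, z1=2.19,
                      alpha_a=0.26, alpha_c=0.26, T=293.15):
    # Eq. (36): Butler-Volmer-like dependence of L on the overpotential.
    x = z1 * F_CONST * phi_so / (R_GAS * T)
    return L0 * (math.exp(alpha_a * x) - math.exp(-alpha_c * x))
```

As expected from Eq. (36), L vanishes at zero overpotential and grows rapidly (exponentially) with increasing anodic overpotential, which is what drives the transition toward diffusion control at higher potentials.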
The simulations are executed at T = 293.15 K (20 °C) with a metal potential of 844 mV SHE (standard hydrogen electrode), i.e., 600 mV SCE (saturated calomel electrode), in a 1 M NaCl solution. The temperature dependence of the diffusion coefficient is governed by the Einstein relation.12 The PF simulation results for the corroded length are then compared with the 1D pencil-electrode experimental findings.3 The simulations are performed for 400 s, and the corroded length is plotted against the square root of time \(\left(\sqrt{t}\right)\). The simulation results agree well with the experimental results, as illustrated in Fig. 2: the 1D PF model and the 1D pencil-electrode experiments show similar slopes.

Fig. 2 Comparison of corrosion kinetics of the 1D PF model with the 1D pencil electrode (1D growth in an artificial pit) in 1 M NaCl at 20 °C and 600 mV (SCE)3

A qualitative study of the concentration distribution of ionic species inside the electrolyte is performed, as in many classical numerical models of crevice and pitting corrosion.9,10,48 It is difficult to measure the molar concentration distribution of ionic species in the electrolyte quantitatively in experiments; because such experimental data are lacking, we discuss these concentration variations theoretically. Figure 1s (supplementary material) shows the concentration in mol/L on a logarithmic scale. The higher concentration of metal ions near the interface results in a slight increase in the [H+] concentration (i.e., a decrease in pH) due to the strong coupling between C1, C2, and C5. The value of [H+] increases with overpotential, because a higher overpotential produces metal ions, and hence metal ion hydrolysis, at a higher rate. Although this study was performed at a lower overpotential, a small increase in C5 can still be seen in Fig. 1s (supplementary material).
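The parabolic (√t) kinetics against which the model is compared can be reproduced by a quasi-steady diffusion-control argument: if the metal-ion profile across the pit cavity is approximately linear, the interface recedes at v = D(Csat/Csolid)/d, giving d(t) ≈ √(2D(Csat/Csolid)t). A sketch with illustrative parameter values (not the paper's):

```python
def pit_depth_quasi_steady(D, ratio, t_end, dt=0.01, d0=1e-6):
    """Quasi-steady, diffusion-controlled pit deepening (illustrative).

    Assumes a linear metal-ion profile across the cavity, so the front
    recedes at v = D*ratio/d with ratio = Csat/Csolid; integrating gives
    the parabolic law d(t) = sqrt(d0**2 + 2*D*ratio*t).
    """
    d, t = d0, 0.0
    while t < t_end:
        d += dt * D * ratio / d  # forward-Euler step of d' = D*ratio/d
        t += dt
    return d

# Depth grows with the square root of time (compare the linear
# corroded-length-vs-sqrt(t) trend of Fig. 2); D in m^2/s and the
# saturation ratio are illustrative values.
d_100 = pit_depth_quasi_steady(8.5e-10, 0.036, 100.0)
d_400 = pit_depth_quasi_steady(8.5e-10, 0.036, 400.0)
```

Quadrupling the time doubles the depth, the signature of diffusion-controlled growth; an activation-controlled pit would instead deepen linearly in time.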
This increase in positive charge is neutralized by the migration of chloride ions toward the interface, as shown in Fig. 1s (supplementary material). To investigate the effect of metal potential, several simulations were performed under different metal potentials. Figure 2s (supplementary material) shows that the corrosion rates obtained for these metal potentials are of the same order of magnitude as the experimental results.49 The experimental results plotted in Fig. 2s (supplementary material) give the maximum corrosion rates achievable at the corresponding metal potentials. A calibration study was also performed to bring the corrosion rate of the 1D PF model close to the experimental one by varying the exchange current density i0. It was found that if i0 is set to twice the value reported in Table 1s (supplementary material) (i0 = 1 × 10−6 A/cm2), the corrosion rate agrees well with the experimental values.49 For consistency, all of the presented modeling results use the value of i0 reported in Table 1s (supplementary material). The overpotentials for the corresponding corrosion rates are shown in Fig. 3s (supplementary material).

PF simulations for 2D model

The 2D simulations are performed with a metal potential of 600 mV SCE (844 mV SHE). The boundary conditions and initial values are the same as described in Fig. 7s (supplementary material). To compare the 2D PF model results with the experimental ones, a 300 μm by 240 μm rectangular geometry is considered, in which the metal and electrolyte parts are equally divided, as shown in Fig. 4s (supplementary material). A 60 μm wide and 20 μm deep semi-elliptical pit is assumed; the remaining surface, as shown in Fig. 4s, is considered to be covered by a passive oxide film. Figure 4s (supplementary material) shows the concentration distribution inside the electrolyte at various time intervals. In Fig.
3, the 2D PF model results are compared with the 2D foil experiment results reported in the literature.3 The evolution of pit depth over time shows a trend similar to that of the experimental results,3 but with deeper pits than the experimental data. As mentioned earlier, the regrowth of the passive film may be an important factor; we will include passive film formation in a future study. The contours of the electrostatic potential distribution for these simulations are shown in Fig. 5s (supplementary material).

Fig. 3 Comparison of the 2D PF model with the 2D foil experiment at 600 mV (SCE) at 20 °C3

Case studies

Case study 1: Interaction of closely located pits

In reality, multiple pits can nucleate due to changes in chemistry on the metal surface, whereas most numerical models consider only the nucleation or growth of a single pit. A few efforts have been made, both experimental and numerical, to understand the interaction of multiple pits.50,51 Because pit initiation is not considered in our PF model, we apply the model to two narrow initial openings of 5 μm each, at distances of (a) 12 μm and (b) 5 μm, at an applied metal potential of 200 mV SHE. The boundary conditions and initial values are the same as those given in Fig. 7s (supplementary material). Figure 4 shows the changes in pit morphology without and with interaction in (a) and (b), respectively. Without interaction, the pits corrode at rates similar to those at which they grow individually. Once the pits interact, the chemical composition of the ionic species changes in their vicinity in the electrolyte. The interaction between two pits can have either a positive or a negative effect on pit growth.52 In this study, the interaction of the two closely located pits had a positive effect, as seen in Fig. 4(b) at t = 6 s.
The corroded material in both cases was estimated, which suggested that the corrosion rate increased in case (b). The two pits finally coalesce to form a wider pit, similar to the wide pits formed in real metallic structures, where multiple pits nucleate on the corroding surface and interact with each other.

Fig. 4 Multiple-pit morphology changes over time and changes in corrosion rate after the interaction of two pits

Case study 2: Pitting corrosion in a stressed material

Like other alloys, stainless steel can contain stressed zones (tensile and compressive). The overpotential is believed to be non-uniform across such zones, which results in different corrosion rates at different locations and in different directions in the material. Experimental findings53 show that most pits grow near the scratch lines on the surface that result from mechanical polishing; these scratch lines can cause strain hardening, as revealed by electrochemical analysis.53 The experimental findings also illustrate that the overpotential is not uniform in the presence of residual stresses. Gutman explained the same phenomenon with a theoretical model in which the compressive stress zone has a lower overpotential than the unstressed zone and the tensile stress zone.54 The overpotential is directly related to the corrosion current density; the relationship between the overpotentials of the compressive stress zone (φcomp,s), the unstressed zone (φs,o), and the tensile stress zone (φtens,s) is φcomp,s < φs,o < φtens,s. Note that the corrosion rate in plastically deformed zones is greater than that in elastically deformed zones, owing to the high dislocation density in plastically deformed zones. In this study, we applied the overpotential dependence on applied/residual stress proposed by Gutman.54 According to Gutman's model,54 a residual stress of 600 MPa corresponds to a change in overpotential of about 20 mV in our system.
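Gutman's mechanochemical relation can be written, to first order, as Δφ ≈ σVm/(z1F). With illustrative values for the molar volume of steel and the mean dissolved-ion charge (our assumptions, not values from the paper), a 600 MPa stress indeed shifts the overpotential by roughly 20 mV:

```python
F_CONST = 96485.33  # Faraday constant, C/mol

def stress_overpotential_shift(sigma, V_m=7.1e-6, z1=2.19):
    """First-order mechanochemical shift: delta_phi = sigma * V_m / (z1 * F).

    V_m (molar volume of steel, m^3/mol) and z1 (mean dissolved-ion
    charge) are illustrative assumptions, not values from the paper.
    """
    return sigma * V_m / (z1 * F_CONST)

# ~0.02 V (20 mV) for a 600 MPa residual stress, consistent with the text
dphi = stress_overpotential_shift(600e6)
```
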
Here, we model a material containing both tensile and compressive stress zones, with overpotentials φcomp,s = φs,o − 20 mV and φtens,s = φs,o + 20 mV, where φs,o is calculated from (32). A small opening of 6 μm is considered at the material's surface. The boundary conditions and initial values are the same as those given in Fig. 7s (supplementary material). Figure 5 shows that areas under tensile stress corrode faster than areas in the compressive stress zone. The pit morphology is closer to that of pits formed during natural corrosion because, in most natural scenarios, corrosion begins where the passive film is damaged by strain hardening of the surface. In most such cases, multiple pits coalesce and grow faster in width than in depth, as already illustrated in the previous case of two closely located interacting pits.

Fig. 5 Pit growth at φm = −400 mV (SHE) for a material with tensile and compressive residual stresses

Case study 3: Crystallographic plane-dependent pitting corrosion

Several studies have suggested that crystallographic orientation greatly affects the propagation rate and morphology of corroding pits.55,56,57 This dependence is usually attributed to factors such as the close packing of crystal planes, the variation of reaction rate with plane orientation, and the density of crystalline defects at the microscale. Here, we demonstrate that this PF model is a good tool for studying this phenomenon in detail. Crystal orientation affects the corrosion rate because planes with lower atomic density usually corrode faster than planes with higher atomic density.57 It has been reported that the corrosion rate tends to increase in the order {111} < {110} ≤ {100}.
The corrosion rate in the {111} crystallographic plane is one third of that in {100}.57 Because no exact value is available for the ratio between {110} and {100}, these two planes are assumed to corrode at the same rate. We implemented our PF model to study the effect of crystallographic plane orientation on pit growth. The domain geometry considered is 30 μm × 27 μm, as shown in Fig. 6. The PF simulations are performed at a lower metal potential of −400 mV SHE because the crystallographic orientation dependence is believed to be limited to lower overpotentials, where the corrosion process is activation controlled.33,58 A small opening of 6 μm is considered at the surface of the material. The initial values and boundary conditions are the same as described in Fig. 7s (supplementary material). Crystallographic planes {111}, {110}, and {100} are represented by blue, brown, and magenta, respectively, in Fig. 6, which shows that the pit shape is no longer uniform because {111} corrodes at one third the rate of the other two planes. This pit morphology illustrates the strength of our PF model for complex microstructures, with which most sharp-interface models fail to cope.

Fig. 6 Pit morphology evolution over time under crystallographic-plane-dependent corrosion

Case study 4: Pitting corrosion in ceramic particle–reinforced steel

Ceramic particles such as TiB2 and/or TiC are often embedded in steel to improve its stiffness, strength,59 and wear resistance.60 Although the addition of these ceramic particles improves some mechanical properties, it has very little effect on corrosion resistance.61 In fact, these reinforcements may promote stress corrosion cracking (SCC) because they can change the stress distribution near the pits: a higher stress concentration can result at the tip of a growing pit when the pit reaches a ceramic particle.
Metal corrodes faster in the high tensile stress region in the vicinity of a ceramic particle. As we are not studying SCC here, the effect of stress concentration is not considered. Because the ceramic particles are far less reactive than steel, we assume that they are non-corrodible in salt solution; to ensure this, L is set to zero for the ceramic particles. A small opening of 6 μm is assumed at the surface of the material. The boundary conditions and initial values are the same as those described in Fig. 7s (supplementary material). Figure 7 shows that the pit morphology changes in the presence of the ceramic particles. This example demonstrates the ability of the PF model to handle complex structures.

Fig. 7 Pit growth in ceramic particle–reinforced steel at φm = 200 mV (SHE)

In this study, we have developed a PF model for metal corrosion with ion transport in the electrolyte and used it to study the pitting corrosion of SS304 in salt water. We show that once the kinetic interface parameter is calibrated against the material's exchange current density, the model can predict corrosion behavior over the whole range of reaction- and diffusion-controlled processes. The simulation results agree well with experiments, and the model can handle complex microstructures, such as the interaction of closely located pits, the effects of stress on pitting, and the effects of ceramic particles and crystallographic plane orientation on corrosion.

Method

The finite element (Galerkin) method62 is used for spatial discretization, while the backward differentiation formula (BDF) method63 is used for time integration of the governing partial differential equations (Eqs. (17), (18), (21), and (24)–(28)). Triangular Lagrangian mesh elements were chosen to discretize the space.
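The interface dynamics that this discretization integrates can be illustrated with a much cruder scheme. A minimal explicit finite-difference sketch of the η-equation (Eq. (17)) with the chemical driving force switched off and dimensionless illustrative parameters; this is not the FEM/BDF scheme used in this work:

```python
import numpy as np

def allen_cahn_1d(n=200, dx=0.1, dt=0.001, steps=5000,
                  L=1.0, alpha_u=1.0, w=1.0):
    """Explicit Euler / central-difference sketch of
        d(eta)/dt = L * (alpha_u * eta_xx - w * g'(eta)),
    with g(eta) = eta^2 (1 - eta)^2, i.e. Eq. (17) without the
    concentration-dependent driving force.  Illustrative parameters only.
    """
    eta = np.zeros(n)
    eta[: n // 2] = 1.0  # left half solid (eta = 1), right half electrolyte
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (eta[2:] - 2 * eta[1:-1] + eta[:-2]) / dx**2
        dg = 2 * eta * (1 - eta) * (1 - 2 * eta)  # g'(eta)
        eta += dt * L * (alpha_u * lap - w * dg)
    return eta
```

Starting from a sharp step, the profile relaxes to the familiar tanh-shaped diffuse interface of width ~√(2αu/w); without a chemical driving force the interface is stationary, which is why the overpotential-dependent terms of Eq. (17) are what actually move the corrosion front.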
It was ensured that at least 12 mesh elements lie inside the diffuse interface, to accurately approximate η (the order parameter) and the piecewise functions based on η.

Data availability

The data and codes supporting the findings of this study are available from the corresponding author on reasonable request.

References

1. Sharland, S. M. A review of the theoretical modelling of crevice and pitting corrosion. Corros. Sci. 27, 289–323 (1987).
2. Ernst, P. & Newman, R. C. Pit growth studies in stainless steel foils. I. Introduction and pit growth kinetics. Corros. Sci. 44, 927–941 (2002).
3. Ernst, P. & Newman, R. C. Pit growth studies in stainless steel foils. II. Effect of temperature, chloride concentration and sulphate addition. Corros. Sci. 44, 943–954 (2002).
4. Williams, D. E., Westcott, C. & Fleischmann, M. Studies of the initiation of pitting corrosion on stainless steels. J. Electroanal. Chem. Interfacial Electrochem. 180, 549–564 (1984).
5. Engelhardt, G. & Macdonald, D. D. Unification of the deterministic and statistical approaches for predicting localized corrosion damage. I. Theoretical foundation. Corros. Sci. 46, 2755–2780 (2004).
6. Laycock, N. J., Noh, J. S., White, S. P. & Krouse, D. P. Computer simulation of pitting potential measurements. Corros. Sci. 47, 3140–3177 (2005).
7. Chen, Z. & Bobaru, F. Peridynamic modeling of pitting corrosion damage. J. Mech. Phys. Solids 78, 352–381 (2015).
8. Duddu, R. Numerical modeling of corrosion pit propagation using the combined extended finite element and level set method. Comput. Mech. 54, 613–627 (2014).
9. Sharland, S. M. & Tasker, P. W. A mathematical model of crevice and pitting corrosion – I. The physical model. Corros. Sci. 28, 603–620 (1988).
10. Sharland, S. M. A mathematical model of crevice and pitting corrosion – II. The mathematical solution. Corros. Sci. 28, 621–630 (1988).
11. Sarkar, S., Warner, J. E. & Aquino, W. A numerical framework for the modeling of corrosive dissolution. Corros. Sci. 65, 502–511 (2012).
12. Scheiner, S. & Hellmich, C. Stable pitting corrosion of stainless steel as diffusion-controlled dissolution process with a sharp moving electrode boundary. Corros. Sci. 49, 319–346 (2007).
13. Abodi, L. C. et al. Modeling localized aluminum alloy corrosion in chloride solutions under non-equilibrium conditions: steps toward understanding pitting initiation. Electrochim. Acta 63, 169–178 (2012).
14. Galvele, J. Transport processes in passivity breakdown—II. Full hydrolysis of the metal ions. Corros. Sci. 21, 551–579 (1981).
15. Galvele, J. R. Transport processes and the mechanism of pitting of metals. J. Electrochem. Soc. 123, 464–474 (1976).
16. Krawiec, H., Vignal, V. & Akid, R. Numerical modelling of the electrochemical behaviour of 316L stainless steel based upon static and dynamic experimental microcapillary-based techniques. Electrochim. Acta 53, 5252–5259 (2008).
17. Turnbull, A. & Thomas, J. A model of crack electrochemistry for steels in the active state based on mass transport by diffusion and ion migration. J. Electrochem. Soc. 129, 1412–1422 (1982).
18. Walton, J. C. Mathematical modeling of mass transport and chemical reaction in crevice and pitting corrosion. Corros. Sci. 30, 915–928 (1990).
19. Oldfield, J. W. & Sutton, W. H. Crevice corrosion of stainless steels: II. Experimental studies. Br. Corros. J. 13, 104–111 (1978).
20. Oldfield, J. W. & Sutton, W. H. Crevice corrosion of stainless steels: I. A mathematical model. Br. Corros. J. 13, 13–22 (1978).
21. Hebert, K. & Alkire, R. Dissolved metal species mechanism for initiation of crevice corrosion of aluminum: I. Experimental investigations in chloride solutions. J. Electrochem. Soc. 130, 1001–1007 (1983).
22. Watson, M. K. & Postlethwaite, J. Numerical simulation of crevice corrosion: the effect of the crevice gap profile. Corros. Sci. 32, 1253–1262 (1991).
23. Sharland, S. M. A mathematical model of the initiation of crevice corrosion in metals. Corros. Sci. 33, 183–201 (1992).
24. Friedly, J. C. & Rubin, J. Solute transport with multiple equilibrium-controlled or kinetically controlled chemical reactions. Water Resour. Res. 28, 1935–1953 (1992).
25. White, S. P., Weir, G. J. & Laycock, N. J. Calculating chemical concentrations during the initiation of crevice corrosion. Corros. Sci. 42, 605–629 (2000).
26. Webb, E. G. & Alkire, R. C. Pit initiation at single sulfide inclusions in stainless steel: III. Mathematical model. J. Electrochem. Soc. 149 (2002).
27. Gavrilov, S., Vankeerberghen, M., Nelissen, G. & Deconinck, J. Finite element calculation of crack propagation in type 304 stainless steel in diluted sulphuric acid solutions. Corros. Sci. 49, 980–999 (2007).
28. Venkatraman, M. S., Cole, I. S. & Emmanuel, B. Corrosion under a porous layer: a porous electrode model and its implications for self-repair. Electrochim. Acta 56, 8192–8203 (2011).
29. Xiao, J. & Chaudhuri, S. Predictive modeling of localized corrosion: an application to aluminum alloys. Electrochim. Acta 56, 5630–5641 (2011).
30. Scheiner, S. & Hellmich, C. Finite volume model for diffusion- and activation-controlled pitting corrosion of stainless steel. Comput. Methods Appl. Mech. Eng. 198, 2898–2910 (2009).
31. Onishi, Y., Takiyasu, J., Amaya, K., Yakuwa, H. & Hayabusa, K. Numerical method for time-dependent localized corrosion analysis with moving boundaries by combining the finite volume method and voxel method. Corros. Sci. 63, 210–224 (2012).
32. Li, Y., Hu, S., Sun, X. & Stan, M. A review: applications of the phase field method in predicting microstructure and property evolution of irradiated nuclear materials. npj Comput. Mater. 3, 16 (2017).
33. Mai, W., Soghrati, S. & Buchheit, R. G. A phase field model for simulating the pitting corrosion. Corros. Sci. 110, 157–166 (2016).
34. Mai, W. & Soghrati, S. A phase field model for simulating the stress corrosion cracking initiated from pits. Corros. Sci. 125, 87–98 (2017).
35. Kim, S. G., Kim, W. T. & Suzuki, T. Phase-field model for binary alloys. Phys. Rev. E 60, 7186–7197 (1999).
36. Chen, L. Q. Phase-field models for microstructure evolution. Annu. Rev. Mater. Res. 32, 113–140 (2002).
37. Moelans, N., Blanpain, B. & Wollants, P. An introduction to phase-field modeling of microstructure evolution. Calphad 32, 268–294 (2008).
38. Guo, X. H., Shi, S.-Q. & Ma, X. Q. Elastoplastic phase field model for microstructure evolution. Appl. Phys. Lett. 87, 221910 (2005).
39. Guo, X. H., Shi, S. Q., Zhang, Q. M. & Ma, X. Q. An elastoplastic phase-field model for the evolution of hydride precipitation in zirconium. Part I: smooth specimen. J. Nucl. Mater. 378, 110–119 (2008).
40. Guo, X. H., Shi, S. Q., Zhang, Q. M. & Ma, X. Q. An elastoplastic phase-field model for the evolution of hydride precipitation in zirconium. Part II: specimen with flaws. J. Nucl. Mater. 378, 120–125 (2008).
41. Wheeler, A. A., Boettinger, W. J. & McFadden, G. B. Phase-field model for isothermal phase transitions in binary alloys. Phys. Rev. A 45, 7424–7439 (1992).
42. Hu, S. Y., Murray, J., Weiland, H., Liu, Z. K. & Chen, L. Q. Thermodynamic description and growth kinetics of stoichiometric precipitates in the phase-field approach. Calphad 31, 303–312 (2007).
43. Ginzburg, V. L. On the theory of superconductivity. Il Nuovo Cim. (1955–1965) 2, 1234–1250 (1955).
44. Cahn, J. W. & Hilliard, J. E. Free energy of a nonuniform system. I. Interfacial free energy. J. Chem. Phys. 28, 258–267 (1958).
45. Pongsaksawad, W., Powell, A. C. & Dussault, D. Phase-field modeling of transport-limited electrolysis in solid and liquid states. J. Electrochem. Soc. 154, F122–F133 (2007).
46. Guyer, J. E., Boettinger, W. J., Warren, J. A. & McFadden, G. B. Phase field modeling of electrochemistry. II. Kinetics. Phys. Rev. E 69, 021604 (2004).
47. Leblanc, P., Cabaleiro, J., Paillat, T. & Touchard, G. Impact of the laminar flow on the electrical double layer development. J. Electrost. 88, 76–80 (2017).
48. Turnbull, A. & Ferriss, D. Mathematical modelling of the electrochemistry in corrosion fatigue cracks in structural steel cathodically protected in sea water. Corros. Sci. 26, 601–628 (1986).
49. Revie, R. W. & Uhlig, H. H. Uhlig's Corrosion Handbook, 3rd edn (Wiley, Hoboken, New Jersey, 2011).
50. Budiansky, N. D., Organ, L., Hudson, J. L. & Scully, J. R. Detection of interactions among localized pitting sites on stainless steel using spatial statistics. J. Electrochem. Soc. 152, B152–B160 (2005).
51. Laycock, N., White, S. & Krouse, D. Numerical simulation of pitting corrosion: interactions between pits in potentiostatic conditions. ECS Trans. 1, 37–45 (2006).
52. Laycock, N. J., Krouse, D. P., Hendy, S. C. & Williams, D. E. Computer simulation of pitting corrosion of stainless steels. Electrochem. Soc. Interface 23, 65–71 (2014).
53. Martin, F. A., Bataillon, C. & Cousty, J. In situ AFM detection of pit onset location on a 304L stainless steel. Corros. Sci. 50, 84–92 (2008).
54. Gutman, E. M. Mechanochemistry of Solid Surfaces (World Scientific Publishing Company, Singapore, 1994).
55. Shahryari, A., Szpunar, J. A. & Omanovic, S. The influence of crystallographic orientation distribution on 316LVM stainless steel pitting behavior. Corros. Sci. 51, 677–682 (2009).
56. Kumar, B. R., Singh, R., Mahato, B., De, P. K., Bandyopadhyay, N. R. & Bhattacharya, D. K. Effect of texture on corrosion behavior of AISI 304L stainless steel. Mater. Charact. 54, 141–147 (2005).
57. Lindell, D. & Pettersson, R. Crystallographic effects in corrosion of austenitic stainless steel 316L. Mater. Corros. 66, 727–732 (2015).
58. Frankel, G. S. Pitting corrosion of metals: a review of the critical factors. J. Electrochem. Soc. 145, 2186–2198 (1998).
59. Akhtar, F. Ceramic reinforced high modulus steel composites: processing, microstructure and properties. Can. Metall. Q. 53, 253–263 (2014).
60. Akhtar, F. Microstructure evolution and wear properties of in situ synthesized TiB2 and TiC reinforced steel matrix composites. J. Alloy. Compd. 459, 491–497 (2008).
61. Pagounis, E. & Lindroos, V. K. Processing and properties of particulate reinforced steel matrix composites. Mater. Sci. Eng. A 246, 221–234 (1998).
62. Fairweather, G.
Finite Element Galerkin Methods for Differential Equations, Lecture Notes in Pure and Applied Mathematics, vol. 34, Marcel Dekker, New York, (1978). 63. Ascher, U. M. & Petzold, L. R. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Vol. 61 (Society For Industrial Applied Mathematics, U.S., Philadelphia, 1998). Download references Acknowledgements This work was supported by Research Grants Council of Hong Kong (PolyU 152140/14E). Author information AffiliationsDepartment of Mechanical Engineering, The Hong Kong Polytechnic University, 11 Yuk Choi Road, Hung Hom, Kowloon, Hong KongTalha Qasim Ansari, Zhihua Xiao & San-Qiang ShiPacific Northwest National Laboratory, Richland, WA, 99352, USAShenyang Hu & Yulan LiDepartment of Chemical and Materials Engineering, University of Alberta, Edmonton, AB, T6G 2R3, CanadaJing-Li Luo AuthorsSearch for Talha Qasim Ansari in:Nature Research journals • PubMed • Google Scholar Search for Zhihua Xiao in:Nature Research journals • PubMed • Google Scholar Search for Shenyang Hu in:Nature Research journals • PubMed • Google Scholar Search for Yulan Li in:Nature Research journals • PubMed • Google Scholar Search for Jing-Li Luo in:Nature Research journals • PubMed • Google Scholar Search for San-Qiang Shi in:Nature Research journals • PubMed • Google Scholar Contributions S.Q.S. conceived the idea, designed and supervised the project. T.Q.A. developed the model, performed simulations and wrote the manuscript. Z.X. and S.Q.S. also contributed in developing the model. S.Q.S., S.Y.H., Y.L. and J.L. provided critical comments and contributed to revisions of the manuscript. Competing interests The authors declare no competing interests. Corresponding author Correspondence to San-Qiang Shi. 
Publication history: Received 20 November 2017; Revised 13 June 2018; Accepted 18 June 2018; Published 24 July 2018. DOI: https://doi.org/10.1038/s41524-018-0089-4
Talha Qasim Ansari, Zhihua Xiao, Shenyang Hu, Yulan Li, Jing-Li Luo, San-Qiang Shi. Phase-field model of pitting corrosion kinetics in metallic materials, npj Computational Materials, 2018, DOI: 10.1038/s41524-018-0089-4
Ivan Mamaev
Honorary Editor: Valery Kozlov
Editorial Board: Vladimir Aslanov, Jan Awrejcewicz, Boris Bardin, Ivan Bizyaev, Anthony Bloch, Alexey Bolsinov, Anastasios Bountis, Felix Chernousko, Ulrike Feudel, Igor Furtat, Vyacheslav Grines, Alexander Hramov, Alexander Ivanov, Alexey Kazakov, Viktor Kazantsev, Olga Kholostova, Alexander Kilin, Oleg Kirillov, Konstantin Koshel', Nikolai Kudryashov, Pavel Kuptsov, Jürgen Kurths, Nikolay Kuznetsov, Víctor Lanchares, Naomi Leonard, Alexandr Maloletov, Anatoly Markeev, Vladimir Nekorkin, Grigory Osipov, Arkady Pikovsky, Sergey Prants, Maria Przybylska, Marko Robnik, Michael Rosenblum, Sergei Ryzhkov, Andrey Shafarevich, Anton Shiriaev, Charalampos Skokos, Mikhail Sokolovskiy, Mikhail Svinin, Iskander Taimanov, Valentin Tenenev, Dmitrii Treschev, Andrey Tsiganov, Tatyana Vadivasova, Michael Zaks, Alexandra Zobova
Passed away: Vadim Anishchenko, Alexey Borisov

Anatoly Pavlovich Markeev. On the Occasion of his 80th Birthday
Citation: Anatoly Pavlovich Markeev. On the Occasion of his 80th Birthday, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 467-472

Nonlinear physics and mechanics

Markeev A. P. On a Change of Variables in Lagrange's Equations
This paper studies a material system with a finite number of degrees of freedom whose motion is described by differential Lagrange's equations of the second kind. A twice continuously differentiable change of generalized coordinates and time is considered. It is well known that the equations of motion are covariant under such transformations. The conventional proof of this covariance property is usually based on the integral variational principle due to Hamilton and Ostrogradskii. This paper gives a proof of covariance that differs from the generally accepted one. In addition, some methodical examples of interest in theory and applications are considered.
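The covariance property discussed in this abstract can be checked symbolically for a one-degree-of-freedom example. The sketch below is ours, not the paper's: the harmonic oscillator and the map $x = q^3$ are arbitrary illustrative choices. It forms the Euler – Lagrange equation before and after the change of coordinates and verifies that the transformed equation is the original one multiplied by $dx/dq$:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)
q = sp.Function('q')(t)

def euler_lagrange(L, gc):
    """Euler-Lagrange operator d/dt(dL/dq') - dL/dq for one coordinate gc(t)."""
    return sp.diff(sp.diff(L, sp.diff(gc, t)), t) - sp.diff(L, gc)

# Harmonic oscillator in the original coordinate x
L_x = m*sp.diff(x, t)**2/2 - k*x**2/2
eq_x = euler_lagrange(L_x, x)            # m*x'' + k*x

# Smooth invertible change of coordinates x = q**3 (valid for q > 0)
L_q = L_x.subs(x, q**3).doit()
eq_q = euler_lagrange(L_q, q)

# Covariance: the new equation equals the old one times dx/dq = 3*q**2
residual = sp.simplify(eq_q - 3*q**2*eq_x.subs(x, q**3).doit())
print(residual)   # 0
```

The same computation goes through for any twice continuously differentiable invertible change of coordinates; only the factor $dx/dq$ changes.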
In some of them (the equilibrium of a polytropic gas sphere between whose particles the forces of gravitational attraction act, and the problem of the planar motion of a charged particle in the dipole force field) Lagrange's equations are not only covariant, but also possess the invariance property.
Keywords: analytical mechanics, Lagrange's equations, transformation methods in mechanics
Citation: Markeev A. P., On a Change of Variables in Lagrange's Equations, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 473-480

Kholostova O. V. On Nonlinear Oscillations of a Time-Periodic Hamiltonian System at a 2:1:1 Resonance
We consider the motions of a near-autonomous Hamiltonian system, $2\pi$-periodic in time, with two degrees of freedom, in a neighborhood of a trivial equilibrium. A multiple parametric resonance is assumed to occur for a certain set of system parameters in the autonomous case, for which the frequencies of small linear oscillations are equal to two and one, and the resonant point of the parameter space belongs to the region of sufficient stability conditions. Under certain restrictions on the structure of the Hamiltonian of perturbed motion, nonlinear oscillations of the system in the vicinity of the equilibrium are studied for parameter values from a small neighborhood of the resonant point. Analytical boundaries of the parametric resonance regions, which arise in the presence of secondary resonances in the transformed linear system (the cases of zero frequency and equal frequencies), are obtained. The general case, for which the parameter values do not belong to the parametric resonance regions and their small neighborhoods, and both cases of secondary resonances are considered. The question of the existence of resonant periodic motions of the system is solved, and their linear stability is studied. Two- and three-frequency conditionally periodic motions are described.
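Boundaries of parametric resonance regions like those described above are often located numerically via Floquet theory: integrate the linearized system over one period and test the monodromy matrix. A minimal sketch for the scalar Hill (Mathieu) equation, not the two-degree-of-freedom system of the paper; the parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_trace(delta, eps, T=2*np.pi):
    """Trace of the monodromy matrix of x'' + (delta + eps*cos t) x = 0."""
    def rhs(t, y):
        return [y[1], -(delta + eps*np.cos(t))*y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):      # two fundamental solutions
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.trace(np.column_stack(cols))

# |trace| > 2 -> parametric resonance (exponential growth); |trace| < 2 -> stable
print(abs(monodromy_trace(0.25, 0.2)) > 2)   # True: inside the first resonance tongue
print(abs(monodromy_trace(0.60, 0.2)) > 2)   # False: between the tongues
```

Sweeping `delta` and `eps` over a grid and contouring `|trace| = 2` recovers the familiar resonance tongues of the stability chart.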
As an application, nonlinear resonant oscillations of a dynamically symmetric satellite (rigid body) relative to the center of mass in the vicinity of its cylindrical precession in a weakly elliptical orbit are investigated.
Keywords: multiple parametric resonance, normalization, nonlinear oscillations, stability, periodic motions, satellite, cylindrical precession
Citation: Kholostova O. V., On Nonlinear Oscillations of a Time-Periodic Hamiltonian System at a 2:1:1 Resonance, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 481-512

Cabral H. E., Carvalho A. C. Parametric Resonance in the Oscillations of a Charged Pendulum Inside a Uniformly Charged Circular Ring
We study the mechanical system consisting of the following variant of the planar pendulum. The suspension point oscillates harmonically in the vertical direction, with small amplitude $\varepsilon$, about the center of a circumference which is located in the plane of oscillations of the pendulum. The circumference carries a uniform distribution of electric charge with total charge $Q$, and the bob of the pendulum, with mass $m$, carries an electric charge $q$. We study the motion of the pendulum as a function of three parameters: $\varepsilon$, the ratio of charges $\mu = \frac qQ$, and a parameter $\alpha$ related to the frequency of oscillations of the suspension point and the length of the pendulum. Since the speeds of oscillation of the mass $m$ are small, magnetic effects are disregarded and the motion is subject only to the gravity force and the electrostatic force. The electrostatic potential is determined in terms of the Jacobi elliptic functions. We study the parametric resonance of the linearized equations about the stable equilibrium, finding the boundary surfaces of the stability domains using the Deprit – Hori method.
Keywords: planar charged pendulum, Hamiltonian systems, parametric resonance, Deprit – Hori method, Jacobi elliptic integrals
Citation: Cabral H. E., Carvalho A.
C., Parametric Resonance in the Oscillations of a Charged Pendulum Inside a Uniformly Charged Circular Ring, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 513-526

Podvigina O. M. Rotation of a Planet in a Three-body System: a Non-resonant Case
We investigate the temporal evolution of the rotation axis of a planet in a system comprised of the planet (which we call an exo-Earth), a star (an exo-Sun) and a satellite (an exo-Moon). The planet is assumed to be rigid and almost spherical, the difference between the largest and the smallest principal moments of inertia being a small parameter of the problem. The orbit of the planet around the star is a Keplerian ellipse. The orbit of the satellite is a Keplerian ellipse with a constant inclination to the ecliptic, involved in two types of slow precessional motion, nodal and apsidal. Applying time averaging over the fast variables associated with the frequencies of the motion of the exo-Earth and the exo-Moon, we obtain Hamilton's equations for the evolution of the angular momentum axis of the exo-Earth. Using a canonical change of variables, we show that the equations are integrable. Assuming that the exo-Earth is axially symmetric and its symmetry and rotation axes coincide, we identify possible types of motions of the vector of angular momentum on the celestial sphere. Also, we calculate the range of the nutation angle as a function of the initial conditions. (By the range of the nutation angle we mean the difference between its maximal and minimal values.)
Keywords: nutation angle, exoplanet, averaging, Hamiltonian dynamics
Citation: Podvigina O. M., Rotation of a Planet in a Three-body System: a Non-resonant Case, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 527-541

Bardin B. S., Avdyushkin A. N.
On Stability of the Collinear Libration Point $L_1^{}$ in the Planar Restricted Circular Photogravitational Three-Body Problem
The stability of the collinear libration point $L_1^{}$ in the photogravitational three-body problem is investigated. This problem is concerned with the motion of a body of infinitely small mass which experiences gravitational forces and repulsive forces of radiation pressure coming from two massive bodies. It is assumed that the massive bodies move in circular orbits and that the body of small mass is located in the plane of their motion. Using methods of normal forms and KAM theory, a rigorous analysis of the Lyapunov stability of the collinear libration point lying on the segment connecting the massive bodies is performed. Conclusions on the stability are drawn both for the nonresonant case and for the case of resonances through order four.
Keywords: collinear libration point, photogravitational three-body problem, normal forms, KAM theory, Lyapunov stability, resonances
Citation: Bardin B. S., Avdyushkin A. N., On Stability of the Collinear Libration Point $L_1^{}$ in the Planar Restricted Circular Photogravitational Three-Body Problem, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 543-562

Sukhov E. A., Volkov E. V. Numerical Orbital Stability Analysis of Nonresonant Periodic Motions in the Planar Restricted Four-Body Problem
We address the planar restricted four-body problem with a small body of negligible mass moving in the Newtonian gravitational field of three primary bodies with nonnegligible masses. We assume that two of the primaries have equal masses and that all primary bodies move in circular orbits forming a Lagrangian equilateral triangular configuration. This configuration admits relative equilibria for the small body analogous to the libration points in the three-body problem.
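For orientation, the location of the collinear point $L_1$ in the classical circular restricted three-body problem (the purely gravitational limit of the photogravitational problem above, without radiation pressure) is the root of a simple algebraic equation. A sketch of ours, with a mass ratio roughly equal to the Earth – Moon value, chosen only for illustration:

```python
import numpy as np
from scipy.optimize import brentq

def l1_position(mu):
    """Rotating-frame x-coordinate of L1, the collinear libration point
    between the primaries located at x = -mu and x = 1 - mu."""
    def f(x):
        # balance of gravity and centrifugal force on the x-axis between the primaries
        return x - (1 - mu)/(x + mu)**2 + mu/(1 - mu - x)**2
    # f is monotone increasing between the primaries, so the root is unique
    return brentq(f, -mu + 1e-6, 1 - mu - 1e-6)

print(l1_position(0.012150585))   # ~0.8369 for the Earth-Moon mass ratio
```

The same root-finding template gives $L_2$ and $L_3$ after moving the bracket outside the segment joining the primaries and flipping the corresponding sign.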
We consider the equilibrium points located on the perpendicular bisector of the Lagrangian triangle, in which case the bodies constitute the so-called central configurations. Using the method of normal forms, we analytically obtain families of periodic motions emanating from the stable relative equilibria in a nonresonant case and continue them numerically to the borders of their existence domains. Using a numerical method, we investigate the orbital stability of the aforementioned periodic motions and represent the conclusions as stability diagrams in the problem's parameter space.
Keywords: Hamiltonian mechanics, four-body problem, periodic motions, orbital stability
Citation: Sukhov E. A., Volkov E. V., Numerical Orbital Stability Analysis of Nonresonant Periodic Motions in the Planar Restricted Four-Body Problem, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 563-576

Krasil'nikov P. S., Ismagilov A. R. On the Dumb-Bell Equilibria in the Generalized Sitnikov Problem
This paper discusses and analyzes the dumb-bell equilibria in a generalized Sitnikov problem. This is done by assuming that the dumb-bell is oriented along the normal to the plane of motion of the two primaries. Assuming the orbits of the primaries to be circles, we apply bifurcation theory to investigate the set of equilibria for both symmetrical and asymmetrical dumb-bells. We also investigate the linear stability of the trivial equilibrium of a symmetrical dumb-bell in the elliptic Sitnikov problem. For a dumb-bell of length $l\geqslant 0.983819$, instability of the trivial equilibria is proved for eccentricity $e \in (0,\,1)$.
Keywords: Sitnikov problem, dumb-bell, equilibrium, linear stability
Citation: Krasil'nikov P. S., Ismagilov A. R., On the Dumb-Bell Equilibria in the Generalized Sitnikov Problem, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 577-588

Bardin B. S., Chekina E. A., Chekin A. M.
On the Orbital Stability of Pendulum Oscillations of a Dynamically Symmetric Satellite
The orbital stability of planar pendulum-like oscillations of a satellite about its center of mass is investigated. The satellite is supposed to be a dynamically symmetric rigid body whose center of mass moves in a circular orbit. Using the recently developed approach [1], local variables are introduced and equations of perturbed motion are obtained in Hamiltonian form. On the basis of the method of normal forms and KAM theory, a nonlinear analysis is performed and rigorous conclusions on orbital stability are obtained for almost all parameter values. In particular, the so-called case of degeneracy, when it is necessary to take into account terms of order six in the expansion of the Hamiltonian function, is studied.
Keywords: rigid body, satellite, oscillations, orbital stability, Hamiltonian system, local coordinates, normal form
Citation: Bardin B. S., Chekina E. A., Chekin A. M., On the Orbital Stability of Pendulum Oscillations of a Dynamically Symmetric Satellite, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 589-607

Maciejewski A. J., Przybylska M. Gyrostatic Suslov Problem
In this paper, we investigate the gyrostat under the influence of an external potential force with the Suslov nonholonomic constraint: the projection of the total angular velocity onto a vector fixed in the body vanishes. We investigate the cases of the free gyrostat and the heavy gyrostat in a constant gravity field, and we discuss certain properties for general potential forces. In all these cases, the system has two first integrals: the energy and the geometric first integral. For its integrability, either two additional first integrals or one additional first integral and an invariant $n$-form are necessary. For the free gyrostat we identify three cases integrable in the Jacobi sense. In the case of the heavy gyrostat, three cases with one additional first integral are identified.
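The two first integrals named above for the gyrostat (the energy and the geometric integral) can be checked numerically. A sketch of ours for the free gyrostat, integrating the standard Euler equations $\dot{M} = M \times \omega$ with $M = I\omega + k$; the inertia tensor and rotor momentum values are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.diag([1.0, 2.0, 3.0])           # principal moments of inertia (illustrative)
k = np.array([0.3, 0.0, 0.1])          # constant gyrostatic (rotor) momentum

def rhs(t, M):
    omega = np.linalg.solve(I, M - k)  # recover omega from M = I @ omega + k
    return np.cross(M, omega)

def invariants(M):
    omega = np.linalg.solve(I, M - k)
    return 0.5 * omega @ I @ omega, M @ M   # energy, geometric integral |M|^2

M0 = np.array([1.0, 0.5, -0.4])
sol = solve_ivp(rhs, (0.0, 50.0), M0, rtol=1e-11, atol=1e-12)

E0, G0 = invariants(M0)
E1, G1 = invariants(sol.y[:, -1])
print(abs(E1 - E0), abs(G1 - G0))      # both tiny: conserved along the flow
```

Analytically, $\dot E = \omega \cdot (M \times \omega) = 0$ and $\dot G = 2M \cdot (M \times \omega) = 0$, so any residual drift in the printout is purely integration error.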
Among them, one case is integrable, and the non-integrability of the remaining cases is proved by means of differential Galois methods. Moreover, for a distinguished case of the heavy gyrostat a codimension-one invariant subspace is identified. It is shown that the system restricted to this subspace is superintegrable and solvable in elliptic functions. For the gyrostat in a general potential force field, conditions for the existence of an invariant $n$-form defined by a special form of the Jacobi last multiplier are derived. The class of potentials satisfying them is identified, and the system restricted to the corresponding codimension-one invariant subspace then appears to be integrable in the Jacobi sense.
Keywords: gyrostat, Suslov constraint, integrability, nonholonomic systems
Citation: Maciejewski A. J., Przybylska M., Gyrostatic Suslov Problem, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 609-627

Gadzhiev M. M., Kuleshov A. S. Nonintegrability of the Problem of the Motion of an Ellipsoidal Body with a Fixed Point in a Flow of Particles
The problem of the motion, in a free molecular flow of particles, of a rigid body with a fixed point bounded by the surface of an ellipsoid of revolution is considered. This problem is similar in many respects to the classical problem of the motion of a heavy rigid body about a fixed point. In particular, it possesses integrable cases corresponding to the classical Euler – Poinsot, Lagrange and Hess cases of integrability of the equations of motion of a heavy rigid body with a fixed point. A natural question arises about the existence of analogues of other integrable cases that exist in the problem of the motion of a heavy rigid body with a fixed point (the Kovalevskaya case, the Goryachev – Chaplygin case, etc.) for the system considered. Using the standard Euler angles as generalized coordinates, the Hamiltonian function of the system is derived.
Equations of motion of the body in the flow of particles are presented in Hamiltonian form. Using the theorem on the Liouville-type nonintegrability of Hamiltonian systems near elliptic equilibrium positions, proved by V. V. Kozlov, necessary conditions for the existence in the problem under consideration of an additional analytic first integral independent of the energy integral are presented. We prove that these necessary conditions are not fulfilled for a rigid body with a mass distribution corresponding to the classical Kovalevskaya integrable case of the motion of a heavy rigid body with a fixed point. Thus, we conclude that the system considered does not possess an integrable case similar to the Kovalevskaya case.
Keywords: rigid body with a fixed point, free molecular flow of particles, Hamiltonian system, nonintegrability
Citation: Gadzhiev M. M., Kuleshov A. S., Nonintegrability of the Problem of the Motion of an Ellipsoidal Body with a Fixed Point in a Flow of Particles, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 629-637

Shatina A. V., Djioeva M. I., Osipova L. S. Mathematical Model of Satellite Rotation near Spin-Orbit Resonance 3:2
This paper considers the rotational motion of a satellite equipped with flexible viscoelastic rods in an elliptic orbit. The satellite is modeled as a symmetric rigid body with a pair of flexible viscoelastic rods rigidly attached to it along the axis of symmetry. A planar case is studied, i. e., it is assumed that the satellite's center of mass moves in a Keplerian elliptic orbit lying in a stationary plane and that the satellite's axis of rotation is orthogonal to this plane. When the rods are not deformed, the satellite's principal central moments of inertia are equal to each other. The linear bending theory for thin inextensible rods is used to describe the deformations.
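Since the satellite's center of mass moves on a Keplerian ellipse, models of this kind repeatedly convert between mean and true anomaly. A minimal self-contained sketch (ours, not the authors' code) that solves Kepler's equation by Newton iteration and converts to the true anomaly:

```python
import numpy as np

def true_anomaly(M, e, tol=1e-13):
    """True anomaly for mean anomaly M (rad) and eccentricity 0 <= e < 1.
    Solves Kepler's equation E - e*sin(E) = M by Newton iteration,
    then converts eccentric anomaly E to true anomaly."""
    E = M if e < 0.8 else np.pi            # standard starting guess
    for _ in range(100):
        dE = (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0*np.arctan2(np.sqrt(1 + e)*np.sin(E/2),
                          np.sqrt(1 - e)*np.cos(E/2))

print(true_anomaly(np.pi, 0.3))   # pi: apocenter is reached at M = pi
print(true_anomaly(1.0, 0.0))     # 1.0: anomalies coincide on a circular orbit
```

The series expansions of the true anomaly in powers of the mean anomaly mentioned in the abstract are the analytic counterpart of this numerical inversion.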
The functionals of elastic and dissipative forces are introduced according to this model. The asymptotic method of motion separation is used to derive the equations of rotational motion reflecting the influence of the fluctuations caused by the deformations of the rods. The method of motion separation is based on the assumption that the period of the autonomous oscillations of a point belonging to the rod is much smaller than the characteristic time of these oscillations' decay, which, in its turn, is much smaller than the characteristic time of the system's motion as a whole. That is why only the oscillations induced by the external and inertial forces are taken into account when deriving the equations of the rotational motion. The perturbed equations are described by a third-order system of ordinary differential equations in the dimensionless variable equal to the ratio of the satellite's absolute value of angular velocity to the mean motion of the satellite's center of mass, the angle between the satellite's axis of symmetry and a fixed axis, and the mean anomaly. The right-hand sides of the equations depend on the mean anomaly implicitly through the true anomaly. A new slow angular variable is introduced in order to perform the averaging for the perturbed system near the 3:2 resonance, and the averaging is performed over the mean anomaly of the satellite's center of mass orbit. In doing so, the well-known expansions of the true anomaly and its sine and cosine in powers of the mean anomaly are used. The steady-state solutions of the resulting system of equations are found and their stability is studied. It is shown that, if certain conditions are fulfilled, asymptotically stable solutions exist. This explains the capture into the 3:2 spin-orbit resonance.
Keywords: Keplerian elliptical orbit, satellite, spin-orbit resonance, dissipation
Citation: Shatina A. V., Djioeva M. I., Osipova L.
S., Mathematical Model of Satellite Rotation near Spin-Orbit Resonance 3:2, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 651-660

NONLINEAR ENGINEERING AND ROBOTICS

Moiseev G. N., Zobova A. A. Dynamics-Based Piecewise Constant Control of an Omnivehicle
We consider the dynamics of an omnidirectional vehicle moving on a perfectly rough horizontal plane. The vehicle has three omniwheels controlled by three direct-current motors. We study the constant-voltage dynamics for the symmetric model of the vehicle and obtain a general analytical solution for arbitrary initial conditions, which is shown to be Lyapunov stable. A piecewise combination of the trajectories produces a solution to boundary-value problems for arbitrary initial and terminal mass-center coordinates, course angles and their derivatives with one switch point. The proposed control, combining translation and rotation of the vehicle, is shown to be more energy-efficient than a control splitting these two types of motion. For the nonsymmetrical vehicle configuration, we propose a numerical procedure for solving boundary-value problems that uses parametric continuation of the solution obtained for the symmetric vehicle. It shows that the proposed type of control can be used for an arbitrary vehicle configuration.
Keywords: omnidirectional vehicle, omniwheel, universal wheel, dynamics-based control, piecewise control, point-to-point path planning
Citation: Moiseev G. N., Zobova A. A., Dynamics-Based Piecewise Constant Control of an Omnivehicle, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 661-680

Artemova E. M., Kilin A. A. A Nonholonomic Model and Complete Controllability of a Three-Link Wheeled Snake Robot
This paper is concerned with the controlled motion of a three-link wheeled snake robot propelled by changing the angles between the central and lateral links. The limits of applicability of the nonholonomic model for the problem of interest are revealed.
It is shown that the system under consideration is completely controllable according to the Rashevsky – Chow theorem. Possible types of motion of the system under periodic snake-like controls are presented using Fourier expansions. The relation of the form of the trajectory in the space of controls to the type of motion involved is found. It is shown that, if the trajectory in the space of controls is centrally symmetric, the robot moves with a nonzero constant average velocity in some direction.
Keywords: nonholonomic mechanics, wheeled vehicle, snake robot, controllability, periodic control
Citation: Artemova E. M., Kilin A. A., A Nonholonomic Model and Complete Controllability of a Three-Link Wheeled Snake Robot, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 681-707

Karavaev Y. L. Spherical Robots: An Up-to-Date Overview of Designs and Features
This paper describes the existing designs of spherical robots and reviews studies devoted to investigating their dynamics and to developing algorithms for controlling them. An analysis is also made of the key features and the historical aspects of the development of their designs, in particular, taking into account various areas of application.
Keywords: spherical robot, rolling, design, modeling
Citation: Karavaev Y. L., Spherical Robots: An Up-to-Date Overview of Designs and Features, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 4, pp. 709-750

Introductory Note
Citation: Introductory Note, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 753-753

Marchuk E. A., Kalinin Y. V., Sidorova A. V., Maloletov A. V. On the Problem of Position and Orientation Errors of a Large-Sized Cable-Driven Parallel Robot
This paper deals with the application of force sensors to estimate position errors of the center of mass of the mobile platform of a cable-driven parallel robot. Deformations of the cables and towers of the robot, as well as an external disturbance, are included in the numerical model.
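The Rashevsky – Chow controllability test mentioned above amounts to checking that the control vector fields together with their iterated Lie brackets span the tangent space at every configuration. A toy symbolic sketch for the single-wheel (unicycle) kinematics, not the three-link robot of the paper:

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
q = sp.Matrix([x, y, th])

g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # drive forward
g2 = sp.Matrix([0, 0, 1])                     # turn in place

def lie_bracket(f, g):
    """[f, g] = (Dg) f - (Df) g for vector fields on configuration space."""
    return g.jacobian(q)*f - f.jacobian(q)*g

# Distribution spanned by the controls and their first bracket
D = sp.Matrix.hstack(g1, g2, lie_bracket(g1, g2))
print(sp.simplify(D.det()))   # nonzero everywhere -> full rank -> controllable
```

The bracket direction corresponds to sideways displacement, which is exactly the motion a wheeled vehicle cannot execute directly but can approximate by maneuvering, and the same rank computation (with more fields and deeper brackets) underlies the controllability proof for the snake robot.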
The method for estimating the positioning error via force sensors is sensitive to the magnitude of spatial oscillations of the mobile platform. To reduce torsional vibrations of the mobile platform around the vertical axis, a dynamic damper has been included in the system.
Keywords: cable, robot, additive, printing, position, orientation, errors, force sensors
Citation: Marchuk E. A., Kalinin Y. V., Sidorova A. V., Maloletov A. V., On the Problem of Position and Orientation Errors of a Large-Sized Cable-Driven Parallel Robot, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 755-770

Shaker W. K., Klimchik A. S. Stiffness Modeling of a Double Pantograph Transmission System: Comparison of VJM and MSA Approaches
This paper deals with the stiffness modeling of the double pantograph transmission system. The main focus is on a comparative analysis of two stiffness modeling approaches: virtual joint modeling (VJM) and matrix structural analysis (MSA). The aim of this work is to investigate the limitations of the considered approaches. To address this issue, the corresponding MSA-based and VJM-based stiffness models were derived. To evaluate the deflections of the end effector, external loads were applied in different directions at multiple points in the robot workspace. The computational cost and the difference in end-effector deflections were studied and compared. MSA was found to be 2 times faster than VJM for this structure. The results obtained show that the MSA approach is more appropriate for the double pantograph mechanism.
Keywords: stiffness modeling, parallel robot, double pantograph, virtual joint modeling, matrix structural analysis
Citation: Shaker W. K., Klimchik A. S., Stiffness Modeling of a Double Pantograph Transmission System: Comparison of VJM and MSA Approaches, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 771-785

Ali Deeb A., Shahhoud F.
Image-Based Object Detection Approaches to be Used in Embedded Systems for Robots Navigation This paper investigates the problem of object detection for real-time agents' navigation using embedded systems. In real-world problems, a compromise between accuracy and speed must be found. In this paper, we describe the architecture of different object detection algorithms, such as R-CNN and YOLO, and compare them on different variants of embedded systems using different datasets. As a result, we provide a trade-off study based on accuracy and speed for different object detection algorithms to choose the appropriate one depending on the specific application task. Keywords: robot navigation, object detection, embedded systems, YOLO algorithms, R-CNN algorithms, object semantics Citation: Ali Deeb A., Shahhoud F., Image-Based Object Detection Approaches to be Used in Embedded Systems for Robots Navigation, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 787-802 Saypulaev G. R., Adamov B. I., Kobrin A. I. Comparative Analysis of the Dynamics of a Spherical Robot with a Balanced Internal Platform Taking into Account Different Models of Contact Friction The subject of this paper is a spherical robot with an internal platform with four classic-type omniwheels. The motion of the spherical robot on a horizontal surface is considered and its kinematics is described. The aim of the research is to study the dynamics of the spherical robot with different levels of detailing of the contact friction model. Nonholonomic models of the dynamics of the robot with different levels of detailing of the contact friction model are constructed. The programmed control of the motion of the spherical robot using elementary maneuvers is proposed. A simulation of motion is carried out and the efficiency of the proposed control is confirmed.
It is shown that, at low speeds of motion of the spherical robot, it is acceptable to use a model obtained under the assumption of no slipping between the sphere and the floor. The influence of the contact friction model at high-speed motions of the spherical robot on its dynamics under programmed control is demonstrated. This influence leads to the need to develop more accurate models of the motion of a spherical robot and its contact interaction with the supporting surface in order to increase the accuracy of motion control based on these models. Keywords: spherical robot, dynamics model, kinematics model, omniwheel, omniplatform, multicomponent friction Citation: Saypulaev G. R., Adamov B. I., Kobrin A. I., Comparative Analysis of the Dynamics of a Spherical Robot with a Balanced Internal Platform Taking into Account Different Models of Contact Friction, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 803-815 Demian A. A., Klimchik A. S. Gravity Compensation for Mechanisms with Prismatic Joints This paper is devoted to the design of gravity compensators for prismatic joints. The proposed compensator depends on the suspension of linear springs together with mechanical transmission mechanisms to achieve the constant application of force along the sliding span of the joint. The use of self-locking worm gears ensures the isolation of spring forces. A constant-force mechanism is proposed to generate counterbalance force along the motion span of the prismatic joint. The constant-force mechanism is coupled with a pin-slot mechanism to adjust the spring tension to counterbalance the effect of rotation of the revolute joint. Two springs were used to counterbalance the gravity torque of the revolute joint. One of the springs has a moving pin-point that is passively adjusted in proportion to the moving mass of the prismatic joint. To derive the model of the compensator, a 2-DoF system which consists of a revolute and a prismatic joint is investigated.
In contrast to previous work, the proposed compensator considers the combined motion of rotation and translation. The obtained results were tested in simulation based on the dynamic model of the derived system. The simulation shows the effectiveness of the proposed compensator as it significantly reduces the effort required by the actuators to support the manipulator against gravity. The derived compensator model provides the necessary constraints on the design parameters. Keywords: prismatic joints, static balancing, gravity compensation, manipulator design Citation: Demian A. A., Klimchik A. S., Gravity Compensation for Mechanisms with Prismatic Joints, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 817-829 Shamraev A. D., Kolyubin S. A. Bioinspired and Energy-Efficient Convex Model Predictive Control for a Quadruped Robot Animal running has been studied for a long time, but robots still cannot reproduce the same movements with an energy efficiency close to that of animals. There are many controllers for controlling the movement of four-legged robots. One of the most popular is the convex MPC. This paper presents a bioinspired approach to increasing the energy efficiency of the state-of-the-art convex MPC controller. This approach is to set a reference trajectory for the convex MPC in the form of an SLIP model, which describes the movements of animals when running. Adding an SLIP trajectory increases the energy efficiency of the Pronk gait by 15 percent over a range of speeds from 0.75 m/s to 1.75 m/s. Keywords: quadruped, model predictive control, spring-loaded inverted pendulum, bioinspiration, energy efficiency Citation: Shamraev A. D., Kolyubin S. A., Bioinspired and Energy-Efficient Convex Model Predictive Control for a Quadruped Robot, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 831-841 Almaghout K., Klimchik A. S.
Vision-Based Robotic Comanipulation for Deforming Cables Although deformable linear objects (DLOs), such as cables, are widely used in most fields of life and activity, the robotic manipulation of these objects is considerably more complex than rigid-body manipulation and is still an open challenge. In this paper, we introduce a new framework using two robotic arms cooperatively manipulating a DLO from an initial shape to a desired one. Based on visual servoing and computer vision techniques, a perception approach is proposed to detect and sample the DLO as a set of virtual feature points. Then a manipulation planning approach is introduced to map between the motion of the manipulators' end effectors and the DLO points by a Jacobian matrix. To avoid excessive stretching of the DLO, the planning approach generates a path for each DLO point forming profiles between the initial and desired shapes. It is guaranteed that all these intershape profiles are reachable and maintain the cable length constraint. The framework and the aforementioned approaches are validated in real-life experiments. Keywords: robotic comanipulation, deformable linear objects, shape control, visual servoing Citation: Almaghout K., Klimchik A. S., Vision-Based Robotic Comanipulation for Deforming Cables, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 843-858 Ali W., Kolyubin S. A. EMG-Based Grasping Force Estimation for Robot Skill Transfer Learning In this study, we discuss a new machine learning architecture, the multilayer perceptron-random forest regressors pipeline (MLP-RF model), which stacks two ML regressors of different kinds to estimate the generated gripping forces from recorded surface electromyographic activity signals (EMG) during a gripping task. We evaluate our proposed approach on a publicly available dataset, putEMG-Force, which represents a sEMG-Force data profile.
The sEMG signals were then filtered and preprocessed to get the features-target data frame that will be used to train the proposed ML model. The proposed ML model is a pipeline stacking two ML models of different natures: a random forest regressor (RF regressor) and a multilayer perceptron artificial neural network (MLP regressor). The models were stacked together, and the outputs were penalized by a Ridge regressor to get the best estimation of both models. The model was evaluated by different metrics: mean squared error and the coefficient of determination ($r^2$ score). To improve the model prediction performance, we tuned the most significant hyperparameters of each of the MLP-RF model components using a random search algorithm followed by a grid search algorithm. Finally, we evaluated our MLP-RF model performance on the data by training a recurrent neural network consisting of 2 LSTM layers, 2 dropouts, and one dense layer on the same data (as it is the common approach for problems with sequential datasets) and comparing the prediction results with our proposed model. The results show that the MLP-RF outperforms the RNN model. Keywords: sEMG signals, multilayer perceptron regressor (MLP), random forest regressor (RF), recurrent neural network (RNN), robot grasping forces, skill transfer learning Citation: Ali W., Kolyubin S. A., EMG-Based Grasping Force Estimation for Robot Skill Transfer Learning, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 859-872 Kazakov Y., Kornaev A., Shutin D., Kornaeva E., Savin L. Reducing Rotor Vibrations in Active Conical Fluid Film Bearings with Controllable Gap Although hydrodynamic lubrication is a self-controlled process, the rotor dynamics and energy efficiency of fluid film bearings are often subject to improvement. We have designed control systems with adaptive PI and DQN-agent-based controllers to minimize the rotor oscillation amplitude in a conical fluid film bearing.
The design of the bearing allows its axial displacement and thus adjustment of its average clearance. The tests were performed using a simulation model in MATLAB software. The simulation model includes modules of a rigid shaft, a conical bearing, and a control system. The bearing module is based on numerical solution of the generalized Reynolds equation and its nonlinear approximation with fully connected neural networks. The results obtained demonstrate that both the adaptive PI controller and the DQN-based controller reduce the rotor vibrations even when imbalance in the system grows. However, the DQN-based approach provides some additional advantages in the controller design process as well as in the system performance. Keywords: active fluid film bearing, conical bearing, simulation modeling, DQN-agent, adaptive PI controller Citation: Kazakov Y., Kornaev A., Shutin D., Kornaeva E., Savin L., Reducing Rotor Vibrations in Active Conical Fluid Film Bearings with Controllable Gap, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 873-883 Savin S. I., Khusainov R. R. Sparse Node-Distance Coordinate Representation for Tensegrity Structures In this work, a nonminimal coordinate representation of tensegrity structures with explicit constraints is introduced. A method is proposed for representation of results on tensegrity structures in sparse models of generalized forces, providing advantages for code generation for symbolic or autodifferentiation derivation tasks, and giving diagonal linear models with constant inertia matrices, allowing one not only to simplify computations and matrix inversions, but also to lower the number of elements that need to be stored when the linear model is evaluated along a trajectory. Keywords: tensegrity, dynamic model, nonminimal representation, linearized model Citation: Savin S. I., Khusainov R. R., Sparse Node-Distance Coordinate Representation for Tensegrity Structures, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp.
885-898 Mikishanina E. A. Motion Control of a Spherical Robot with a Pendulum Actuator for Pursuing a Target The problem of controlling the rolling of a spherical robot with a pendulum actuator pursuing a moving target by the pursuit method, but with a minimal control, is considered. The mathematical model assumes the presence of a number of holonomic and nonholonomic constraints, as well as the presence of two servo-constraints containing a control function. The control function is defined in accordance with the features of the simulated scenario. Servo-constraints set the motion program. To implement the motion program, the pendulum actuator generates a control torque which is obtained from the joint solution of the equations of motion and derivatives of servo-constraints. The first and second components of the control torque vector are determined in a unique way, and the third component is determined from the condition of minimizing the square of the control torque. The system of equations of motion after reduction for a given control function is reduced to a nonautonomous system of six equations. A rigorous proof of the boundedness of the distance function between a spherical robot and a target moving at a bounded velocity is given. The cases where objects move in a straight line and along a curved trajectory are considered. Based on numerical integration, solutions are obtained, graphs of the desired mechanical parameters are plotted, and the trajectory of the target and the trajectory of the spherical robot are constructed. Keywords: spherical robot, pendulum actuator, control, equations of motion, nonholonomic constraint, servo-constraint, pursuit, target Citation: Mikishanina E. A., Motion Control of a Spherical Robot with a Pendulum Actuator for Pursuing a Target, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 899-913 Kurakin L. G., Ostrovskaya I. V. 
On the Stability of the System of Thomson's Vortex $n$-Gon and a Moving Circular Cylinder The stability problem of a moving circular cylinder of radius $R$ and a system of $n$ identical point vortices uniformly distributed on a circle of radius $R_0^{}$, with $n\geqslant 2$, is considered. The center of the vortex polygon coincides with the center of the cylinder. The circulation around the cylinder is zero. There are three parameters in the problem: the number of point vortices $n$, the added mass of the cylinder $a$ and the parameter $q=\frac{R^2}{R_0^2}$. The linearization matrix and the quadratic part of the Hamiltonian of the problem are studied. As a result, the parameter space of the problem is divided into the instability area and the area of linear stability where nonlinear analysis is required. In the case $n=2,\,3$ two domains of linear stability are found. In the case $n=4,\,5,\,6$ there is just one domain. In the case $n\geqslant 7$ the studied solution is unstable for any value of the problem parameters. The obtained results in the limiting case as $a\to\infty$ agree with the known results for the model of point vortices outside the circular domain. Keywords: point vortices, Hamiltonian equation, Thomson's polygon, stability Citation: Kurakin L. G., Ostrovskaya I. V., On the Stability of the System of Thomson's Vortex $n$-Gon and a Moving Circular Cylinder, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 915-926 Mathematical problems of nonlinearity Astafyeva P. Y., Kiselev O. M. Formal Asymptotics of Parametric Subresonance The article is devoted to a comprehensive study of linear equations of the second order with an almost periodic coefficient. Using an asymptotic approach, the system of equations for parametric subresonant growth of the amplitude of oscillations is obtained. Moreover, the time of a turning point from the growth of the amplitude to the bounded oscillations in the slow variable is found. 
Also, a comparison between the asymptotic approximation for the turning time and the numerical one is shown. Keywords: classical analysis and ODEs, subresonant, almost periodic function, small denominator Citation: Astafyeva P. Y., Kiselev O. M., Formal Asymptotics of Parametric Subresonance, Rus. J. Nonlin. Dyn., 2022, Vol. 18, no. 5, pp. 927-937 Indeitsev D. A., Zavorotneva E. V., Lukin A. V., Popov I. A., Igumnova V. S. Nonlinear Dynamics of a Microscale Rate Integrating Gyroscope with a Disk Resonator under Parametric Excitation This article presents an analytical study of the dynamics of a micromechanical integrating gyroscope with a disk resonator. A discrete dynamic model of the resonator is obtained, taking into account the axial anisotropy of its mass and stiffness properties, as well as the action of the electrical control system of oscillations. An analysis of the spectral problem of disk vibrations in the plane is carried out. The nonlinear dynamics of the resonator in the regimes of free and parametrically excited vibrations are investigated. In the mode of parametric oscillations, qualitative dependencies of the gyroscopic drift on the operating voltage, angular velocity and parameters of defects are obtained. Keywords: MEMS, MRIG, nonlinear dynamics, BAW, parametric excitation Citation: Indeitsev D. A., Zavorotneva E. V., Lukin A. V., Popov I. A., Igumnova V. S., Nonlinear Dynamics of a Microscale Rate Integrating Gyroscope with a Disk Resonator under Parametric Excitation, Rus. J. Nonlin. Dyn., 2023 https://doi.org/10.20537/nd230102 Popkov Y. S. Oscillations in Dynamic Systems with an Entropy Operator This paper considers dynamic systems with an entropy operator described by a perturbed constrained optimization problem. Oscillatory processes are studied for periodic systems with the following property: the entire system has the same period as the process generated by its linear part.
Existence and uniqueness conditions are established for such oscillatory processes, and a method is developed to determine their form and parameters. Also, the general case of noncoincident periods is analyzed, and a method is proposed to determine the form, parameters, and the period of such oscillations. Almost periodic processes are investigated, and existence and uniqueness conditions are proved for them as well. Keywords: entropy, dynamic systems, optimization, oscillatory process Citation: Popkov Y. S., Oscillations in Dynamic Systems with an Entropy Operator, Rus. J. Nonlin. Dyn., 2023 https://doi.org/10.20537/nd230101 Semernik I. V., Bender O. V., Tarasenko A. A., Samonova C. V. Analysis and Simulation of BER Performance of Chaotic Underwater Wireless Optical Communication Systems In this article, a method for increasing the noise immunity of an underwater wireless optical communication system by applying chaotic oscillations is considered. To solve this problem, it is proposed to use modulation methods based on dynamical chaos at the physical level of the communication channel. Communication channel modeling is implemented by calculating the impulse response using a numerical solution of the radiation transfer equation by the Monte Carlo method. The following modulation methods based on the correlation processing of the received signal are considered: chaotic mode switching, chaotic on-off keying (COOK). On-off keying (OOK) modulation was chosen as a test modulation method to assess the degree of noise immunity of the modulation methods under study. An analysis of the noise immunity of an underwater optical communication channel with a change in its length and parameters of the aquatic environment, which affect the absorption and scattering of optical radiation in the communication channel, is carried out. 
It is shown that modulation methods based on the phenomenon of dynamic chaos and correlation processing can improve the noise immunity of underwater wireless communication systems. This provides the possibility of signal recovery at negative values of the signal-to-noise ratio. It is shown that the considered modulation methods (COOK and switching of chaotic modes) in combination with the correlation processing of the signal at the physical level of the communication channel provide an advantage of about 15 dB compared to OOK modulation. Keywords: underwater communication, optical communication, wireless communication, dynamical chaos, noise immunity, wideband signals, communication channel modeling, modulation, Monte Carlo method Citation: Semernik I. V., Bender O. V., Tarasenko A. A., Samonova C. V., Analysis and Simulation of BER Performance of Chaotic Underwater Wireless Optical Communication Systems, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd221215 Kilin A. A., Ivanova T. B. The Integrable Problem of the Rolling Motion of a Dynamically Symmetric Spherical Top with One Nonholonomic Constraint This paper addresses the problem of a sphere with axisymmetric mass distribution rolling on a horizontal plane. It is assumed that there is no slipping of the sphere as it rolls in the direction of the projection of the symmetry axis onto the supporting plane. It is also assumed that, in the direction perpendicular to the above-mentioned one, the sphere can slip relative to the plane. Examples of realization of the above-mentioned nonholonomic constraint are given. Equations of motion are obtained and their first integrals are found. It is shown that the system under consideration admits a redundant set of first integrals, which makes it possible to perform reduction to a system with one degree of freedom. Keywords: nonholonomic constraint, first integral, integrability, reduction Citation: Kilin A. A., Ivanova T. 
B., The Integrable Problem of the Rolling Motion of a Dynamically Symmetric Spherical Top with One Nonholonomic Constraint, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd221205 Golokolenov A. V., Savin D. V. Attractors of a Weakly Dissipative System Allowing Transition to the Stochastic Web in the Conservative Limit This article deals with the dynamics of a pulse-driven self-oscillating system — the Van der Pol oscillator — with the pulse amplitude depending on the oscillator coordinate. In the conservative limit the "stochastic web" can be obtained in the phase space when the function defining this dependence is a harmonic one. The paper focuses on the case where the frequency of external pulses is four times greater than the frequency of the autonomous system. The results of a numerical study of the structure of both parameter and phase planes are presented for systems with different forms of external pulses: the harmonic amplitude function and its power series expansions. Complication of the pulse amplitude function results in the complication of the parameter plane structure, while typical scenarios of transition to chaos visible in the parameter plane remain the same in different cases. In all cases the structure of bifurcation lines near the border of chaos is typical of the existence of the Hamiltonian type critical point. Changes in the number and the relative position of coexisting attractors are investigated while the system approaches the conservative limit. A typical scenario of destruction of attractors with a decrease in nonlinear dissipation is revealed, and it is shown to be in good agreement with the theory of 1:4 resonance. The number of attractors of period 4 seems to grow infinitely with the decrease of dissipation when the pulse amplitude function is harmonic, while in other cases all attractors undergo destruction at certain values of dissipation parameters after the birth of high-period periodic attractors. 
Keywords: nonlinear dynamics, saddle-node bifurcation, stochastic web, Lyapunov exponent, multistability Citation: Golokolenov A. V., Savin D. V., Attractors of a Weakly Dissipative System Allowing Transition to the Stochastic Web in the Conservative Limit, Rus. J. Nonlin. Dyn., 2023 https://doi.org/10.20537/nd221206 Garashchuk I. R., Sinelshchikov D. I. Excitation of a Group of Two Hindmarsh – Rose Neurons with a Neuron-Generated Signal We study a model of three Hindmarsh – Rose neurons with directional electrical connections. We consider two fully-connected neurons that form a slave group which receives the signal from the master neuron via a directional coupling. We control the excitability of the neurons by setting the constant external currents. We study the possibility of excitation of the slave system in the stable resting state by the signal coming from the master neuron, to make it fire spikes/bursts tonically. We vary the coupling strength between the master and the slave systems as another control parameter. We calculate the borderlines of excitation by different types of signal in the control parameter space. We establish which of the resulting dynamical regimes are chaotic. We also demonstrate the possibility of excitation by a single burst or a spike in areas of control parameters, where the slave system is bistable. We calculate the borderlines of excitation by a single period of the excitatory signal. Keywords: chaos, neuronal excitability, Hindmarsh – Rose model Citation: Garashchuk I. R., Sinelshchikov D. I., Excitation of a Group of Two Hindmarsh – Rose Neurons with a Neuron-Generated Signal, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd220901 Udalov A. A., Uleysky M. Y., Budyansky M. V. 
Analysis of Stationary Points and Bifurcations of a Dynamically Consistent Model of a Two-dimensional Meandering Jet A dynamically consistent model of a meandering jet stream with two Rossby waves obtained using the law of conservation of potential vorticity is investigated. Stationary points are found in the phase space of advection equations and the type of their stability is determined analytically. All topologically different flow regimes and their bifurcations are found for the stationary model (taking into account only the first Rossby wave). The results can be used in the study of Lagrangian transport, mixing, and chaotic advection in problems of cross-frontal transport in geophysical flows with meandering jets. Keywords: stationary points, separatrices reconnection, jet flow Citation: Udalov A. A., Uleysky M. Y., Budyansky M. V., Analysis of Stationary Points and Bifurcations of a Dynamically Consistent Model of a Two-dimensional Meandering Jet, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd220802 Tsirlin A. M. Methods of Simplifying Optimal Control Problems, Heat Exchange and Parametric Control of Oscillators Methods of simplifying optimal control problems by decreasing the dimension of the space of states are considered. For this purpose, transition to new phase coordinates or conversion of the phase coordinates to the class of controls is used. The problems of heat exchange and parametric control of oscillators are given as examples: braking/swinging of a pendulum by changing the length of suspension and variation of the energy of molecules' oscillations in the crystal lattice by changing the state of the medium (exposure to laser radiation). The last problem corresponds to changes in the temperature of the crystal. Keywords: change of state variables, problems linear in control, heat exchange with minimal dissipation, parametric control, oscillation of a pendulum, ensemble of oscillators Citation: Tsirlin A. 
M., Methods of Simplifying Optimal Control Problems, Heat Exchange and Parametric Control of Oscillators, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd220801 Baranov D. A., Grines V. Z., Pochinka O. V., Chilina E. E. On a Classification of Periodic Maps on the 2-Torus In this paper, following J. Nielsen, we introduce a complete characteristic of orientation-preserving periodic maps on the two-dimensional torus. All admissible complete characteristics were found and realized. In particular, each of the classes of orientation-preserving periodic homeomorphisms on the 2-torus that are nonhomotopic to the identity is realized by an algebraic automorphism. Moreover, it is shown that the number of such classes is finite. According to V. Z. Grines and A. Bezdenezhnykh, any gradient-like orientation-preserving diffeomorphism of an orientable surface is represented as a superposition of the time-1 map of a gradient-like flow and some periodic homeomorphism. Thus, the results of this work are directly related to the complete topological classification of gradient-like diffeomorphisms on surfaces. Keywords: gradient-like flows and diffeomorphisms on surfaces, periodic homeomorphisms, torus Citation: Baranov D. A., Grines V. Z., Pochinka O. V., Chilina E. E., On a Classification of Periodic Maps on the 2-Torus, Rus. J. Nonlin. Dyn., 2022 https://doi.org/10.20537/nd220702
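The last abstract states that every class of orientation-preserving periodic 2-torus maps nonhomotopic to the identity is realized by an algebraic automorphism. As a purely illustrative sketch (not code from the paper), such automorphisms can be represented by invertible integer $2\times 2$ matrices acting on the lattice $\Bbb Z^2$, and the crystallographic restriction says the only finite orders that occur are 1, 2, 3, 4, and 6:

```python
# Orders of algebraic automorphisms of the 2-torus, represented by
# invertible 2x2 integer matrices acting on the lattice Z^2.

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def order(A, max_n=12):
    """Smallest n >= 1 with A^n = I, or None if no such n <= max_n exists."""
    I = [[1, 0], [0, 1]]
    P = [row[:] for row in A]
    for n in range(1, max_n + 1):
        if P == I:
            return n
        P = matmul(P, A)
    return None

# By the crystallographic restriction, only orders 1, 2, 3, 4, 6 occur.
examples = {
    "quarter turn": [[0, -1], [1, 0]],       # order 4
    "minus identity": [[-1, 0], [0, -1]],    # order 2
    "order three": [[0, -1], [1, -1]],
    "order six": [[1, -1], [1, 0]],
    "shear": [[1, 1], [0, 1]],               # infinite order -> None
}
for name, A in examples.items():
    print(name, order(A))
```

Each finite-order matrix here induces a periodic homeomorphism of the torus $\Bbb R^2/\Bbb Z^2$ that is not homotopic to the identity (except the identity itself), matching the kind of algebraic realization the abstract describes.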
Difference between revisions of "User talk:Musictheory2math" From Encyclopedia of Mathematics Musictheory2math (talk | contribs) Let $\Bbb N$ be that group and at first write the integers as a sequence starting from $0$, letting the identity element $e=1$ correspond to $0$ and the two generators $m$ & $n$ correspond to $1$ & $-1$, so we have $\Bbb N=\langle m\rangle=\langle n\rangle$; for instance: $$0,1,2,-1,-2,3,4,-3,-4,5,6,-5,-6,7,8,-7,-8,9,10,-9,-10,11,12,-11,-12,...$$ $$1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,...$$ Then, regarding the sequence, find an even rotation number, which for this sequence is $4$, and hence equations should be written modulo $4$; then consider $4m-2,4m-1,4m,4m+1$, where the last should be $km+1$ and the initial $km+(2-k)$,
otherwise the equations won't match the definitions of members' inverses. Then make a table of products of those $k$ elements; while writing the equations note that if an equation holds for the given numbers it will hold generally for other numbers too, and of course if the integers corresponding to two members don't have the same sign then the product will be a piecewise-defined function. For example $12\star _u 15=6$ or $(4\times 3)\star _u (4\times 4-1)=6$ because $(-5)+8=3$ & $-5\to 12,\,\, 8\to 15,\,\, 3\to 6,$ which implies $(4n)\star _u (4m-1)=4m-4n+2$ where $4m-1\gt 4n$; of course it is better that members' inverses be defined first, for example since $(-9)+9=0$ & $0\to 1,\,\, -9\to 20,\,\, 9\to 18$ we get $20\star _u 18=1$, which shows $(4m)\star _u (4m-2)=1$, and with a little addition and multiplication all the equations are obtained simply, which for this example are: $\begin{cases} m\star _u 1=m\\ (4m)\star _u (4m-2)=1=(4m+1)\star _u (4m-1)\\ (4m-2)\star _u (4n-2)=4m+4n-5\\ (4m-2)\star _u (4n-1)=4m+4n-2\\ (4m-2)\star _u (4n)=\begin{cases} 4m-4n-1 & 4m-2\gt 4n\\ 4n-4m+1 & 4n\gt 4m-2\\ 3 & m=n+1\end{cases}\\ (4m-2)\star _u (4n+1)=\begin{cases} 4m-4n-2 & 4m-2\gt 4n+1\\ 4n-4m+4 & 4n+1\gt 4m-2\end{cases}\\ (4m-1)\star _u (4n-1)=4m+4n-1\\ (4m-1)\star _u (4n)=\begin{cases} 4m-4n+2 & 4m-1\gt 4n\\ 4n-4m & 4n\gt 4m-1\\ 2 & m=n\end{cases}\\ (4m-1)\star _u (4n+1)=\begin{cases} 4m-4n-1 & 4m-1\gt 4n+1\\ 4n-4m+1 & 4n+1\gt 4m-1\\ 3 & m=n+1\end{cases}\\ (4m)\star _u (4n)=4m+4n-3\\ (4m)\star _u (4n+1)=4m+4n\\ (4m+1)\star _u (4n+1)=4m+4n+1\\ \Bbb N=\langle 2\rangle=\langle 4\rangle\end{cases}$ I want to make some topologies having '''prime numbers properties''' presentable in the collection of '''open sets'''; in principle, when we map a prime $p$ to the real numbers as $w_k(p)$, we carry prime numbers properties over to the real numbers, and regarding the form of the prime number theorem, for this aim we should use an important mathematical technique, the logarithm function, in some
planned topologies: '''question''' $2$: Let $M$ be a topological space and let $A,B$ be subsets of $M$ with $A\subset B$ such that $A$ is dense in $B$; since $A$ is dense in $B,$ is there some way, other than the subspace topology, in which a topology on $B$ may be induced? I am also interested in specialisations, for example if $M$ is Hausdorff or Euclidean. ($M=\Bbb R,\,B=[0,1],\,A=S$ or $M=\Bbb R^2,$ $B=[0,1]\times[0,1],$ $A=S\times S$) :Perhaps this technique is useful: an extension of the prime number theorem: $\forall n\in\Bbb N,$ and for each subinterval $(a,b)$ of $[0.1,1)$ with $a\neq b,$ assume: :$\begin{cases} U_{(a,b)}:=\{n\in\Bbb N\mid a\le r(n)\le b\},\\ \\V_{(a,b)}:=\{p\in\Bbb P\mid a\le r(p)\le b\},\\ \\U_{(a,b),n}:=\{m\in U_{(a,b)}\mid m\le n\},\\ \\V_{(a,b),n}:=\{p\in V_{(a,b)}\mid p\le n\},\\ \\w_{(a,b),n}:={\#V_{(a,b),n}\over\#U_{(a,b),n}}\cdot\log n,\\ \\w_{(a,b)}:=\lim _{n\to\infty} w_{(a,b),n}\\ \\z_{(a,b),n}:={\#V_{(a,b),n}\over\#U_{(a,b),n}}\cdot\log{(\#U_{(a,b),n})}\\ \\z_{(a,b)}:=\lim_{n\to\infty}z_{(a,b),n}\end{cases}$ ::Guess $2$: $\forall (a,b)\subset [0.1,1),\,w_{(a,b)}={10\over9}\cdot(b-a)$. :::[https://math.stackexchange.com/questions/2683513/an-extension-of-prime-number-theorem/2683561#2683561 Answer] given by [https://math.stackexchange.com/users/82961/peter $@$Peter] from stackexchange site: Imagine a very large number $N$ and consider the range $[10^N,10^{N+1}]$.
The natural logarithms of $10^N$ and $10^{N+1}$ only differ by $\ln(10)\approx 2.3$. Hence the reciprocals of the logarithms of all primes in this range virtually coincide. Because of the approximation $$\int_a^b \frac{1}{\ln(x)}dx$$ for the number of primes in the range $[a,b]$, the number of primes is approximately the length of the interval divided by $\frac{1}{\ln(10^N)}$, so is approximately equally distributed. Hence your conjecture is true. :::Benford's law seems to contradict this result, but it only applies to sequences producing primes, such as the Mersenne primes, and not if the primes are chosen randomly in the range above. ::Guess $3$: $\forall (a,b)\subset [0.1,1),\,z_{(a,b)}={10\over9}\cdot(b-a)$. :::Question $2-1$: What does $\lim_{\epsilon\to0}z_{(a-\epsilon,a+\epsilon)}=0,\,a\in(0.1,1)$ mean? <small><s>'''Theorem''' $6$: $(S,\lt _1)$ is a well-ordering set with order relation $\lt _1$ as: $\forall i,n,k\in\Bbb N$ if $p_n$ is the $n$-th prime number, the relation $\lt _1$ is defined by: $w_i(p_n)\lt _1w_i(p_{n+k})\lt _1w_{i+1}(p_n)$ or $$0.2\lt _10.3\lt _10.5\lt _10.7\lt _10.11\lt _10.13\lt _10.17\lt _1...0.02\lt _10.03\lt _10.05\lt _10.07\lt _10.011\lt _1$$ $$0.013\lt _10.017\lt _1...0.002\lt _10.003\lt _10.005\lt _10.007\lt _10.0011\lt _10.0013\lt _10.0017\lt _1...$$ and $(E,\lt _2)$ is another well-ordering set with order relation $\lt _2$ as: $\forall i,n,k\in\Bbb N$ with $10\nmid n,\, 10\nmid n+k,$ $w_i(n)\lt _2w_i(n+k)\lt _2w_{i+1}(n)$ or $$0.1\lt _2 0.2\lt _2 0.3\lt _2 ...0.9\lt _2 0.11\lt _2 0.12\lt _2 ...0.19\lt _2 0.21\lt _2 ...0.01\lt _2 0.02\lt _2 0.03\lt _2 ...0.09$$ $$\lt _2 0.011\lt _2 0.012\lt _2 ...0.019\lt _2 0.021\lt _2 ...0.001\lt _2 0.002\lt _2 0.003\lt _2 ...0.009\lt _2 0.0011\lt _2 ...$$ now $M:=S\times S\times E$ is a well-ordering set with order relation $\lt _3$ as: $\forall (a,b,t),(c,d,u)\in S\times S\times E,$ $(a,b,t)\lt _3(c,d,u)$ iff $\,\,\begin{cases} t\lt _2u & or\\ t=u,\,\, a+b\lt _2c+d & or\\ t=u,\,\, a+b=c+d,\,\, b\lt _1 d\end{cases}$ ♦</s></small> '''Theorem''' $6$: $(S,\lt _1)$ is a well-ordering set with order relation $\lt _1$ as: $\forall a,b\in S,\,a\lt_1b$ iff $a\gt b$, and $(E,\lt _2)$ is another well-ordering set with order relation $\lt _2$ as: $\forall x,y\in E,\,x\lt_2y$ iff $x\gt y$; now $M:=S\times S\times E$ is a well-ordering set with order relation $\lt _3$ as: $\forall (a,b,t),(c,d,u)\in S\times S\times E,$ $(a,b,t)\lt _3(c,d,u)$ iff $\,\,\begin{cases} t\lt _2u & or\\ t=u,\,\, a+b\lt _2c+d & or\\ t=u,\,\, a+b=c+d,\,\, b\lt _1 d\end{cases}$ ♦ '''An algorithm''' which makes new integral domains on $\Bbb Z$: Let $(\Bbb Z,\star,\circ)$ be such an integral domain; then the identity element $i$ will correspond to $1$, and multiplication of integers is obtained from multiplication of the corresponding integers, so that if $t:\Bbb Z\to\Bbb Z$ is the bijection that maps the top row onto the bottom row respectively (for instance, in the example above, $t(2)=-1$ & $t(-18)=18$), then we can write the laws by using $t$, such as $(-2m+1)\circ(-2n)=$ $t(t^{-1}(-2m+1)\times t^{-1}(-2n))=t((2m)\times(-2n+1))=t(-2\times(2mn-m))=$ $2\times(2mn-m)=4mn-2m$; and of course each integer $m$ multiplied by the integer corresponding to $-1$ gives the $n$ such that $m\star n=0$ & $0\circ m=0$. For instance $(\Bbb Z,\star,\circ)$ is an integral domain with: $\begin{cases} \forall t\in\Bbb Z,\quad t\star0=t\\ \forall m,n\in\Bbb N\\ (2m-1)\star(-2m)=0=(-2m+1)\star(2m)\\
(2m-1)\star(2n-1)=2m+2n-2\\ (2m-1)\star(2n)=\begin{cases} 2m-2n-1 & 2m-1\gt2n\\ 2m-2n-2 & 2n\gt 2m-1\end{cases}\\ (2m-1)\star(-2n+1)=2m+2n-1\\ (2m-1)\star(-2n)=\begin{cases} 2n-2m+1 & 2m-1\gt2n\\ 2n-2m & 2n\gt2m-1\end{cases}\\ (2m)\star(2n)=2m+2n\\ (2m)\star(-2n+1)=\begin{cases} 2m-2n+1 & 2n-1\gt2m\\ 2m-2n & 2m\gt2n-1\end{cases}\\ (2m)\star(-2n)=-2m-2n\\ (-2m+1)\star(-2n+1)=-2m-2n+1\\ (-2m+1)\star(-2n)=\begin{cases} 2m-2n+1 & 2m-1\gt2n\\ 2m-2n & 2n\gt2m-1\\ 1 & m=n\end{cases}\\ (-2m)\star(-2n)=2m+2n-2\\ i=t(1)=1,\quad0\circ m=0,\quad m\star(t(-1)\circ m)=m\star(-2\circ m)=0\\ (2m-1)\circ(2n-1)=4mn-2m-2n+1\\ (2m-1)\circ(2n)=4mn-2n\\ (2m-1)\circ(-2n+1)=-4mn+2n+1\\ (2m-1)\circ(-2n)=-4mn+2m+2n-2\\ (2m)\circ(2n)=-4mn+1\\ (2m)\circ(-2n+1)=4mn\\ (2m)\circ(-2n)=-4mn+2m+1\\ (-2m+1)\circ(-2n+1)=-4mn+1\\ (-2m+1)\circ(-2n)=4mn-2m\\ (-2m)\circ(-2n)=4mn-2m-2n+1\end{cases}$ The whole of my previous notes is visible in the revision as of 18:42, 13 April 2018. Alireza Badali 21:52, 13 April 2018 (CEST)

1 $\mathscr B$ $theory$ (algebraic topological analytical number theory)
1.1 Goldbach's conjecture
1.2 Polignac's conjecture
2 Some dissimilar conjectures
2.1 Collatz conjecture
2.2 Erdős–Straus conjecture
2.3 Landau's fourth problem
2.4 Lemoine's conjecture
2.5 Primes with Beatty sequences
3 Conjectures depending on the new definitions of primes
3.1 Gaussian moat problem
3.2 Grimm's conjecture
3.3 Oppermann's conjecture
3.4 Legendre's conjecture
4 Conjectures depending on the ring theory
4.1 Parallel universes
4.2 Gauss circle problem

$\mathscr B$ $theory$ (algebraic topological analytical number theory)

The logarithm function, as an inverse of the function $f:\Bbb N\to\Bbb R,\,f(n)=a^n,\,a\in\Bbb R,$ has prime number properties, because in the usual definition of prime numbers the multiplication operation is the point, while $a^n=a\times a\times ...\times a$ $(n$ times$)$; hence the prime number theorem, its extensions, or other forms of it are applied in $\mathscr B$ theory for solving problems on prime numbers
exclusively, and not all natural numbers. Algebraic structures on the positive numbers & the prime number theorem with its extensions, other forms and corollaries & topology with homotopy groups. Alireza Badali 00:49, 25 June 2018 (CEST)

Goldbach's conjecture

Lemma: For each subinterval $(a,b)$ of $[0.1,1)$ there exists $m\in \Bbb N$ such that for every $k\in \Bbb N$ with $k\ge m$ there exists $t\in (a,b)$ with $t\cdot 10^k\in \Bbb P$. Proof given by @Adayah from stackexchange site: Without loss of generality (by passing to a smaller subinterval) we can assume that $(a, b) = \left( \frac{s}{10^r}, \frac{t}{10^r} \right)$, where $s, t, r$ are positive integers and $s < t$. Let $\alpha = \frac{t}{s}$. The statement is now equivalent to saying that there is $m \in \mathbb{N}$ such that for every $k \geqslant m$ there is a prime $p$ with $10^{k-r} \cdot s < p < 10^{k-r} \cdot t$. We will prove a stronger statement: there is $m \in \mathbb{N}$ such that for every $n \geqslant m$ there is a prime $p$ such that $n < p < \alpha \cdot n$. By taking a little smaller $\alpha$ we can relax the restriction to $n < p \leqslant \alpha \cdot n$. Now comes the prime number theorem: $$\lim_{n \to \infty} \frac{\pi(n)}{\frac{n}{\log n}} = 1$$ where $\pi(n) = \# \{ p \leqslant n : p$ is prime$\}.$ By the above we have $$\frac{\pi(\alpha n)}{\pi(n)} \sim \frac{\frac{\alpha n}{\log(\alpha n)}}{\frac{n}{\log(n)}} = \alpha \cdot \frac{\log n}{\log(\alpha n)} \xrightarrow{n \to \infty} \alpha$$ hence $\displaystyle \lim_{n \to \infty} \frac{\pi(\alpha n)}{\pi(n)} = \alpha$.
So there is $m \in \mathbb{N}$ such that $\pi(\alpha n) > \pi(n)$ whenever $n \geqslant m$, which means there is a prime $p$ such that $n < p \leqslant \alpha \cdot n$, and that is what we wanted♦ Now we can define a function $f:\{(c,d)\mid (c,d)\subseteq [0.01,0.1)\}\to\Bbb N$ where $f((c,d))$ is the least $n\in\Bbb N$ such that $\exists t\in(c,d),\,\exists k\in\Bbb N$ with $p_n=t\cdot 10^{k+1}$ (here $p_n$ is the $n$-th prime) and $\forall m\ge f((c,d))\,\,\exists u\in (c,d)$ with $u\cdot 10^{m+1}\in\Bbb P$; and $g:(0,0.09)\cap (\bigcup _{k\in\Bbb N} r_k(\Bbb N))\to\Bbb N$ is the function given by $\forall\epsilon\in (0,0.09)\cap (\bigcup _{k\in\Bbb N} r_k(\Bbb N)),$ $g(\epsilon)=\max(\{f((c,d))\mid d-c=\epsilon,$ $(c,d)\subseteq [0.01,0.1)\})$. Guess $1$: $g$ is not an injective function. Question $1$: Assuming Guess $1$, let $[a,a]:=\{a\}$ and for each $n\in\Bbb N$ let $h_n$ be the smallest subinterval of $[0.01,0.1)$ of the form $[a,b]$, in terms of the size of $b-a$, such that $\{\epsilon\in (0,0.09)\cap (\bigcup _{k\in\Bbb N} r_k(\Bbb N))\mid g(\epsilon)=n\}\subsetneq h_n$ (and obviously $g(a)=n=g(b)$); now the question is: $\forall n,m\in\Bbb N$ with $m\neq n$, is $h_n\cap h_m=\emptyset$? Guidance given by @reuns from stackexchange site: For $n \in \mathbb{N}$ let $r(n) = 10^{-\lceil \log_{10}(n) \rceil} n$, i.e. $r(19) = 0.19$. We look at the image by $r$ of the primes $\mathbb{P}$. Let $F((c,d)) = \min \{ p \in \mathbb{P}, r(p) \in (c,d)\}$ and $f((c,d)) = \pi(F(c,d))= \min \{ n, r(p_n) \in (c,d)\}$ ($\pi$ is the prime counting function). If you set $g(\epsilon) = \max_a \{ f((a,a+\epsilon))\}$ then try seeing how $g(\epsilon)$ is constant on some intervals defined in terms of the prime gap $g(p) = -p+\min \{ q \in \mathbb{P}, q > p\}$ and things like $ \max \{ g(p), p > 10^i, p+g(p) < 10^{i+1}\}$. Another guidance: The affirmative answer is given by Liouville's theorem on approximation of algebraic numbers.
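reuns's $r$ and $F$ can be sketched directly in Python (the trial-division primality test and the search cutoff are details of the sketch, not part of the guidance):

```python
import math

def is_prime(n):
    """Trial-division primality check; adequate at this scale."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def r(n):
    """reuns's map: put a decimal point before n, so r(19) = 0.19."""
    return n / 10 ** len(str(n))

def F(c, d, search_limit=10**5):
    """F((c,d)) = min { p prime : r(p) in (c, d) }, searched up to a cutoff."""
    for p in range(2, search_limit):
        if is_prime(p) and c < r(p) < d:
            return p
    return None

# The primes map to 0.2, 0.3, 0.5, 0.7, 0.11, ...; r(3) = 0.3 sits on the
# boundary of the open interval, so 31 (image 0.31) is the first hit.
print(F(0.3, 0.4))   # 31
```

From $F$ one then gets $f((c,d))$ as the index of $F((c,d))$ in the sequence of primes, as in the guidance.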
Suppose $r:\Bbb N\to (0,1)$ is the function where $r(n)$ is obtained by putting a point at the beginning of $n$, for instance $r(34880)=0.34880$; similarly, $\forall k\in\Bbb N$ let $w_k:\Bbb N\to (0,1)$ be the function given by $\forall n\in\Bbb N,$ $w_k(n)=10^{1-k}\cdot r(n)$, and let $S=\bigcup _{k\in\Bbb N}w_k(\Bbb P)$. Theorem $1$: $r(\Bbb P)$ is dense in the interval $[0.1,1]$ (proof using the lemma above). Regarding the expression form of Goldbach's conjecture, by using this theorem I wanted to enmesh prime number properties towards Goldbach, hence I planned this method. (The prime number theorem must be used to prove this theorem, and there is no way to prove this density except via the prime number theorem, because there is no difference between a prime $p$ and its image $r(p)$ other than a sign, or a mark, namely a point: for instance $59$ & $0.59$.) A corollary: For each natural number $a=a_1a_2a_3...a_k$, where $a_j$ is the $j$-th digit for $j=1,2,3,...,k$, there is a natural number $b=b_1b_2b_3...b_r$ such that the number $c=a_1a_2a_3...a_kb_1b_2b_3...b_r$ is a prime number. Theorem $2$: $S$ is dense in the interval $[0,1]$ and $S\times S$ is dense in $[0,1]\times [0,1]$.
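Theorem $1$ can be probed numerically: every short subinterval of $[0.1,1)$ tried below is hit by the image of some prime. A sketch (the intervals and the search cutoff are arbitrary choices of the sketch):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def r(n):
    """r from the text: decimal point in front of n, e.g. r(34880) = 0.34880."""
    return n / 10 ** len(str(n))

def prime_image_in(a, b, search_limit=10**5):
    """Some prime p with a < r(p) < b, or None if none is found below the cutoff."""
    for p in range(2, search_limit):
        if is_prime(p) and a < r(p) < b:
            return p
    return None

# Each interval of width 0.001 tried here contains the image of a prime,
# in line with the density claim of Theorem 1.
for a in (0.123, 0.456, 0.789):
    print(a, prime_image_in(a, a + 0.001))
```

This illustrates, but of course does not prove, the density; the proof goes through the lemma and the prime number theorem as stated above.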
An algorithm that makes new cyclic groups on $\Bbb N$: Let $\Bbb N$ be that group; first write the integers as a sequence starting from $0$, let the identity element $e=1$ correspond to $0$, and let two generators $m$ & $n$ correspond to $1$ & $-1$, so that $\Bbb N=\langle m\rangle=\langle n\rangle$; for instance: $$0,1,2,-1,-2,3,4,-3,-4,5,6,-5,-6,7,8,-7,-8,9,10,-9,-10,11,12,-11,-12,...$$ $$1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,...$$ then, looking at the sequence, find an even rotation number (for this sequence it is $4$), so the equations should be written modulo $4$; then consider $4m-2,4m-1,4m,4m+1$, where the last must be $km+1$ and the first $km+(2-k)$, otherwise the equations will not match the definitions of the members' inverses; and make a table of products of those $k$ elements. While writing the equations note that if an equation holds for particular numbers it holds in general, and of course if the integers corresponding to two members do not have the same sign then the product is a piecewise-defined function: for example $12\star _u 15=6$, i.e. $(4\times 3)\star _u (4\times 4-1)=6$, because $(-5)+8=3$ & $-5\to 12,\,\, 8\to 15,\,\, 3\to 6,$ which implies $(4n)\star _u (4m-1)=4m-4n+2$ where $4m-1\gt 4n$. Of course it is better to define the members' inverses first: since $(-9)+9=0$ & $0\to 1,\,\, -9\to 20,\,\, 9\to 18$, we get $20\star _u 18=1$, which shows $(4m)\star _u (4m-2)=1$; and with a little addition and multiplication all the equations are obtained simply (for this example they are the $\star _u$ equations listed earlier). Problem $1$: By using matrices, rewrite the operation of every group on $\Bbb N$.
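The recipe above is transport of structure: pull two members back along the displayed correspondence, add in $\Bbb Z$, and push the sum forward. A minimal Python sketch (the enumeration and the sample products $12\star _u 15=6$ and $20\star _u 18=1$ are the ones worked out above; the truncation length is an arbitrary choice of the sketch):

```python
def enumeration(count):
    """Prefix of the displayed enumeration 0, 1, 2, -1, -2, 3, 4, -3, -4, ..."""
    seq = [0]
    k = 1
    while len(seq) < count:
        seq += [2 * k - 1, 2 * k, -(2 * k - 1), -2 * k]
        k += 1
    return seq[:count]

seq = enumeration(200)                        # position i+1 holds the integer seq[i]
t = {z: i + 1 for i, z in enumerate(seq)}     # t : Z -> N, integer -> its position

def star_u(a, b):
    """a *_u b = t(t^{-1}(a) + t^{-1}(b)): addition in Z pushed forward to N."""
    return t[seq[a - 1] + seq[b - 1]]

print(star_u(12, 15))   # 6, since (-5) + 8 = 3 and 3 sits at position 6
print(star_u(20, 18))   # 1, the identity: (-9) + 9 = 0 and 0 sits at position 1
```

The same loop also confirms the closed forms, e.g. $(4m)\star _u(4m-2)=1$ for small $m$; any other starting enumeration of $\Bbb Z$ yields a group on $\Bbb N$ in the same way.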
Assume $\forall m,n\in\Bbb N$: $\begin{cases} n\star 1=n\\ (2n)\star (2n+1)=1\\ (2n)\star (2m)=2n+2m\\ (2n+1)\star (2m+1)=2n+2m+1\\ (2n)\star (2m+1)=\begin{cases} 2m-2n+1 & 2m+1\gt 2n\\ 2n-2m & 2n\gt 2m+1\end{cases}\end{cases}$ and $p_n\star _1p_m=p_{n\star m}$, where $p_n$ is the $n$-th prime with $e=p_1=2$; obviously $(\Bbb N,\star)$ & $(\Bbb P,\star _1)$ are groups and $\langle 2\rangle =\langle 3\rangle =(\Bbb N,\star)\simeq (\Bbb Z,+)\simeq (\Bbb P,\star _1)=\langle 3\rangle=\langle 5\rangle$. Theorem $3$: $(S,\star _S)$ is a group as: $\forall p,q\in\Bbb P,\,\forall m,n\in\Bbb N,\,\forall w_m(p),w_n(q)\in S,$ $\begin{cases} e=0.2\\ \\(w_m(p))^{-1}=w_{m^{-1}}(p^{-1}) & m\star m^{-1}=1,\, p\star _1 p^{-1}=2\\ \\w_m(p)\star _S w_n(q)=w_{m\star n} (p\star _1 q)\end{cases}$ hence $\langle 0.02,0.3\rangle=(S,\star _S)\simeq\Bbb Z\oplus\Bbb Z$. Of course, using the algorithm above to generate cyclic groups on $\Bbb N$, we can impose another group structure on $\Bbb N$ and consequently on $\Bbb P$, but eventually $S$ with an operation analogous to the operation $\star _S$ above will be an Abelian group. Theorem $4$: $(S\times S,\star _{S\times S})$ is a group as: $\forall m_1,n_1,m_2,n_2\in\Bbb N,\,\forall p_1,p_2,q_1,q_2\in\Bbb P,$ $\forall (w_{m_1}(p_1),w_{m_2}(p_2)),(w_{n_1}(q_1),w_{n_2}(q_2))\in S\times S,$ $\begin{cases} e=(0.2,0.2)\\ \\(w_{m_1}(p_1),w_{m_2}(p_2))^{-1}=(w_{m_1^{-1}}(p_1^{-1}),w_{m_2^{-1}}(p_2^{-1}))\\ \text{such that}\quad m_1\star m_1^{-1}=1=m_2\star m_2^{-1},\, p_1\star _1p_1^{-1}=2=p_2\star _1p_2^{-1}\\ \\(w_{m_1}(p_1),w_{m_2}(p_2))\star _{S\times S} (w_{n_1}(q_1),w_{n_2}(q_2))=(w_{m_1\star n_1} (p_1\star _1 q_1),w_{m_2\star n_2}(p_2\star _1 q_2))\end{cases}$ hence $\langle (0.02,0.2),(0.2,0.02),(0.3,0.2),(0.2,0.3)\rangle=(S\times S,\star _{S\times S})\simeq\Bbb Z\oplus\Bbb Z\oplus\Bbb Z\oplus\Bbb Z$.
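A quick sketch of the groups $(\Bbb N,\star)$ and $(\Bbb P,\star _1)$ just defined, computing $\star$ through the correspondence $1,2,3,4,5,\dots\leftrightarrow 0,1,-1,2,-2,\dots$ (reading the case table as transport of $(\Bbb Z,+)$ is my interpretation, so treat it as an assumption of the sketch):

```python
def star(m, n):
    """(N, *) from the case table above, via positions 1,2,3,4,5,... <-> 0,1,-1,2,-2,..."""
    to_z = lambda k: 0 if k == 1 else (k // 2 if k % 2 == 0 else -(k // 2))
    to_n = lambda z: 1 if z == 0 else (2 * z if z > 0 else -2 * z + 1)
    return to_n(to_z(m) + to_z(n))

def primes(count):
    """First `count` primes by trial division against earlier primes."""
    ps = []
    n = 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

P = primes(100)

def star1(p, q):
    """p_n *_1 p_m = p_{n * m}, with e = p_1 = 2."""
    return P[star(P.index(p) + 1, P.index(q) + 1) - 1]

# (2n)*(2m) = 2n+2m, (2n)*(2n+1) = 1, and 3 *_1 5 = 2 (they are inverses):
print(star(4, 6), star(4, 5), star1(3, 5))
```

Spot checks of this kind agree with each branch of the displayed case table, in line with $(\Bbb N,\star)\simeq(\Bbb Z,+)\simeq(\Bbb P,\star _1)$.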
Of course, using the algorithm above to generate cyclic groups on $\Bbb N$, we can impose another group structure on $\Bbb N$ and consequently on $\Bbb P$, but eventually $S\times S$ with an operation analogous to the operation $\star _{S\times S}$ above will be an Abelian group. I want to make some topologies whose collections of open sets exhibit prime number properties; in principle, when we map a prime $p$ into the real numbers as $w_k(p)$ we carry prime number properties along into the real numbers, and in view of the expression form of the prime number theorem, for this purpose we should use an important mathematical tool, the logarithm function, in some planned topologies: Question $2$: Let $M$ be a topological space and let $A,B$ be subsets of $M$ with $A\subset B$ such that $A$ is dense in $B$; since $A$ is dense in $B,$ is there some way, other than the subspace topology, in which a topology on $B$ may be induced? I am also interested in specialisations, for example if $M$ is Hausdorff or Euclidean. ($M=\Bbb R,\,B=[0,1],\,A=S$ or $M=\Bbb R^2,$ $B=[0,1]\times[0,1],$ $A=S\times S$) Perhaps this technique is useful: an extension of the prime number theorem: $\forall n\in\Bbb N,$ and for each subinterval $(a,b)$ of $[0.1,1)$ with $a\neq b,$ assume: $\begin{cases} U_{(a,b)}:=\{n\in\Bbb N\mid a\le r(n)\le b\},\\ \\V_{(a,b)}:=\{p\in\Bbb P\mid a\le r(p)\le b\},\\ \\U_{(a,b),n}:=\{m\in U_{(a,b)}\mid m\le n\},\\ \\V_{(a,b),n}:=\{p\in V_{(a,b)}\mid p\le n\},\\ \\w_{(a,b),n}:={\#V_{(a,b),n}\over\#U_{(a,b),n}}\cdot\log n,\\ \\w_{(a,b)}:=\lim _{n\to\infty} w_{(a,b),n}\\ \\z_{(a,b),n}:={\#V_{(a,b),n}\over\#U_{(a,b),n}}\cdot\log{(\#U_{(a,b),n})}\\ \\z_{(a,b)}:=\lim_{n\to\infty}z_{(a,b),n}\end{cases}$ Guess $2$: $\forall (a,b)\subset [0.1,1),\,w_{(a,b)}={10\over9}\cdot(b-a)$. Answer given by $@$Peter from stackexchange site: Imagine a very large number $N$ and consider the range $[10^N,10^{N+1}]$.
The natural logarithms of $10^N$ and $10^{N+1}$ only differ by $\ln(10)\approx 2.3$. Hence the reciprocals of the logarithms of all primes in this range virtually coincide. Because of the approximation $$\int_a^b \frac{1}{\ln(x)}dx$$ for the number of primes in the range $[a,b]$, the number of primes is approximately the length of the interval divided by $\frac{1}{\ln(10^N)}$, so is approximately equally distributed. Hence your conjecture is true. Benford's law seems to contradict this result, but it only applies to sequences producing primes, such as the Mersenne primes, and not if the primes are chosen randomly in the range above. Guess $3$: $\forall (a,b)\subset [0.1,1),\,z_{(a,b)}={10\over9}\cdot(b-a)$. Question $2-1$: What does $\lim_{\epsilon\to0}z_{(a-\epsilon,a+\epsilon)}=0,\,a\in(0.1,1)$ mean? Theorem $5$: Let $t_n:\Bbb N\to\Bbb N\setminus\{n\in\Bbb N: 10\mid n\}$ be a surjective, strictly monotonically increasing sequence; then $\{t_n\}_{n\in\Bbb N}$ is a cyclic group with: $\begin{cases} e=1\\ t_n^{-1}=t_{n^{-1}}\quad\text{where}\quad n\star n^{-1}=1\\ t_n\star _tt_m=t_{n\star m}\end{cases}$ such that $(\{t_n\}_{n\in\Bbb N},\star _t)=\langle 2\rangle=\langle 3\rangle$; and let $E:=\bigcup _{k\in\Bbb N} w_k(\Bbb N\setminus\{n\in\Bbb N: 10\mid n\})$, so $(E,\star _E)$ is an Abelian group with $\forall m,n\in\Bbb N,$ $\forall a,b\in\Bbb N\setminus\{n\in\Bbb N: 10\mid n\}$: $\,\,\begin{cases} e=0.1\\ w_n(a)^{-1}=w_{n^{-1}}(a^{-1})\quad\text{where}\quad n\star n^{-1}=1,\, a\star _ta^{-1}=1\\ w_n(a)\star _Ew_m(b)=w_{n\star m}(a\star _tb)\end{cases}$ such that $\langle 0.01,0.2\rangle=E\simeq\Bbb Z\oplus\Bbb Z$ ♦ Now let $(S\times S)\oplus E$ be the external direct sum of the groups $S\times S$ and $E$, with $e=(0.2,0.2,0.1)$ and $\langle (0.02,0.2,0.1),(0.2,0.02,0.1),(0.3,0.2,0.1),(0.2,0.3,0.1),(0.2,0.2,0.01),(0.2,0.2,0.2)\rangle=$ $(S\times S)\oplus E\simeq\Bbb Z\oplus\Bbb Z\oplus\Bbb Z\oplus\Bbb Z\oplus\Bbb Z\oplus\Bbb Z$.
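Peter's equidistribution argument can be probed numerically: his reasoning says each leading-digit interval receives an asymptotically proportional share of the primes, i.e. the fraction of primes $p\le N$ with $a\le r(p)\le b$ should approach $(b-a)/0.9$. A sketch (the sieve bound $10^5$ is arbitrary, and convergence is slow, so only rough agreement can be expected):

```python
import math

def sieve(n):
    """Boolean array: is_p[m] is True iff m is prime."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if is_p[p]:
            is_p[p * p::p] = [False] * len(is_p[p * p::p])
    return is_p

def r(n):
    """Decimal point in front of n, as in the text."""
    return n / 10 ** len(str(n))

def prime_fraction(a, b, N):
    """Fraction of primes p <= N with a <= r(p) <= b (the set V of the text)."""
    is_p = sieve(N)
    hits = total = 0
    for p in range(2, N + 1):
        if is_p[p]:
            total += 1
            if a <= r(p) <= b:
                hits += 1
    return hits / total

# For (a,b) = (0.2, 0.5), the predicted share is 0.3/0.9 = 1/3.
print(prime_fraction(0.2, 0.5, 10**5), (0.5 - 0.2) / 0.9)
```

At $N=10^5$ the measured share is roughly $0.34$, close to the predicted $1/3$; this probes the equidistribution behind Guesses $2$ and $3$, not the guessed limits themselves.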
<small><s>Theorem $6$: $(S,\lt _1)$ is a well-ordering set with order relation $\lt _1$ as: $\forall i,n,k\in\Bbb N$ if $p_n$ is the $n$-th prime number, the relation $\lt _1$ is defined by: $w_i(p_n)\lt _1w_i(p_{n+k})\lt _1w_{i+1}(p_n)$ or $$0.2\lt _10.3\lt _10.5\lt _10.7\lt _10.11\lt _10.13\lt _10.17\lt _1...0.02\lt _10.03\lt _10.05\lt _10.07\lt _10.011\lt _1$$ $$0.013\lt _10.017\lt _1...0.002\lt _10.003\lt _10.005\lt _10.007\lt _10.0011\lt _10.0013\lt _10.0017\lt _1...$$ and $(E,\lt _2)$ is another well-ordering set with order relation $\lt _2$ as: $\forall i,n,k\in\Bbb N$ with $10\nmid n,\, 10\nmid n+k,$ $w_i(n)\lt _2w_i(n+k)\lt _2w_{i+1}(n)$ or $$0.1\lt _2 0.2\lt _2 0.3\lt _2 ...0.9\lt _2 0.11\lt _2 0.12\lt _2 ...0.19\lt _2 0.21\lt _2 ...0.01\lt _2 0.02\lt _2 0.03\lt _2 ...0.09$$ $$\lt _2 0.011\lt _2 0.012\lt _2 ...0.019\lt _2 0.021\lt _2 ...0.001\lt _2 0.002\lt _2 0.003\lt _2 ...0.009\lt _2 0.0011\lt _2 ...$$ now $M:=S\times S\times E$ is a well-ordering set with order relation $\lt _3$ as: $\forall (a,b,t),(c,d,u)\in S\times S\times E,$ $(a,b,t)\lt _3(c,d,u)$ iff $\,\,\begin{cases} t\lt _2u & or\\ t=u,\,\, a+b\lt _2c+d & or\\ t=u,\,\, a+b=c+d,\,\, b\lt _1 d\end{cases}$ ♦</s></small> Theorem $6$: $(S,\lt _1)$ is a well-ordering set with order relation $\lt _1$ as: $\forall a,b\in S,\,a\lt_1b$ iff $a\gt b$, and $(E,\lt _2)$ is another well-ordering set with order relation $\lt _2$ as: $\forall x,y\in E,\,x\lt_2y$ iff $x\gt y$; now $M:=S\times S\times E$ is a well-ordering set with order relation $\lt _3$ as: $\forall (a,b,t),(c,d,u)\in S\times S\times E,$ $(a,b,t)\lt _3(c,d,u)$ iff $\,\,\begin{cases} t\lt _2u & or\\ t=u,\,\, a+b\lt _2c+d & or\\ t=u,\,\, a+b=c+d,\,\, b\lt _1 d\end{cases}$ ♦ Now assume $M$ is a topological space (Hausdorff space) induced by the order relation $\lt _3$. Question $3$: Is $S$ a topological group under the topology induced by the order relation $\lt_1$, and is $(S\times S)\oplus E$ a topological group under the topology of $M$?
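The order relations of the current Theorem $6$ are simple enough to sketch directly; in the sketch below Python floats stand in for elements of $S$ and $E$, which is an idealization:

```python
def lt1(a, b):
    """<_1 on S: a <_1 b iff a > b (Theorem 6)."""
    return a > b

def lt2(x, y):
    """<_2 on E: x <_2 y iff x > y."""
    return x > y

def lt3(p, q):
    """<_3 on M = S x S x E: compare third coordinates by <_2, then the sums
    a+b by <_2, then the second coordinates by <_1."""
    a, b, t = p
    c, d, u = q
    if lt2(t, u):
        return True
    if t == u and lt2(a + b, c + d):
        return True
    return t == u and a + b == c + d and lt1(b, d)

# Equal third coordinates and equal sums, so the second coordinates decide:
print(lt3((0.2, 0.3, 0.1), (0.3, 0.2, 0.1)))   # True
```

The comparator makes it easy to test trichotomy on finite samples of $M$, which is a prerequisite for the claimed well-ordering.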
A new version of Goldbach's conjecture: For each even natural number $t$ greater than $4$, and for all $c,m\in\Bbb N\cup\{0\}$ with $10^c\mid t,\, 10^{1+c}\nmid t$, let $A_m=\{(a,b)\mid a,b\in S,\, 10^{-1-m}\le a+b\lt 10^{-m}\}$; if $u$ is the number of digits of $t$ then $\exists (a,b)\in A_c$ such that $t=10^{c+u}\cdot (a+b),\, 10^{c+u}\cdot a,10^{c+u}\cdot b\in\Bbb P\setminus\{2\},\, (a,b,10^{-c-u}\cdot t)\in M$. Using homotopy groups, Goldbach's conjecture will be proved. Alireza Badali 08:27, 31 March 2018 (CEST)

Polignac's conjecture

In the previous chapter I used an important technique, Theorem $1$, for presenting prime number properties as density in the discussion, which the prime number theorem made applicable; but now I want to apply another method to the twin prime conjecture (Polignac's conjecture): in principle, prime number properties are ubiquitous within the natural numbers themselves. Theorem $1$: $(\Bbb N,\star _T)$ is a group with: $\forall m,n\in\Bbb N,$ $\begin{cases} (12m-10)\star_T(12m-9)=1=(12m-8) \star_T(12m-5)=(12m-7) \star_T(12m-4)=\\ (12m-6) \star_T(12m-1)=(12m-3) \star_T(12m)=(12m-2) \star_T(12m+1)\\ (12m-10) \star_T(12n-10)=12m+12n-19\\ (12m-10) \star_T(12n-9)=\begin{cases} 12m-12n+1 & 12m-10\gt 12n-9\\ 12n-12m-2 & 12n-9\gt 12m-10\end{cases}\\ (12m-10) \star_T(12n-8)=12m+12n-15\\ (12m-10) \star_T(12n-7)=12m+12n-20\\ (12m-10) \star_T(12n-6)=12m+12n-11\\ (12m-10) \star_T(12n-5)=\begin{cases} 12m-12n-3 & 12m-10\gt 12n-5\\ 12n-12m+8 & 12n-5\gt 12m-10\end{cases}\\ (12m-10) \star_T(12n-4)=\begin{cases} 12m-12n-6 & 12m-10\gt 12n-4\\ 12n-12m+3 & 12n-4\gt 12m-10\end{cases}\\ (12m-10) \star_T(12n-3)=12m+12n-18\\ (12m-10) \star_T(12n-2)=\begin{cases} 12m-12n-10 & 12m-10\gt 12n-2\\ 12n-12m+11 & 12n-2\gt 12m-10\end{cases}\\ (12m-10) \star_T(12n-1)=\begin{cases} 12m-12n-7 & 12m-10\gt 12n-1\\ 12n-12m+12 & 12n-1\gt 12m-10\end{cases}\\ (12m-10) \star_T(12n)=\begin{cases} 12m-12n-8 & 12m-10\gt 12n\\ 12n-12m+7 & 12n\gt 12m-10\end{cases}\\ (12m-10)
\star_T(12n+1)=12m+12n-10\\ (12m-9) \star_T(12n-9)=12m+12n-16\\ (12m-9) \star_T(12n-8)=\begin{cases} 12m-12n & 12m-9\gt 12n-8\\ 12n-12m+5 & 12n-8\gt 12m-9\end{cases}\\ (12m-9) \star_T(12n-7)=\begin{cases} 12m-12n-1 & 12m-9\gt 12n-7\\ 12n-12m+2 & 12n-7\gt 12m-9\end{cases}\\ (12m-9) \star_T(12n-6)=\begin{cases} 12m-12n-4 & 12m-9\gt 12n-6\\ 12n-12m+9 & 12n-6\gt 12m-9\end{cases}\\ (12m-9) \star_T(12n-5)=12m+12n-12\\ (12m-9) \star_T(12n-4)=12m+12n-17\\ (12m-9) \star_T(12n-3)=\begin{cases} 12m-12n-5 & 12m-9\gt 12n-3\\ 12n-12m+4 & 12n-3\gt 12m-9\end{cases}\\ (12m-9) \star_T(12n-2)=12m+12n-9\\ (12m-9) \star_T(12n-1)=12m+12n-14\\ (12m-9) \star_T(12n)=12m+12n-13\\ (12m-9)\star_T(12n+1)=\begin{cases} 12m-12n-9 & 12m-9\gt 12n+1\\ 12n-12m+6 & 12n+1\gt 12m-9\end{cases}\\ (12m-8) \star_T(12n-8)=12m+12n-11\\ (12m-8) \star_T(12n-7)=12m+12n-18\\ (12m-8) \star_T(12n-6)=12m+12n-7\\ (12m-8) \star_T(12n-5)=\begin{cases} 12m-12n+1 & 12m-8\gt 12n-5\\ 12n-12m-2 & 12n-5\gt 12m-8\end{cases}\\ (12m-8) \star_T(12n-4)=\begin{cases} 12m-12n+2 & 12m-8\gt 12n-4\\ 12n-12m-1 & 12n-4\gt 12m-8\\ 2 & m=n\end{cases}\\ (12m-8) \star_T(12n-3)=12m+12n-10\\ (12m-8) \star_T(12n-2)=\begin{cases} 12m-12n-8 & 12m-8\gt 12n-2\\ 12n-12m+7 & 12n-2\gt 12m-8\end{cases}\\ (12m-8) \star_T(12n-1)=\begin{cases} 12m-12n-3 & 12m-8\gt 12n-1\\ 12n-12m+8 & 12n-1\gt 12m-8\end{cases}\\ (12m-8) \star_T(12n)=\begin{cases} 12m-12n-6 & 12m-8\gt 12n\\ 12n-12m+3 & 12n\gt 12m-8\end{cases}\\ (12m-8) \star_T(12n+1)=12m+12n-8\\ (12m-7) \star_T(12n-7)=12m+12n-15\\ (12m-7) \star_T(12n-6)=12m+12n-10\\ (12m-7) \star_T(12n-5)=\begin{cases} 12m-12n-6 & 12m-7\gt 12n-5\\ 12n-12m+3 & 12n-5\gt 12m-7\end{cases}\\ (12m-7) \star_T(12n-4)=\begin{cases} 12m-12n+1 & 12m-7\gt 12n-4\\ 12n-12m-2 & 12n-4\gt 12m-7\end{cases}\\ (12m-7) \star_T(12n-3)=12m+12n-11\\ (12m-7) \star_T(12n-2)=\begin{cases} 12m-12n-7 & 12m-7\gt 12n-2\\ 12n-12m+12 & 12n-2\gt 12m-7\end{cases}\\ (12m-7) \star_T(12n-1)=\begin{cases} 12m-12n-8 & 12m-7\gt 12n-1\\ 12n-12m+7 & 12n-1\gt 
12m-7\end{cases}\\ (12m-7) \star_T(12n)=\begin{cases} 12m-12n-3 & 12m-7\gt 12n\\ 12n-12m+8 & 12n\gt 12m-7\end{cases}\\ (12m-7) \star_T(12n+1)=12m+12n-7\\ (12m-6) \star_T(12n-6)=12m+12n-3\\ (12m-6) \star_T(12n-5)=\begin{cases} 12m-12n+5 & 12m-6\gt 12n-5\\ 12n-12m & 12n-5\gt 12m-6\\ 5 & m=n\end{cases}\\ (12m-6) \star_T(12n-4)=\begin{cases} 12m-12n+4 & 12m-6\gt 12n-4\\ 12n-12m-5 & 12n-4\gt 12m-6\\ 4 & m=n\end{cases}\\ (12m-6) \star_T(12n-3)=12m+12n-8\\ (12m-6) \star_T(12n-2)=\begin{cases} 12m-12n-6 & 12m-6\gt 12n-2\\ 12n-12m+3 & 12n-2\gt 12m-6\end{cases}\\ (12m-6) \star_T(12n-1)=\begin{cases} 12m-12n+1 & 12m-6\gt 12n-1\\ 12n-12m-2 & 12n-1\gt 12m-6\end{cases}\\ (12m-6) \star_T(12n)=\begin{cases} 12m-12n+2 & 12m-6\gt 12n\\ 12n-12m-1 & 12n\gt 12m-6\\ 2 & m=n\end{cases}\\ (12m-6) \star_T(12n+1)=12m+12n-6\\ (12m-5) \star_T(12n-5)=12m+12n-14\\ (12m-5) \star_T(12n-4)=12m+12n-13\\ (12m-5) \star_T(12n-3)=\begin{cases} 12m-12n-1 & 12m-5\gt 12n-3\\ 12n-12m+2 & 12n-3\gt 12m-5\end{cases}\\ (12m-5) \star_T(12n-2)=12m+12n-5\\ (12m-5) \star_T(12n-1)=12m+12n-4\\ (12m-5) \star_T(12n)=12m+12n-9\\ (12m-5) \star_T(12n+1)=\begin{cases} 12m-12n-5 & 12m-5\gt 12n+1\\ 12n-12m+4 & 12n+1\gt 12m-5\end{cases}\\ (12m-4) \star_T(12n-4)=12m+12n-12\\ (12m-4) \star_T(12n-3)=\begin{cases} 12m-12n & 12m-4\gt 12n-3\\ 12n-12m+5 & 12n-3\gt 12m-4\end{cases}\\ (12m-4) \star_T(12n-2)=12m+12n-4\\ (12m-4) \star_T(12n-1)=12m+12n-9\\ (12m-4) \star_T(12n)=12m+12n-14\\ (12m-4) \star_T(12n+1)=\begin{cases} 12m-12n-4 & 12m-4\gt 12n+1\\ 12n-12m+9 & 12n+1\gt 12m-4\end{cases}\\ (12m-3) \star_T(12n-3)=12m+12n-7\\ (12m-3) \star_T(12n-2)=\begin{cases} 12m-12n-3 & 12m-3\gt 12n-2\\ 12n-12m+8 & 12n-2\gt 12m-3\end{cases}\\ (12m-3) \star_T(12n-1)=\begin{cases} 12m-12n-6 & 12m-3\gt 12n-1\\ 12n-12m+3 & 12n-1\gt 12m-3\end{cases}\\ (12m-3) \star_T(12n)=\begin{cases} 12m-12n+1 & 12m-3\gt 12n\\ 12n-12m-2 & 12n\gt 12m-3\end{cases}\\ (12m-3) \star_T(12n+1)=12m+12n-3\\ (12m-2) \star_T(12n-2)=12m+12n-2\\ (12m-2) \star_T(12n-1)=12m+12n-1\\ 
(12m-2) \star_T(12n)=12m+12n\\ (12m-2) \star_T(12n+1)=\begin{cases} 12m-12n-2 & 12m-2\gt 12n+1\\ 12n-12m+1 & 12n+1\gt 12m-2\end{cases}\\ (12m-1) \star_T(12n-1)=12m+12n\\ (12m-1) \star_T(12n)=12m+12n-5\\ (12m-1) \star_T(12n+1)=\begin{cases} 12m-12n-1 & 12m-1\gt 12n+1\\ 12n-12m+2 & 12n+1\gt 12m-1\end{cases}\\ (12m) \star_T(12n)=12m+12n-4\\ (12m) \star_T(12n+1)=\begin{cases} 12m-12n & 12m\gt 12n+1\\ 12n-12m+5 & 12n+1\gt 12m\end{cases}\\ (12m+1) \star_T(12n+1)=12m+12n+1\end{cases}$ where $\forall k\in\Bbb N,\,\langle 2\rangle =\langle 3\rangle =\langle (2k+1)\star _T (2k+3)\rangle=(\Bbb N,\star _T)\simeq (\Bbb Z,+)$ and $\langle (2k)\star _T(2k+2)\rangle\neq\Bbb N$; each prime in $\langle 5\rangle$ is of the form $5+12k$ or $13+12k$, $k\in\Bbb N\cup\{0\}$, each prime in $\langle 7\rangle$ is of the form $7+12k$ or $13+12k$, $k\in\Bbb N\cup\{0\}$, and $\langle 5\rangle\cap\langle 7\rangle=\langle 13\rangle$ and $\Bbb N=\langle 5\rangle\oplus\langle 7\rangle$; but there is no proper subgroup containing all primes of the form $11+12k,$ $k\in\Bbb N\cup\{0\}$ (probably I will have to construct a better one).
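The law $\star _T$ is again transport of $(\Bbb Z,+)$ along an enumeration of the integers, displayed in the interleaved rows that follow; a sketch that hard-codes that prefix and spot-checks a few of the case equations (the truncation at $49$ positions is just what the displayed rows cover):

```python
# Prefix of the enumeration of Z used for *_T: position i+1 holds seq[i].
seq = [0, -1, 1, -3, -2, -5, 3, 2, -4, 6, 5, 4, -6, -7, 7, -9, -8, -11,
       9, 8, -10, 12, 11, 10, -12, -13, 13, -15, -14, -17, 15, 14, -16,
       18, 17, 16, -18, -19, 19, -21, -20, -23, 21, 20, -22, 24, 23, 22, -24]
pos = {z: i + 1 for i, z in enumerate(seq)}

def star_T(a, b):
    """a *_T b: pull back along the enumeration, add in Z, push forward."""
    return pos[seq[a - 1] + seq[b - 1]]

# Spot-checks against the case table, e.g.
# (12m-10) *_T (12n-10) = 12m+12n-19 with m=1, n=2:  2 *_T 14 = 17,
# and the inverse pair (12m-10) *_T (12m-9) = 1 with m=1:  2 *_T 3 = 1.
print(star_T(2, 14), star_T(2, 3))
```

Within the range covered by the prefix, every branch of the table checked this way agrees with the transported addition.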
$$0,-1,1,-3,-2,-5,3,2,-4,6,5,4,-6,-7,7,-9,-8,-11,9,8,-10,12,11,10,-12,-13,13,-15,$$ $$1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,$$ $$-14,-17,15,14,-16,18,17,16,-18,-19,19,-21,-20,-23,21,20,-22,24,23,22,-24,...$$ $$29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,...$$ Guess $1$: For each group $(\Bbb N,\star)$ generated by the algorithm above, if $p_i$ is the $i$-th prime number and $x_i$ is the $i$-th composite number, then $\exists m\in\Bbb N$ such that $\forall n\in\Bbb N$ with $n\ge m$ we have: $2\star3\star5\star7...\star p_n=\prod_{i=1}^{n}p_i\gt\prod _{i=1}^{n}x_i=4\star6\star8\star9...\star x_n$ Guess $2$: For each group $(\Bbb N,\star)$ generated by the algorithm above, we have: $\lim_{n\to\infty}\prod _{n=1}^{\infty}p_n,\lim_{n\to\infty}\prod _{n=1}^{\infty}x_n\in\Bbb N,\,\,(\lim_{n\to\infty}\prod _{n=1}^{\infty}p_n)\star(\lim_{n\to\infty}\prod _{n=1}^{\infty}x_n)=1$. Now let the group $G$ be the external direct sum of three copies of the group $(\Bbb N,\star _T)$, hence $G=\Bbb N\oplus\Bbb N\oplus\Bbb N$. Theorem $2$: $(\Bbb N\times\Bbb N\times\Bbb N,\lt _T)$ is a well-ordering set with order relation $\lt _T$ as: $\forall (m_1,n_1,t_1),(m_2,n_2,t_2)\in\Bbb N\times\Bbb N\times\Bbb N,\quad (m_1,n_1,t_1)\lt _T(m_2,n_2,t_2)$ if $\begin{cases} t_1\lt t_2 & or\\ t_1=t_2,\, m_1-n_1\lt m_2-n_2 & or\\ t_1=t_2,\, m_1-n_1=m_2-n_2,\, n_1\lt n_2\end{cases}$ and suppose $M=\Bbb N\times\Bbb N\times\Bbb N$ is a topological space (Hausdorff space) induced by the order relation $\lt _T$. Question $1$: Is $G$ a topological group with the topology of $M$? Now, regarding the group $(\Bbb N,\star_T)$, I am planning an algebraic form of the prime number theorem towards the twin prime conjecture. Recall the statement of the prime number theorem: Let $x$ be a positive real number, and let $\pi(x)$ denote the number of primes that are less than or equal to $x$.
Then the ratio $\pi(x)\cdot{\log x\over x}$ can be made arbitrarily close to $1$ by taking $x$ sufficiently large. Question $2$: Suppose $\pi_1(x)$ counts the prime numbers of the form $4k+1$ that are less than $x$ and $\pi_2(x)$ counts the prime numbers of the form $4k+3$ that are less than $x$. Does $\lim_{x\to\infty}\pi_1(x)\cdot{\log x\over x}=0.5=\lim_{x\to\infty}\pi_2(x)\cdot{\log x\over x}\ ?$ Answer given by $@$Milo Brandt from stackexchange site: Basically, for any $k$, the primes are equally distributed across the congruence classes $\langle n\rangle$ mod $k$ where $n$ and $k$ are coprime. This result is known as the prime number theorem for arithmetic progressions. [Wikipedia](https://en.wikipedia.org/wiki/Prime_number_theorem#Prime_number_theorem_for_arithmetic_progressions) discusses it with a number of references and one can find a proof of it by Ivan Soprounov [here](http://academic.csuohio.edu/soprunov_i/pdf/primes.pdf), which makes use of the Dirichlet theorem on arithmetic progressions (which just says that $\pi_1$ and $\pi_2$ are unbounded) to prove this stronger result. Question $3$: For each neutral infinite subset $A$ of $\Bbb N$, does there exist a cyclic group like $(\Bbb N,\star)$ such that $A$ is a maximal subgroup of $\Bbb N$? Question $4$: If $(\Bbb N,\star_1)$ is a cyclic group and $n\in\Bbb N$ and $A=\{a_i\mid i\in\Bbb N\}$ is a non-trivial subgroup of $\Bbb N$, then does there exist another cyclic group $(\Bbb N,\star_2)$ such that $\prod _{i=1}^{\infty}a_i=a_1\star_2a_2\star_2a_3\star_2...=n$? Question $5$: If $(\Bbb N,\star)$ is a cyclic group and $n\in\Bbb N$, then does there exist a non-trivial subset $A=\{a_i\mid i\in\Bbb N\}$ of $\Bbb P$ with $\#(\Bbb P\setminus A)=\aleph_0$ and $\prod _{i=1}^{\infty}a_i=a_1\star a_2\star a_3\star...=n$?
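This equidistribution is easy to probe numerically. A small plain-Python sketch (my own illustration): sieve the primes up to $x$, count those $\equiv1$ and $\equiv3\pmod4$, and compare $\pi_i(x)\cdot\log x/x$ with $0.5$:

```python
import math

def primes_upto(x):
    s = bytearray([1]) * (x + 1)          # simple sieve of Eratosthenes
    s[0] = s[1] = 0
    for i in range(2, int(x ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return [i for i in range(2, x + 1) if s[i]]

x = 10 ** 5                               # sample cutoff; larger x drifts closer to 0.5
ps = primes_upto(x)
pi1 = sum(1 for p in ps if p % 4 == 1)    # primes of the form 4k+1
pi2 = sum(1 for p in ps if p % 4 == 3)    # primes of the form 4k+3
r1 = pi1 * math.log(x) / x
r2 = pi2 * math.log(x) / x
print(pi1, pi2, r1, r2)                   # both ratios drift toward 0.5 as x grows
```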
Question $6$: If $(\Bbb N,\star_1)$ and $(\Bbb N,\star_2)$ are cyclic groups and $A=\{a_i\mid i\in\Bbb N\}$ is a non-trivial subgroup of $(\Bbb N,\star_1)$ and $B=A\cap\Bbb P$, then does $\prod_{i=1}^{\infty}a_i=a_1\star_2a_2\star_2a_3\star_2...\in\Bbb N$? Alireza Badali 12:34, 28 April 2018 (CEST) Some dissimilar conjectures Algebraic analytical number theory Alireza Badali 16:51, 4 July 2018 (CEST) The Collatz conjecture is a conjecture in mathematics that concerns a sequence defined as follows: start with any positive integer $n$. Then each term is obtained from the previous term as follows: if the previous term is even, the next term is one half the previous term; otherwise, the next term is $3$ times the previous term plus $1$. The conjecture is that no matter what value of $n$, the sequence will always reach $1$. The conjecture is named after the German mathematician Lothar Collatz, who introduced the idea in $1937$, two years after receiving his doctorate. It is also known as the $3n + 1$ conjecture. Theorem $1$: If $(\Bbb N,\star_{\Bbb N})$ is a cyclic group with $e_{\Bbb N}=1$ & $\langle m_1\rangle=\langle m_2\rangle=(\Bbb N,\star_{\Bbb N})$ and $f:\Bbb N\to\Bbb N$ is a bijection such that $f(1)=1$, then $(\Bbb N,\star _f)$ is a cyclic group with: $e_f=1$ & $\langle f(m_1)\rangle=\langle f(m_2)\rangle=(\Bbb N,\star_f)$ & $\forall m,n\in\Bbb N,$ $f(m)\star _ff(n)=f(m\star_{\Bbb N}n)$ & $(f(n))^{-1}=f(n^{-1})$ where $n\star_{\Bbb N}n^{-1}=1$. I want to make a group in accordance with the Collatz graph, but $@$RobertFrost from the stackexchange site advised me that, in addition, it needs to be a torsion group, because then it can be used to show convergence; meanwhile, I would like to apply lines in the Euclidean plane $\Bbb R^2$ too. Question $1$: What is the function generating this sequence of natural numbers? $1,2,4,3,6,5,10,7,14,8,16,9,18,11,22,12,24,13,26,15,30,17,34,19,38,20,40,21,42,23,46,25,50,...$ such that we begin from $1$ and then write $2$, then $2\times2$, then $3$, then $2\times3$, then ...
but if $n$ is even and we have previously written $0.5n$ and then $n$, we ignore $n$ and continue: write $n+1$ and then $2n+2$, and so on; for example, we have $1,2,4,3,6,5,10$, so after $10$ we should write $7,14,...$ because we have previously written $3,6$. Answer given by $@$r.e.s from stackexchange site: Following is a definition of your sequence without using recursion. Let $S=(S_0,S_1,S_2,\ldots)$ be the increasing sequence of positive integers that are expressible as either $2^e$ or as $o_1\cdot 2^{o_2}$, where $e$ is an even nonnegative integer, $o_1>1$ is an odd positive integer and $o_2$ is an odd positive integer. Thus $$S=(1, 4, 6, 10, 14, 16, 18, 22, 24, 26, 30, 34, 38, 40,42,\ldots).$$ Let $\bar{S}$ be the complement of $S$ with respect to the positive integers; i.e., $$\bar{S}=(2, 3, 5, 7, 8, 9, 11, 12, 13, 15, 17, 19, 20, 21, 23, 25,\ldots).$$ Your sequence is then $T=(T_0,T_1,T_2,\ldots)$, where $$T_n:=\begin{cases}S_{n\over 2}&\text{ if $n$ is even}\\ \bar{S}_{n-1\over 2}&\text{ if $n$ is odd.} \end{cases} $$ Thus $T=(1, 2, 4, 3, 6, 5, 10, 7, 14, 8, 16, 9, 18, 11, 22, 12, 24, 13, 26, 15, 30, 17, 34, 19, 38, 20, \ldots).$ Sequences $S,\bar{S},T$ are OEIS [A171945](http://oeis.org/A171945), [A053661](http://oeis.org/A053661), [A034701](http://oeis.org/A034701) respectively. These are all discussed in ["The vile, dopey, evil and odious game players"](https://www.sciencedirect.com/science/article/pii/S0012365X11001427).
Sage code:

def is_in_S(n):
    return ( (n.valuation(2) % 2 == 0) and (n.is_power_of(2)) ) or ( (n.valuation(2) % 2 == 1) and not(n.is_power_of(2)) )

S  = [n for n in [1..50] if is_in_S(n)]
S_ = [n for n in [1..50] if not is_in_S(n)]
T = []
for i in range(max(len(S), len(S_))):
    if i % 2 == 0:
        T += [S[i/2]]
    else:
        T += [S_[(i-1)/2]]

print S
print S_
print T

[1, 4, 6, 10, 14, 16, 18, 22, 24, 26, 30, 34, 38, 40, 42, 46, 50]
[2, 3, 5, 7, 8, 9, 11, 12, 13, 15, 17, 19, 20, 21, 23, 25, 27, 28, 29, 31, 32, 33, 35, 36, 37, 39, 41, 43, 44, 45, 47, 48, 49]
[1, 2, 4, 3, 6, 5, 10, 7, 14, 8, 16, 9, 18, 11, 22, 12, 24, 13, 26, 15, 30, 17, 34, 19, 38, 20, 40, 21, 42, 23, 46, 25, 50]

Theorem $2$: If $(\Bbb N,\star_1)$ & $(\Bbb N,\star_2)$ are cyclic groups with generators respectively $u_1$ & $v_1$ and $u_2$ & $v_2$, then $C_1=\{(m,2m)\mid m\in\Bbb N\}$ is a cyclic group with: $\begin{cases} e_{C_1}=(1,2)\\ \\\forall m,n\in\Bbb N,\,(m,2m)\star_{C_1}(n,2n)=(m\star_1n,2(m\star_1n))\\ (m,2m)^{-1}=(m^{-1},2\times m^{-1})\qquad\text{that}\quad m\star_1m^{-1}=1\\ \\C_1=\langle(u_1,2u_1)\rangle=\langle(v_1,2v_1)\rangle\end{cases}$ and $C_2=\{(3m-1,2m-1)\mid m\in\Bbb N\}$ is a cyclic group with: $\begin{cases} e_{C_2}=(2,1)\\ \\\forall m,n\in\Bbb N,\,(3m-1,2m-1)\star_{C_2}(3n-1,2n-1)=(3(m\star_2n)-1,2(m\star_2n)-1)\\ (3m-1,2m-1)^{-1}=(3\times m^{-1}-1,2\times m^{-1}-1)\qquad\text{that}\quad m\star_2 m^{-1}=1\\ \\C_2=\langle(3u_2-1,2u_2-1)\rangle=\langle(3v_2-1,2v_2-1)\rangle\end{cases}$• And let $C:=C_1\oplus C_2$ be the external direct sum of the groups $C_1$ & $C_2$. Question $2$: What are the maximal subgroups of $C_1$ & $C_2$ & $C$?
Theorem $3$: If $(\Bbb N,\star)$ is a cyclic group with generators $u,v$ and identity element $e=1$ and $f:\Bbb N\to\Bbb R$ is an injection, then $(f(\Bbb N),\star_f)$ is a cyclic group with generators $f(u),f(v)$ and identity element $e_f=f(1)$ and operation law: $\forall m,n\in\Bbb N,$ $f(m)\star_ff(n)=f(m\star n)$ and inverse law: $\forall n\in\Bbb N,$ $(f(n))^{-1}=f(n^{-1})$ where $n\star n^{-1}=1$. Suppose $\forall m,n\in\Bbb N,\qquad$ $\begin{cases} m\star 1=m\\ (4m)\star (4m-2)=1=(4m+1)\star (4m-1)\\ (4m-2)\star (4n-2)=4m+4n-5\\ (4m-2)\star (4n-1)=4m+4n-2\\ (4m-2)\star (4n)=\begin{cases} 4m-4n-1 & 4m-2\gt 4n\\ 4n-4m+1 & 4n\gt 4m-2\\ 3 & m=n+1\end{cases}\\ (4m-2)\star (4n+1)=\begin{cases} 4m-4n-2 & 4m-2\gt 4n+1\\ 4n-4m+4 & 4n+1\gt 4m-2\end{cases}\\ (4m-1)\star (4n-1)=4m+4n-1\\ (4m-1)\star (4n)=\begin{cases} 4m-4n+2 & 4m-1\gt 4n\\ 4n-4m & 4n\gt 4m-1\\ 2 & m=n\end{cases}\\ (4m-1)\star (4n+1)=\begin{cases} 4m-4n-1 & 4m-1\gt 4n+1\\ 4n-4m+1 & 4n+1\gt 4m-1\\ 3 & m=n+1\end{cases}\\ (4m)\star (4n)=4m+4n-3\\ (4m)\star (4n+1)=4m+4n\\ (4m+1)\star (4n+1)=4m+4n+1\\ \Bbb N=\langle 2\rangle=\langle 4\rangle\end{cases}$ and let $C_1=\{(m,2m)\mid m\in\Bbb N\}$ be a cyclic group with: $\begin{cases} e_{C_1}=(1,2)\\ \\\forall m,n\in\Bbb N,\,(m,2m)\star_{C_1}(n,2n)=(m\star n,2(m\star n))\\ (m,2m)^{-1}=(m^{-1},2\times m^{-1})\qquad\text{that}\quad m\star m^{-1}=1\\ \\C_1=\langle(2,4)\rangle=\langle(4,8)\rangle\end{cases}$ and let $C_2=\{(3m-1,2m-1)\mid m\in\Bbb N\}$ be a cyclic group with: $\begin{cases} e_{C_2}=(2,1)\\ \\\forall m,n\in\Bbb N,\, (3m-1,2m-1)\star_{C_2}(3n-1,2n-1)=(3(m\star n)-1,2(m\star n)-1)\\ (3m-1,2m-1)^{-1}=(3\times m^{-1}-1,2\times m^{-1}-1)\qquad\text{that}\quad m\star m^{-1}=1\\ \\C_2=\langle(5,3)\rangle=\langle(11,7)\rangle\end{cases}$. And let $C:=C_1\oplus C_2$ be the external direct sum of the groups $C_1$ & $C_2$. Question $3$: What are the maximal subgroups of $C_1$ & $C_2$ & $C$?
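The mod-4 table above can also be machine-checked as transport of $(\Bbb Z,+)$. Assuming (my reconstruction, not stated in the text) the bijection $g:\Bbb Z\to\Bbb N$ with $g(0)=1$, $g(k)=2k$ for odd $k>0$, $g(k)=2k-1$ for even $k>0$, $g(-m)=2m+2$ for odd $m>0$ and $g(-m)=2m+1$ for even $m>0$, every case of the table agrees with $a\star b=g(g^{-1}(a)+g^{-1}(b))$:

```python
def g(k):
    # assumed bijection Z -> N: 0 -> 1; 1,2,3,4,... -> 2,3,6,7,...; -1,-2,-3,... -> 4,5,8,9,...
    if k == 0:
        return 1
    if k > 0:
        return 2 * k if k % 2 == 1 else 2 * k - 1
    m = -k
    return 2 * m + 2 if m % 2 == 1 else 2 * m + 1

G_INV = {g(k): k for k in range(-2000, 2001)}   # partial inverse table

def star(a, b):                                  # transported addition on N
    return g(G_INV[a] + G_INV[b])
```

For example $2\star2=3$, and the two generators are $2=g(1)$ and $4=g(-1)$, matching $\Bbb N=\langle2\rangle=\langle4\rangle$.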
Alireza Badali 10:02, 12 May 2018 (CEST) Erdős–Straus conjecture Theorem: If $(\Bbb N,\star)$ is a cyclic group with identity element $e=1$ and generators $a,b$ then $E=\{({1\over x},{1\over y},{1\over z},{-4\over n+1},n)\mid x,y,z,n\in\Bbb N\}$ is an Abelian group with: $\forall x,y,z,n,x_1,y_1,z_1,n_1\in\Bbb N$ $\begin{cases} e_E=(1,1,1,-2,1)=({1\over 1},{1\over 1},{1\over 1},{-4\over 1+1},1)\\ \\({1\over x},{1\over y},{1\over z},{-4\over n+1},n)^{-1}=({1\over x^{-1}},{1\over y^{-1}},{1\over z^{-1}},\frac{-4}{n^{-1}+1},n^{-1})\quad\text{that}\\ x\star x^{-1}=1=y\star y^{-1}=z\star z^{-1}=n\star n^{-1}\\ \\({1\over x},{1\over y},{1\over z},\frac{-4}{n+1},n)\star_E({1\over x_1},{1\over y_1},{1\over z_1},\frac{-4}{n_1+1},n_1)=(\frac{1}{x\star x_1},\frac{1}{y\star y_1},\frac{1}{z\star z_1},\frac{-4}{n\star {n_1}+1},n\star n_1)\\ \\E=\langle({1\over a},1,1,-2,1),(1,{1\over a},1,-2,1),(1,1,{1\over a},-2,1),(1,1,1,\frac{-4}{a+1},1),(1,1,1,-2,a)\rangle=\\ \langle({1\over b},1,1,-2,1),(1,{1\over b},1,-2,1),(1,1,{1\over b},-2,1),(1,1,1,\frac{-4}{b+1},1),(1,1,1,-2,b)\rangle\end{cases}$• Let $(\Bbb N,\star)$ be the cyclic group with: $\begin{cases} n\star 1=n\\ (2n)\star (2n+1)=1\\ (2n)\star (2m)=2n+2m\\ (2n+1)\star (2m+1)=2n+2m+1\\ (2n)\star (2m+1)=\begin{cases} 2m-2n+1 & 2m+1\gt 2n\\ 2n-2m & 2n\gt 2m+1\end{cases}\\\Bbb N=\langle 2\rangle =\langle 3\rangle \end{cases}$ Question: Is $E_0=\{({1\over x},{1\over y},{1\over z},\frac{-4}{n+1},n)\mid x,y,z,n\in\Bbb N,\, {1\over x}+{1\over y}+{1\over z}-{4\over n+1}=0\}$ a subgroup of $E$? Landau's fourth problem Friedlander–Iwaniec theorem: there are infinitely many prime numbers of the form $a^2+b^4$. I want to use this theorem for Landau's fourth problem; prime number properties have already been applied in the Friedlander–Iwaniec theorem, hence there is no need for the prime number theorem or its other forms or extensions.
Theorem: If $(\Bbb N,\star)$ is a cyclic group with identity element $e=1$ and generators $u,v$ then $F=\{(a^2,b^4)\mid a,b\in\Bbb N\}$ is a group with: $\forall a,b,c,d\in\Bbb N\,$ $\begin{cases} e_F=(1,1)\\ (a^2,b^4)\star_F(c^2,d^4)=((a\star c)^2,(b\star d)^4)\\ (a^2,b^4)^{-1}=((a^{-1})^2,(b^{-1})^4)\qquad\text{that}\quad a\star a^{-1}=1=b\star b^{-1}\\ F=\langle (1,u^4),(u^2,1)\rangle=\langle (1,v^4),(v^2,1)\rangle\end{cases}$ now let $H=\langle\{(a^2,b^4)\mid a,b\in\Bbb N,\,b\neq 1\}\rangle$ and let $G=F/H$ be the quotient group of $F$ by $H$. ($G$ is a group capturing prime number properties only of the form $1+n^2$.) And also $L=\{1+n^2\mid n\in\Bbb N\}$ is a cyclic group with: $\forall m,n\in\Bbb N$ $\begin{cases} e_L=2=1+1^2\\ (1+n^2)\star_L(1+m^2)=1+(n\star m)^2\\ (1+n^2)^{-1}=1+(n^{-1})^2\quad\text{that}\;n\star n^{-1}=1\\ L=\langle 1+u^2\rangle=\langle 1+v^2\rangle\end{cases}$ but on the other hand we have $L\simeq G$, hence we can apply $L$ instead of $G$; of course, since we are working on the natural numbers, we could in any case have considered the group $L$ from the beginning, without involving the group $G$. Question $1$: For each neutral cyclic group on $\Bbb N$, what are the maximal subgroups of $L$? Guess $1$: For each cyclic group structure on $\Bbb N$ like $(\Bbb N,\star)$, for each non-trivial subgroup of $\Bbb N$ like $T$ we have $T\cap\Bbb P\neq\emptyset$. I think this guess must be proved via the prime number theorem. For each neutral cyclic group on $\Bbb N$, if $L\cap\Bbb P=\{1+n_1^2,1+n_2^2,...,1+n_k^2\},\,k\in\Bbb N$ and if $A=\bigcap _{i=1}^k\langle 1+n_i^2\rangle$, then $\exists m\in\Bbb N$ such that $A=\langle 1+m^2\rangle$ & $m\neq n_i$ for $i=1,2,3,...,k$ (clearly $k\gt1$), so we have: $A\cap\Bbb P=\emptyset$. Question $2$: Is $A$ the unique greatest subgroup of $L$ such that $A\cap\Bbb P=\emptyset$?
Lemoine's conjecture Theorem: If $(\Bbb N,\star)$ is a cyclic group with identity element $e=1$ & generators $u,v$ then $L=\{(p_{n_1},p_{n_2},p_{n_3},-2n-5)\mid n,n_1,n_2,n_3\in\Bbb N,\,p_{n_i}$ is the $n_i$-th prime for $i=1,2,3\}$ is an Abelian group with: $\forall n_1,n_2,n_3,n,m_1,m_2,m_3,m\in\Bbb N$ $\begin{cases} e_L=(2,2,2,-7)=(2,2,2,-2\times 1-5)\\ \\(p_{n_1},p_{n_2},p_{n_3},-2n-5)\star_L(p_{m_1},p_{m_2},p_{m_3},-2m-5)=(p_{n_1\star m_1},p_{n_2\star m_2},p_{n_3\star m_3},-2\times(n\star m)-5)\\ \\(p_{n_1},p_{n_2},p_{n_3},-2n-5)^{-1}=(p_{n_1^{-1}},p_{n^{-1}_2},p_{n_3^{-1}},-2\times n^{-1}-5)\quad\text{that}\\ n_1\star n_1^{-1}=1=n_2\star n_2^{-1}=n_3\star n_3^{-1}=n\star n^{-1}\\ \\L=\langle(p_u,2,2,-7),(2,p_u,2,-7),(2,2,p_u,-7),(2,2,2,-2u-5)\rangle=\\\langle(p_v,2,2,-7),(2,p_v,2,-7),(2,2,p_v,-7),(2,2,2,-2v-5)\rangle\end{cases}$• Theorem: $\forall n\in\Bbb N,\,\exists (p_{m_1},p_{m_2},p_{m_3},-2n-5)\in(L,\star_L)$ such that $p_{m_1}+p_{m_2}+p_{m_3}-2n-5=0$. Proof: using Goldbach's weak conjecture. Question: Is $L_0=\{(p_{m_1},p_{m_2},p_{m_2},-2n-5)\mid\forall m_1,m_2\in\Bbb N,\,\exists n\in\Bbb N,$ such that $p_{m_1}+2p_{m_2}-2n-5=0\}$ a subgroup of $L$? Alireza Badali 19:30, 3 June 2018 (CEST) Primes with Beatty sequences How can we understand $\infty$? We humans can only think in terms of natural numbers; other matters are only theorizing, and algebraic theories can be some features toward this aim. Conjecture: If $r$ is an irrational number and $1\lt r\lt 2$, then there are infinitely many primes in the set $L=\{\text{floor}(n\cdot r)\mid n\in\Bbb N\}$.
Theorem $1$: If $(\Bbb N,\star)$ is a cyclic group with identity element $e=1$ & generators $u,v$ and $r\in[1,2]\setminus\Bbb Q$ then $L=\{\lfloor n\cdot r\rfloor\mid n\in\Bbb N\}$ is another cyclic group with: $\forall m,n\in\Bbb N$ $\begin{cases} e_L=1\\ \lfloor n\cdot r\rfloor\star_L\lfloor m\cdot r\rfloor=\lfloor (n\star m)\cdot r\rfloor\\ (\lfloor n\cdot r\rfloor)^{-1}=\lfloor n^{-1}\cdot r\rfloor\qquad\text{that}\quad n\star n^{-1}=1\\ L=\langle\lfloor u\cdot r\rfloor\rangle=\langle\lfloor v\cdot r\rfloor\rangle\end{cases}$. Guess $1$: $\prod_{n=1}^{\infty}\lfloor n\cdot r\rfloor=\lfloor 1\cdot r\rfloor\star\lfloor 2\cdot r\rfloor\star\lfloor 3\cdot r\rfloor\star...\in\Bbb N$. The conjecture generalized: if $r$ is a positive irrational number and $h$ is a real number, then each of the sets $\{\text{floor}(n\cdot r+h)\mid n\in\Bbb N\}$, $\{\text{round}(n\cdot r+h)\mid n\in\Bbb N\}$, and $\{\text{ceiling}(n\cdot r+h)\mid n\in\Bbb N\}$ contains infinitely many primes. Theorem $2$: If $(\Bbb N,\star)$ is a cyclic group with identity element $e=1$ & generators $u,v$ & $r$ is a positive irrational number & $h\in\Bbb R$ then $G=\{\lfloor n\cdot r+h\rfloor\mid n\in\Bbb N\}$ is another cyclic group with: $\forall m,n\in\Bbb N$ $\begin{cases} e_G=\lfloor r+h\rfloor\\ \lfloor n\cdot r+h\rfloor\star_G\lfloor m\cdot r+h\rfloor=\lfloor (n\star m)\cdot r+h\rfloor\\ (\lfloor n\cdot r+h\rfloor)^{-1}=\lfloor n^{-1}\cdot r+h\rfloor\qquad\text{that}\quad n\star n^{-1}=1\\ G=\langle\lfloor u\cdot r+h\rfloor\rangle=\langle\lfloor v\cdot r+h\rfloor\rangle\end{cases}$. Guess $2$: $\prod_{n=k}^{\infty}\lfloor n\cdot r+h\rfloor=\lfloor k\cdot r+h\rfloor\star\lfloor (k+1)\cdot r+h\rfloor\star\lfloor (k+2)\cdot r+h\rfloor\star...\in\Bbb N$ in which $\lfloor k\cdot r+h\rfloor\in\Bbb N$ & $\lfloor (k-1)\cdot r+h\rfloor\lt1$.
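The Beatty-sequence conjecture can at least be probed numerically. A small plain-Python illustration of mine, taking $r=\sqrt2$ as an example value and counting primes among the first $N$ terms $\lfloor n\sqrt2\rfloor$:

```python
import math

def prime_table(x):
    # sieve of Eratosthenes; s[k] == 1 iff k is prime
    s = bytearray([1]) * (x + 1)
    s[0] = s[1] = 0
    for i in range(2, int(x ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return s

r = math.sqrt(2)                 # example irrational with 1 < r < 2
N = 10 ** 4
is_prime = prime_table(int(N * r) + 1)
beatty_primes = [int(n * r) for n in range(1, N + 1) if is_prime[int(n * r)]]
print(len(beatty_primes), beatty_primes[:8])
```

The count keeps growing with $N$, consistent with the conjecture (and with the known results on primes in Beatty sequences).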
Conjectures depending on the new definitions of primes A problem: For each cyclic group on $\Bbb N$ like $(\Bbb N,\star)$, find a new definition of prime numbers matching the operation $\star$ in the group $(\Bbb N,\star)$. $\Bbb N$ is a cyclic group by: $\begin{cases} \forall m,n\in\Bbb N\\ n\star 1=n\\ (2n)\star (2n+1)=1\\ (2n)\star (2m)=2n+2m\\ (2n+1)\star (2m+1)=2n+2m+1\\ (2n)\star (2m+1)=\begin{cases} 2m-2n+1 & 2m+1\gt 2n\\ 2n-2m & 2n\gt 2m+1\end{cases}\\ (\Bbb N,\star)=\langle2\rangle=\langle3\rangle\simeq(\Bbb Z,+)\end{cases}$ In the group $(\Bbb Z,+)$ an element $p\gt 1$ is a prime iff there do not exist $m,n\in\Bbb Z$ such that $p=m\times n$ & $m,n\gt1$; for instance, since $12=4\times3=3+3+3+3$, $12$ isn't a prime, but $13$ is a prime. Now, inherently, there must exist an equivalent definition of prime numbers in $(\Bbb N,\star)$. Prime number isn't an algebraic concept, so we cannot define primes by using isomorphism (though primes can be defined via algebraic equations); but since the Gaussian integers contain all numbers of the form $m+ni,$ $m,n\in\Bbb N$, by using algebraic concepts we can solve some problems in number theory. Question: what is the definition of prime numbers in $(\Bbb N,\star)$? Gaussian moat problem Grimm's conjecture Oppermann's conjecture Legendre's conjecture Conjectures depending on the ring theory An algorithm which makes new integral domains on $\Bbb N$: Let $(\Bbb N,\star,\circ)$ be that integral domain; then the identity element $i$ will correspond with $1$, multiplication of natural numbers will be obtained from multiplication of the integers corresponding with the natural numbers, and of course each natural number like $m$ multiplied by a natural number corresponding with $-1$ will be $-m$ such that $m\star(-m)=1$ & $1\circ m=1$.
for instance $(\Bbb N,\star,\circ)$ is an integral domain with: $\begin{cases} \forall m,n\in\Bbb N\\ n\star 1=n\\ (2n)\star (2n+1)=1\\ (2n)\star (2m)=2n+2m\\ (2n+1)\star (2m+1)=2n+2m+1\\ (2n)\star (2m+1)=\begin{cases} 2m-2n+1 & 2m+1\gt 2n\\ 2n-2m & 2n\gt 2m+1\end{cases}\\1\circ m=1\\ 2\circ m=m\\ 3\circ m=-m\qquad\text{that}\quad m\star (-m)=1\\ (2n)\circ(2m)=2mn\\ (2n+1)\circ(2m+1)=2mn\\ (2n)\circ(2m+1)=2mn+1\end{cases}$ Question $1$: Is $(\Bbb N,\star,\circ)$ a unique factorization domain, or UFD? What are the irreducible elements in $(\Bbb N,\star,\circ)$? Question $2$: How can we make a UFD on $\Bbb N$? Question $3$: Under the usual total order on $\Bbb N$, do there exist an integral domain $(\Bbb N,\star,\circ)$ and a Euclidean valuation $v:\Bbb N\setminus\{1\}\to\Bbb N$ such that $(\Bbb N,\star,\circ,v)$ is a Euclidean domain? No. Guess $1$: For each integral domain $(\Bbb N,\star,\circ)$ there exists a total order on $\Bbb N$ and a Euclidean valuation $v:\Bbb N\setminus\{1\}\to\Bbb N$ such that $(\Bbb N,\star,\circ,v)$ is a Euclidean domain. Professor Jeffrey Clark Lagarias advised me that one can apply a group structure on $\Bbb N\cup\{0\}$ instead of only $\Bbb N$, and now I see his plan is useful for field theory; now suppose we apply the two algorithms above on $\Bbb N\cup\{0\}$, so the identity element of the group $(\Bbb N,\star)$ of the first algorithm is $0$, corresponding with $0$.
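The example just above is again transport of structure, now of the whole ring $(\Bbb Z,+,\times)$: reading off the bijection $f:\Bbb Z\to\Bbb N$ with $f(0)=1$, $f(k)=2k$ and $f(-k)=2k+1$ for $k>0$ (my reading of the tables, so treat the closed form as an assumption), both $\star$ and $\circ$ are recovered:

```python
def f(k):
    # assumed bijection Z -> N implicit in the tables: 0 -> 1, k -> 2k, -k -> 2k+1 (k > 0)
    if k == 0:
        return 1
    return 2 * k if k > 0 else -2 * k + 1

F_INV = {f(k): k for k in range(-4000, 4001)}   # partial inverse table

def star(a, b):        # transported addition
    return f(F_INV[a] + F_INV[b])

def circ(a, b):        # transported multiplication
    return f(F_INV[a] * F_INV[b])
```

One checks $1\circ m=1$, $2\circ m=m$, that $3\circ m$ is the $\star$-inverse of $m$, and distributivity of $\circ$ over $\star$ on a range — so the ring axioms hold by construction.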
Question $4$: If $(\Bbb N\cup\{0\},\star,\circ)$ is a UFD then what are the irreducible elements in $(\Bbb N\cup\{0\},\star,\circ)$, and is $(\Bbb Q^{\ge0},\star_1,\circ_1)$ a field by: $\begin{cases} \forall m,n,u,v\in\Bbb N\cup\{0\},\,\,n\neq0\neq v\\ e_1=0,\qquad i_1=1\\ {m\over n}\star_1{u\over v}=\frac{(m\circ v)\star(u\circ n)}{n\circ v}\\ {m\over n}\circ_1{u\over v}=\frac{m\circ u}{n\circ v}\\ ({m\over n})^{-1}={n\over m}\,\qquad m\neq0\\ -({m\over n})={-m\over n}\qquad m\star(-m)=0\end{cases}$• Algebraic theories on the positive numbers help us solve some open problems concerning positive numbers. Question $5$: Is $(\Bbb N\cup\{0\},\star,\circ)$ a UFD by: $\begin{cases} \forall m,n\in\Bbb N\\ e=0\\ (2m-1)\star(2m)=0\\ (2m)\star(2n)=2m+2n\\ (2m-1)\star(2n-1)=2m+2n-1\\ (2m)\star(2n-1)=\begin{cases} 2m-2n & 2m\gt 2n-1\\ 2n-2m-1 & 2n-1\gt 2m\end{cases}\\i=1\\ 0\circ m=0\\ 2\circ m=-m\quad m\star(-m)=0\\ (2m)\circ(2n)=2mn-1\\ (2m-1)\circ(2n-1)=2mn-1\\ (2m)\circ(2n-1)=2mn\end{cases}$ and what are the irreducible elements in $(\Bbb N\cup\{0\},\star,\circ)$, and also is $(\Bbb Q^{\ge0},\star_1,\circ_1)$ a field by: $\begin{cases} \forall m,n,u,v\in\Bbb N\cup\{0\},\,\,n\neq0\neq v\\ e_1=0,\qquad i_1=1\\{m\over n}\star_1{u\over v}=\frac{(m\circ v)\star(u\circ n)}{n\circ v}\\ {m\over n}\circ_1{u\over v}=\frac{m\circ u}{n\circ v}\\ ({m\over n})^{-1}={n\over m}\,\qquad m\neq0\\ -({m\over n})={-m\over n}\qquad m\star(-m)=0\end{cases}$ • Conjecture $1$: Let $x$ be a positive real number, and let $\pi(x)$ denote the number of primes that are less than or equal to $x$; then $$\lim_{x\to\infty}\frac{x-\pi(x)}{\pi(e^u)}=1,\quad u=\sqrt{2\log(x\log x-x)}\,.$$ Answer given by $@$Jan-ChristophSchlage-Puchta from stackexchange site: The conjecture is obviously wrong. The numerator is at least $x/2$, the denominator is at most $e^u$, and $u\lt2\sqrt{\log x}$, so the limit is $\infty$.
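The refutation is easy to confirm numerically: already at moderate $x$ the ratio in Conjecture 1 is far above $1$, while the choice $f(x)=x\log x$ (suggested in the answer to Conjecture 2 below) brings $(x-\pi(x))/\pi(f(x))$ close to $1$. A rough plain-Python check of mine:

```python
import math

def prime_table(x):
    # sieve of Eratosthenes; s[k] == 1 iff k is prime
    s = bytearray([1]) * (x + 1)
    s[0] = s[1] = 0
    for i in range(2, int(x ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return s

x = 10 ** 5
fx = x * math.log(x)                       # the choice f(x) = x log x
s = prime_table(int(fx) + 1)

def pi(t):                                 # prime-counting function via the sieve
    return sum(s[: int(t) + 1])

u = math.sqrt(2 * math.log(x * math.log(x) - x))
bad = (x - pi(x)) / pi(math.exp(u))        # Conjecture 1's ratio: nowhere near 1
good = (x - pi(x)) / pi(fx)                # with f(x) = x log x: close to 1
print(bad, good)
```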
Problem $1$: Find a function $f:\Bbb R\to\Bbb R$ such that $\lim_{x\to\infty}\frac{x-\pi(x)}{\pi(f(x))}=1$. The prime number theorem and its extensions, algebraic forms, and corollaries allow us, via the concept of infinity, to reach some results. The properties of the prime numbers are stored in the whole of the natural numbers, including $\infty$, and not in any finite subset of $\Bbb N$; hence we can know them only at $\infty$, which the prime number theorem provides. But what does a cognition of the prime numbers mean? I think, according to the distribution of the prime numbers, a cognition is meaningful only at $\infty$; this function $f$ can be such a cognition, but only at $\infty$, because we have: $$\lim_{x\to\infty}\frac{x-\pi(x)}{\pi(f(x))}=1=\lim_{x\to\infty}\frac{f(x)-\pi(f(x))}{\pi(f(f(x)))}=\lim_{x\to\infty}\frac{f(f(x))-\pi(f(f(x)))}{\pi(f(f(f(x))))}=...$$ and I guess $f$ is of the form $e^{g(x)}$, in which $g:\Bbb R\to\Bbb R$ is a radical logarithmic function or probably a radical logarithmic series. Conjecture $2$: Let $h:\Bbb R\to\Bbb R,\,h(x)=\frac{f(x)}{(\log x-1)\log(f(x))}$; then $\lim_{x\to\infty}{\pi(x)\over h(x)}=1$. Answer given by $@$Wojowu from stackexchange site: Since $x−\pi(x)\sim x$, you want $\pi(f(x))\sim x$, and $f(x)=x\log x$ works; and let $u=\log(x\log x)$. Problem $2$: Based on the prime number theorem, very large prime numbers are equivalent to numbers of the form $n\cdot\log n,\,n\in\Bbb N$; hence I think a test could be made to check the correctness of some conjectures or problems relating to the prime numbers, and maybe some functions such as $h$ prepare it! Question $6$: If $p_n$ is the $n$-th prime number, then does $$\lim_{n\to\infty}\frac{p_n}{e^{\sqrt{2\log n}}\over (\log n-1)\sqrt{2\log n}}=1\,?$$ Answer given by $@$ToddTrimble from stackexchange site: The numerator is asymptotically greater than $n$, and the denominator is asymptotically less.
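The equivalence behind Problem 2, $p_n\sim n\log n$, can be checked directly with a sieve; the slow convergence of the ratio is visible even at $n=10^5$. A quick sketch of mine:

```python
import math

def first_primes(n):
    # Rosser's bound p_n < n (log n + log log n) for n >= 6 sizes the sieve
    bound = int(n * (math.log(n) + math.log(math.log(n)))) + 10
    s = bytearray([1]) * (bound + 1)
    s[0] = s[1] = 0
    for i in range(2, int(bound ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, bound + 1, i)))
    return [i for i, b in enumerate(s) if b][:n]

n = 10 ** 5
p_n = first_primes(n)[-1]
print(p_n, p_n / (n * math.log(n)))    # the ratio decreases toward 1 very slowly
```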
Parallel universes An algorithm that makes new cyclic groups on $\Bbb Z$: Let $(\Bbb Z,\star)$ be that group; first write the integers as a sequence starting from $0$, and then write the integers in a fixed sequence below it, and let the identity element $e=0$ correspond with $0$ and two generators $m$ & $n$ correspond with $1$ & $-1$, so we have $(\Bbb Z,\star)=\langle m\rangle=\langle n\rangle$; for instance: $$0,1,2,-2,-1,3,4,-4,-3,5,6,-6,-5,7,8,-8,-7,9,10,-10,-9,11,12,-12,-11,13,14,-14,-13,...$$ $$0,1,-1,2,-2,3,-3,4,-4,5,-5,6,-6,7,-7,8,-8,9,-9,10,-10,11,-11,12,-12,13,-13,14,-14,...$$ then, regarding the sequence above, find an even rotation number, which for this sequence is $4$ (or $2k$); hence equations should be written with modulus $2$ (or $k$); then consider $2m-1,2m,-2m+1,-2m$ (the general form is: $km,km-1,km-2,...,km-(k-1),-km,-km+1,-km+2,...,-km+(k-1)$) and make a table of products of those $4$ (or $2k$) elements. While writing equations, note that if an equation is right for the given numbers it will be right generally for other numbers too, and of course if the integers corresponding with two numbers don't have the same signs then the product will be a piecewise-defined function; for example, $7\star(-10)=2$ $=(2\times4-1)\star(-2\times5)$ because $7+(-9)=-2,\,7\to7,\,-9\to-10,\,-2\to2$, which implies $(2m-1)\star(-2n)=2n-2m$ where $2n\gt 2m-1$. Of course, it is better that members' inverses be defined first; for example, since $7+(-7)=0,\,7\to7,\,-7\to-8$, we get $7\star(-8)=0$, which shows $(2m-1)\star(-2m)=0$; with a little addition and multiplication all the equations are obtained simply, which for this example is: $\begin{cases} \forall t\in\Bbb Z,\quad t\star0=t\\ \forall m,n\in\Bbb N\\ (2m-1)\star(-2m)=0=(-2m+1)\star(2m)\\ (2m-1)\star(2n-1)=2m+2n-2\\ (2m-1)\star(2n)=\begin{cases} 2m-2n-1 & 2m-1\gt2n\\ 2m-2n-2 & 2n\gt 2m-1\end{cases}\\ (2m-1)\star(-2n+1)=2m+2n-1\\ (2m-1)\star(-2n)=\begin{cases} 2n-2m+1 & 2m-1\gt2n\\ 2n-2m & 2n\gt2m-1\end{cases}\\
(2m)\star(2n)=2m+2n\\ (2m)\star(-2n+1)=\begin{cases} 2m-2n+1 & 2n-1\gt2m\\ 2m-2n & 2m\gt2n-1\end{cases}\\ (2m)\star(-2n)=-2m-2n\\ (-2m+1)\star(-2n+1)=-2m-2n+1\\ (-2m+1)\star(-2n)=\begin{cases} 2m-2n+1 & 2m-1\gt2n\\ 2m-2n & 2n\gt2m-1\\ 1 & m=n\end{cases}\\ (-2m)\star(-2n)=2m+2n-2\\ \Bbb Z=\langle1\rangle=\langle-2\rangle\end{cases}$ An algorithm which makes new integral domains on $\Bbb Z$: Let $(\Bbb Z,\star,\circ)$ be that integral domain; then the identity element $i$ will correspond with $1$, and multiplication of integers will be obtained from multiplication of the corresponding integers, in the sense that if $t:\Bbb Z\to\Bbb Z$ is the bijection that maps the top row onto the bottom row respectively (for instance, in the example above one sees $t(2)=-1$ & $t(-18)=18$), then we can write the laws by using $t$, such as $(-2m+1)\circ(-2n)=$ $t(t^{-1}(-2m+1)\times t^{-1}(-2n))=t((2m)\times(-2n+1))=t(-2\times(2mn-m))=$ $2\times(2mn-m)=4mn-2m$; and of course each integer like $m$ multiplied by an integer corresponding with $-1$ will be $n$ such that $m\star n=0$ & $0\circ m=0$; for instance, $(\Bbb Z,\star,\circ)$ is an integral domain with: Question $1$: Is $(\Bbb Z,\star,\circ)$ a UFD? What are the irreducible elements in $(\Bbb Z,\star,\circ)$?
is $(\Bbb Q,\star_1,\circ_1)$ a field by: $\begin{cases} \forall m,n,u,v\in\Bbb Z,\,\,n\neq0\neq v\\ e_1=0,\qquad i_1=1\\{m\over n}\star_1{u\over v}=\frac{(m\circ v)\star(u\circ n)}{n\circ v}\\ {m\over n}\circ_1{u\over v}=\frac{m\circ u}{n\circ v}\\ ({m\over n})^{-1}={n\over m}\,\qquad m\neq0\\ -({m\over n})={w\over n}\qquad\,\,\,m\star w=0\end{cases}$ • Question $2$: If $(\Bbb Z,\star,\circ)$ is a UFD then what are the irreducible elements in $(\Bbb Z,\star,\circ)$, and is $(\Bbb Q,\star_1,\circ_1)$ a field by: $\begin{cases} \forall m,n,u,v\in\Bbb Z,\,\,n\neq0\neq v\\ e_1=0,\qquad i_1=1\\ {m\over n}\star_1{u\over v}=\frac{(m\circ v)\star(u\circ n)}{n\circ v}\\ {m\over n}\circ_1{u\over v}=\frac{m\circ u}{n\circ v}\\ ({m\over n})^{-1}={n\over m}\,\qquad m\neq0\\ -({m\over n})={w\over n}\qquad\,\,\,m\star w=0\end{cases}$• Question $3$: Under the usual total order on $\Bbb Z$, do there exist an integral domain $(\Bbb Z,\star,\circ)$ and a Euclidean valuation $v:\Bbb Z\setminus\{0\}\to\Bbb N$ such that $(\Bbb Z,\star,\circ,v)$ is a Euclidean domain? No. Guess $1$: For each integral domain $(\Bbb Z,\star,\circ)$ there exists a total order on $\Bbb Z$ and a Euclidean valuation $v:\Bbb Z\setminus\{0\}\to\Bbb N$ such that $(\Bbb Z,\star,\circ,v)$ is a Euclidean domain. Gauss circle problem Please just insert your comment here! Alireza Badali 20:47, 15 April 2018 (CEST) How to Cite This Entry: Musictheory2math. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Musictheory2math&oldid=43365 Retrieved from "https://www.encyclopediaofmath.org/index.php?title=User_talk:Musictheory2math&oldid=43365"
Direct impacts of landslides on socio-economic systems: a case study from Aranayake, Sri Lanka E. N. C. Perera1, D. T. Jayawardana2, P. Jayasinghe3, R. M. S. Bandara3 & N. Alahakoon4 Geoenvironmental Disasters volume 5, Article number: 11 (2018) Landslides are a controversial issue worldwide and cause a wide range of impacts on the socio-economic systems of the affected community. However, empirical studies of affected environments remain inadequate for prediction and decision making. This study aims to estimate the direct impact of a massive landslide that occurred around areas with Kandyan home gardens (KHGs) in Aranayake, Sri Lanka. Primary data were gathered by structured questionnaire from residents of the directly affected regions; the questionnaire data were combined with spatial data to acquire detailed information about the livelihoods and hazards at the household level. Satellite images were used to find affected land use and households prior to the landslide. Further, secondary data were obtained to assess the recovery cost. A multiple regression model was established to estimate the economic value of the home gardens. Field surveys and satellite images revealed that land-use practices during the past decades have caused environmental imbalance and have led to slope instability. The results reveal that 52% of household income is generated by the KHG and that the income level highly depends on the extent of the land (R2 = 0.85, p < 0.05). The extent of destroyed land that was obtained from the satellite images and the age of the KHG were used to develop a multiple regression model to estimate the economic loss of the KHG. It was found that the landslide-affected region had been generating approximately US$ 160,000 annually from their home gardens toward the GDP of the country. This study found that almost all houses in the area were at risk of further sliding, and all of them were partially or entirely affected by the landslide.
Among the affected households, 60% (40 houses) had completely collapsed, whereas 40% (27 houses) were partially damaged. Because of these circumstances, the government must provide US$ 40,369 to recover the fully and partially damaged households. Finally, a lack of awareness and unplanned garden cultivation were the main contributing factors that increased the severity of the damage. Natural disasters are complex detrimental events that occur entirely beyond the control of humans (Alimohammadlou et al., 2013). Natural disasters can be classified based on the speed of onset; some disasters occur within seconds (landslides), minutes (tornadoes) or hours (flash floods and tsunamis) and others may take months or years to manifest themselves (droughts). Furthermore, rapid-onset disasters such as landslides have a massive impact on human life and property. Landslides occur over a wide range of velocities and are recognized as the third most crucial natural disaster worldwide (Zillman, 1999). Landslides are usually triggered without warning, giving people little time to evacuate. Therefore, the direct impact of landslides on the socio-economic system is crucial (Christopher, 2016). Landslides are responsible for significant loss of life and injury to people and their livestock as well as damage to infrastructure, agricultural lands and housing (Schuster and Fleming, 1986; JRC, 2003; Blöchl and Braun, 2005; Guzzetti et al., 2012). Economic losses from landslides have been increasing over recent decades (Petley et al., 2005; Guha-Sapir et al., 2011; Guzzetti, 2012), mainly due to increasing development and investment in landslide-prone areas (Bandara et al., 2013; Petley et al., 2005). There are few studies that have attempted to quantify the impact of landslides on socio-economic systems (Mertens et al., 2017). In Sri Lanka, the socio-economic impacts of landslides have not been studied adequately.
Landslides in Sri Lanka were considered a minor disaster up until the late twentieth century (Rathnaweera and Nawagamuwa, 2013). For instance, the annual average number of landslides was less than 50 until the year 2002. However, the frequency of landslide occurrence rapidly increased after 2003. Studies undertaken by the National Building Research Organization of Sri Lanka (NBRO) revealed that the number of landslides increased due to increasing human intervention such as unplanned cultivation, non-engineered construction, and deforestation. In general, most of the socio-economic impact assessments on landslides are limited due to a lack of data (Deheragoda, 2008). Losses from landslides can be estimated through the integration of field investigation, socio-economic surveys, and remote sensing. Moreover, recent studies have revealed the complexity involved in the quantification of the direct impact that landslides have on socio-economic systems (Mertens et al., 2016). Agroforestry makes a significant contribution to the socio-economic system of rural communities in Sri Lanka. In general, agro-forests are located on slopes, and most are vulnerable to landslides. Because of the financial benefits of agro-forestry and home gardens, the rural community is engaged in many agricultural activities, which means the land is at higher risk. This study differs from other recent studies on the impact of landslides in many ways. First, it provides an overview of the landslide. Second, it focuses on the use of integrated remote sensing to quantify socio-economic losses in the agro-forest system named the Kandyan Home Garden (KHG) system, estimating the direct impact of a massive landslide on household income and property damage as a case study. Physical setting A tragic landslide resulted in a catastrophic situation, burying parts of the two rural villages of Elangapitiya and Pallebage. Those villages belong to the Aranayake divisional secretariat in Kegalle, Sri Lanka (Fig.
1). Aranayake is a mountainous region in the wet zone of the country. The area receives heavy rains during the rainy periods (May–September, southwest monsoon; October–November, inter-monsoon) and bright sunshine during the dry season (March–December). The average annual rainfall ranges from 2500 mm to 3000 mm (Jayawardana et al., 2014). Most rainfall is usually received during the monsoons. However, the average rainfall amount varies during the cyclone season.

Figure: land use of the landslide-affected GN divisions (Debathgama, Pallebage, and Elangapitiya); land use based on 1:50,000 maps.

General description of the Aranayake landslide

In the recent past, there has been no record of major landslides in the region. Therefore, people tend to use the slopes for unplanned cultivation with poor surface-water management and unplanned construction. Consequently, people have little awareness of the possibility of disaster. However, evidence of paleo-landslides can be observed throughout the region. Paleo-landslides appear to have been active approximately 500–1000 years ago (Jayasinghe, 2016); hence, people living in Aranayake have little experience of landslides in their lifetimes. In fact, the presence of old landslides is a good indication that the area has unstable geology and that more landslides are likely in the future. The Aranayake region experienced 435 mm of cumulative rainfall from 14-May-2016 to 17-May-2016 (~72 h). The exceptionally high rainfall was mainly due to the development of a low-pressure zone around Sri Lanka caused by a tropical cyclone in the Indian Ocean. This sustained torrential rainfall triggered a landslide on 17-May-2016 at approximately 16:30–17:00 h. The landslide buried houses and property and resulted in massive casualties. According to field observations, this was a debris-flow landslide with a very complex translational movement.
Socio-economic background of the area

The population of Aranayake is approximately 68,464, with a 1:1 male-to-female ratio. Overall, 47% of the residents are permanently or temporarily employed; dependents account for 53% of the total population. More than 50% of the population is in the labor force, and most are engaged in home garden and plantation agriculture. Although recorded incomes are low, people have alternative income sources and food security from their home gardens.

The traditional home gardens and agroforestry

Aranayake traditional home gardens and the agroforestry system clearly reflect the typical KHG system in the wet zone of the country. Home gardens in the Aranayake region have a functional relationship with the occupants in economic, biophysical and social terms. The Aranayake KHG consists of a mixture of annual and perennial crops, such as tea, rubber, paddy, cardamom, black pepper, jackfruit, coconut, and cocoa. The crops are not grown according to any specific pattern and appear in a random, intimately mixed arrangement. In the typical pattern of KHGs, tea is grown on steep slopes, rubber on moderate terrain, and paddy on flat terrain. In addition, minor crops can be seen near households. The most fundamental social benefit of KHGs is their direct contribution to a secure household food supply. The livelihood benefits of KHGs, however, extend well beyond the food supply. In general, selling excess KHG production significantly improves the financial status of the community. The KHG system was significantly damaged by the Aranayake landslide, reducing the income and food security of the region.

Field investigation

Several exploratory field investigations were conducted after the landslide to obtain an overall view. A calibrated handheld GPS unit was used to collect all field information.
To analyze the related socio-economic conditions during the field visits, detailed studies were done on human settlement and topography.

Sampling and primary data collection

Primary data were gathered by structured questionnaire from two directly affected Grama Niladhari (GN) divisions (Fig. 1). Before data collection, a pilot survey was conducted on 30 randomly selected houses, and the questionnaire was revised according to the responses. The survey mainly covered income sources, social capital, household demography, household type, living conditions, land-use type, KHG production and landslide experience. It was decided that data collection needed to be maintained at high precision with a 95% confidence level, in accordance with the standards of the Department of Census and Statistics of Sri Lanka. Sampling was done using a proportionate stratified random sampling method in both villages (Kumar, 2007). Additionally, the following formula was used to determine the sample size (Eq. (1); Mathers et al., 2007).

$$ n = \left( Z_{\alpha/2} \times \sigma / E \right)^2 $$

where n = sample size, Z_{α/2} = the critical value for the chosen confidence level, σ = standard deviation, and E = tolerable error. According to the equation, the estimated sample size was 120 under a 90% accuracy level. In this study, 127 households were selected as the primary data source (592 individuals).

Secondary data collection and analysis

Secondary information and maps were predominantly used to evaluate the socio-economic status before the landslide. Socio-economic data were obtained from the recently updated database in the Aranayake Divisional Secretariat. The 1:10,000 land-use data were obtained from the Land-use and Policy Planning Department (LUPPD) of Sri Lanka. The collected information and maps were used to evaluate socio-economic conditions. The present study integrates the socio-economic and GIS data for the landslide impact assessment.
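Equation (1) is straightforward to evaluate; a minimal Python sketch follows, where the Z-value, standard deviation, and tolerable error are hypothetical illustrative inputs (the paper does not report the σ and E it used):

```python
import math

def sample_size(z: float, sigma: float, error: float) -> int:
    """Minimum sample size n = (Z_{alpha/2} * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / error) ** 2)

# Hypothetical inputs: Z = 1.96 (95% confidence), sigma = 50, E = 9
n = sample_size(1.96, 50, 9)
print(n)  # 119 with these illustrative numbers
```

Rounding up rather than truncating ensures the chosen sample at least meets the precision target.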
Socio-economic data were analyzed by descriptive statistics, the chi-squared test, and correlation analysis using SPSS 19 software. Spatial data processing and analysis were carried out in ArcGIS 10.2.

Landslide inventory and affected area mapping

Landslide boundary demarcation and mapping are essential to study the extent of damage (Guzzetti, 2006). During the past decades, the use of satellite information for landslide investigation has increased significantly (Guzzetti et al., 2012). For instance, landslide damage in forested terrain has been identified from high-resolution Google-Earth images with the help of many other attributes (Guzzetti et al., 2012; Qiong et al., 2013). Cloud-free Google-Earth images of the Aranayake landslide area were acquired. In the images, the landslide had a clear boundary; thus, the boundary could be demarcated accurately. In addition, ground-truth GPS locations were incorporated to minimize errors. The collected information was converted to polygon data using geographical information systems (GIS; Raghuvanshi et al., 2015). In addition to boundary demarcation, Google-Earth data are highly accurate for household mapping (Escamilla et al., 2014). Therefore, the affected households were mapped using Google-Earth images from before the incident and superimposed on the inventory map. In this exercise, the locations of the remaining households were also mapped for cross-validation. The affected area map was developed by overlaying the landslide inventory map with different thematic layers such as land-use type, building distribution, transportation networks and water streams in the region. The affected area map was then used to find the area covered by each land-use category. In addition, damage values for different land-use types were estimated using the affected area map combined with a unit value for each land-use category.
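Turning a demarcated boundary polygon into an area estimate is the core GIS step described above; a self-contained sketch using the shoelace formula on projected (metre) coordinates, with a hypothetical digitized boundary, illustrates the idea:

```python
def polygon_area_m2(coords):
    """Shoelace formula: area of a simple polygon given (x, y) vertices in metres."""
    area = 0.0
    n = len(coords)
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical digitized boundary: a 300 m x 200 m rectangle
boundary = [(0, 0), (300, 0), (300, 200), (0, 200)]
print(polygon_area_m2(boundary) / 10_000)  # area in hectares: 6.0
```

In practice a GIS package (e.g., ArcGIS, as used in the study) performs this on the projected polygon layer; the formula is shown only to make the computation concrete.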
Model economic value of KHG and direct loss estimation

There is no direct method to analyze the economic value of home gardens. Generally, it is estimated with prediction models. Land size and number of years in cultivation are the typical parameters used for estimating the value (Mohan et al., 2006). Economic values also quantify the benefit provided by home gardens (Galhena et al., 2013; Langellotto, 2014). Following the literature, the multiple regression model below was used to estimate the economic value of KHG production destroyed by the landslide (Eq. (2)). The model was established using the primary data obtained from the affected villages.

$$ Y = \alpha + \beta_1 X_1 + \beta_2 X_2 $$

where Y = economic value of a home garden; α, β1 and β2 = regression coefficients; X1 = land area in m²; and X2 = number of years in cultivation. Direct loss from the landslide was determined by assessing the loss of agricultural land and the damage to cultivation and households. Further, all replacement costs for landslide-related damage were considered in the loss estimation. The results from the social surveys revealed that the affected villages of Aranayake (Elangapitiya and Pallebage) are agriculturally based rural areas depending on KHGs (Fig. 1). The unexpected landslide completely destroyed a large land area and was one of the largest landslides recorded in Sri Lankan history. Fourteen families were completely buried, and 127 lives were lost in this landslide. Such outcomes have been identified as a common feature of many massive landslides (Alimohammadlou et al., 2013). The village community has a middle income level relative to the per-capita income of the country (Table 1). Descriptive statistics of the collected primary data on total monthly income, the contribution of KHGs to monthly income, savings and expenditure with respect to the age of the KHG are summarized in the table.
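A model of the form of Eq. (2) can be fitted by ordinary least squares; a minimal sketch with NumPy on synthetic data (the coefficients and sample values below are hypothetical, not the survey data):

```python
import numpy as np

# Hypothetical survey records: land area (m^2), years in cultivation, value (US$)
x1 = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0])
x2 = np.array([5.0, 10.0, 15.0, 20.0, 30.0])
y = 2000.0 + 10.0 * x1 + 20.0 * x2  # generated exactly from known coefficients

# Design matrix [1, X1, X2] for Y = alpha + beta1*X1 + beta2*X2
A = np.column_stack([np.ones_like(x1), x1, x2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # recovers approximately [2000, 10, 20]
```

The study fitted the actual model in SPSS; this sketch only demonstrates the least-squares mechanics with an intercept column added to the design matrix.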
However, income also depends on the diversity of KHGs. It is clear that the largest land area was brought into cultivation during the past two decades (n = 54; Table 1). Therefore, it can be concluded that the land use of the region changed significantly during that period. This land-use change contributed to increasing the landslide vulnerability of the region. It was found that the landslide-affected region had been generating approximately US$ 160,000 annually from its home gardens and plantations (tea, rubber and paddy) toward GDP. Thus, the results revealed that both the social and economic systems were strongly affected by the landslide, especially the KHGs in the region (Fig. 1 and Table 1).

Table 1: Summary of the monthly economic status of households in the Aranayake landslide area. Data obtained from a structured questionnaire survey.

Overview of the landslide

During the field visits, it was found that a huge amount of rock and debris was piled up at the base of the mountain, largely consisting of gneissic boulders more than 10 m in diameter. In general, most of the human settlements are spread around the affected base area. The average width around the landslide scarp is approximately 350 m, the height is approximately 50–75 m, and the widest part of the slide is approximately 600 m. Affected home gardens and natural vegetation cover could be observed in the surrounding area and in the middle of the slide; a few houses were not damaged. In addition, a quite rapid and muddy groundwater flow could be observed on the right side of the landslide, which is still flowing and forming small water streams. The debris at the toe of the landslide could be split into two regions: the left side is approximately 75–125 m in width, and the right side is approximately 350–450 m (Fig. 2). Many houses and home gardens were located on the damaged slope, which is steeper than 35 degrees (Figs. 3 and 4).
The entire area had been cultivated, with minor export crops (cloves, cardamom, and pepper) and fruits being common in KHGs (Perera and Rajapakse, 1991). Most of the access roads were made of concrete or asphalt.

Figure: escarpment of the landslide, evidence of human intervention on and above the escarpment, and the difficult translational path of the debris flow.

Figure: the site before the landslide, just after the incident with the settlements destroyed by the landslide, and 9 months after the incident.

Figure: outline of the landslide superimposed onto the land use of the region, and the distribution of different land-use classes in the affected area.

The major landslide-contributing factors were identified by detailed assessment. The escarpment slope of the mountain is made up of metamorphic rocks with a high joint/fracture density and a thin soil overburden. The weathering condition of the exposed slide surface of the basement rock revealed weakening along the mica- and feldspar-rich layers. Mica and feldspar in the hornblende and granitic gneiss can weaken the surface through intensive chemical weathering (Sajinkumar et al., 2011). In addition, due to unplanned tea cultivation and KHGs on the upper region of the slope, rainwater infiltration was quite significant. Consequently, high pore-water pressure built up by the heavy, prolonged rainfall generated strong destabilizing forces on the slope (Matsuura et al., 2008). The excess pore-water pressure within the soil could have reduced the shear strength and allowed boulders to float on the moving debris (Kang et al., 2017).

Awareness of landslide mitigation

According to an eyewitness, there was heavy rainfall a few days before the landslide. The mountain stood calm and quiet during this rain, and no one noticed any clue of a possible disaster. It is well known that heavy rain is the main trigger of massive landslides on vulnerable slopes (Gariano and Guzzetti, 2016).
The villagers were alerted to a possible incident but were not evacuated because there was no appropriate evacuation plan. Permanent evacuation from a possible landslide area is usually avoided due to the misleading behavior of officials during the relocation of residences. Despite this, it is necessary to practice successful emergency evacuation to protect the community (Huang et al., 2015). The evacuation of people is often a combined effort of the relevant government officials; however, no such system is commonly practiced in Sri Lanka. Only the NBRO issues warning and awareness messages to the general public during heavy rainfall. During the early hours of the day the landslide occurred, cracks with muddy groundwater appeared inside one house, indicating a sign of the landslide. However, not all the villagers were fully aware of this. In contrast, the social survey revealed that 80% of the residents were not ready to leave their homes, mainly due to the wealth of their KHGs and traditional beliefs. Therefore, it is necessary to build specific awareness programs for such socio-economic systems and to educate residents on natural warning signs and the severity of disasters (Bhatia, 2013). Further, the community should have a proper evacuation plan and an integrated emergency management mechanism (Dorasamy, 2017). Worldwide experience has led to community-based mitigation activities for landslides (Shum and Lam, 2011). However, essential mitigation activities have not been implemented in many landslide-prone areas of Sri Lanka. Community-based short-term mitigation measures such as surface drainage control, application of erosion controls and dewatering of elevated groundwater can be implemented. These essential mitigation measures would help control infiltration in the KHGs and stabilize the prevailing conditions (Pushpakumara et al., 2012).
Impact of landslides on rural socio-economic systems

Human settlements are randomly spread along the landslide-affected slope (Figs. 3 and 4), as is widespread in rural Sri Lanka (MHCPU, 1996). Due to the gentle slope conditions, the houses are mainly located in the middle and foot regions. The inventory map reveals that the adverse impact on the middle and foot regions is mainly due to wide-ranging debris movements, a well-known characteristic of debris flows worldwide (Gao and Sang, 2017). The debris flow spread over the different land uses with thicknesses of 1.5–3.5 m. However, the initiation region of the landslide had relatively little impact on human settlements. Therefore, to mitigate possible damage, disaster risk preparation is necessary (Ardaya et al., 2017). More than 3 billion people in the developing world live in rural areas that comprise farming communities (Godoy, 2010). In line with global rural communities, most landslide-prone districts in rural Sri Lanka, such as Aranayake, depend on KHGs. Cash-crop products such as tea and rubber provide a regular source of monthly income in Aranayake. However, it was found that 98% of the tea-growing areas are owned by medium-scale producers. Therefore, tea production makes an indirect contribution to the income of the local community. Conversely, minor export crops in KHGs such as pepper, cloves, and cardamom provide direct additional income (Jacob and Alles, 1987; Perera and Rajapakse, 1991). KHGs not only strengthen the household economy but also sustain food security by providing fruits, vegetables, and rice for consumption. The present study observed multiple social benefits from traditional home gardens, such as improving family health and human capacity, empowering women, and preserving indigenous knowledge and culture.
According to the remote sensing data, land uses such as tea (59%), rubber (18%), home garden (13%) and paddy (10%) covered 33.7, 10.2, 7.2 and 5.8 ha, respectively, within the affected area (Table 2). Temporal analyses revealed that the natural vegetation in the affected region had been removed for plantations and home gardens during the past decades (Figs. 2 and 3). Change in land cover is considered a primary cause of debris-flow slides during intense rainfall (Schneider et al., 2010). Different trees have different root patterns and penetration depths, and they can affect the stability of a slope under different soil conditions (Vergani et al., 2017). Despite the short-term socio-economic gain from changing vegetation, this study reveals that the resulting slope instability is another alarming socio-economic issue. Recent historical data show that minor export crop earnings in the Aranayake area have increased by 215% with the remarkable development of value-added products (MMECP, 2013). Descriptive statistics revealed that 52% of household income comes from KHGs. The average monthly income of Aranayake households is US$ 205, and the mean monthly expenditure is US$ 157 (Table 1). By comparison, national per-capita income per month is US$ 300, with expenditure of US$ 270 (CBSL, 2016). These findings indicate that the income and expenditure of the region are lower than the national average. Nevertheless, low household expenditure is an absolute gain for the community. As a result, savings have increased, with an annual average savings ratio of 12%. Unfortunately, the Aranayake landslide completely destroyed this self-reliant socio-economic system. Additionally, the communities surrounding the landslide are now abandoned, and their inhabitants are living in shelters. Most landslide-prone rural areas of Sri Lanka with similar socio-economic conditions are now at risk (Jacob and Alles, 1987).
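The land-use shares quoted above follow directly from the mapped areas; a quick check in Python, using the hectare figures reported from Table 2:

```python
# Affected areas by land-use category, in hectares (from the remote sensing data)
areas_ha = {"tea": 33.7, "rubber": 10.2, "home_garden": 7.2, "paddy": 5.8}
total = sum(areas_ha.values())  # 56.9 ha affected in total
shares = {k: round(100 * v / total) for k, v in areas_ha.items()}
print(shares)  # {'tea': 59, 'rubber': 18, 'home_garden': 13, 'paddy': 10}
```

The rounded percentages reproduce the figures stated in the text, confirming the internal consistency of the affected-area table.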
Thus, there should be provisions for proper socio-economic development and land-use planning to mitigate landslide disasters in the current environment.

Table 2: Estimated economic value of production from selected land-use categories in the landslide area.

The regression model reveals that mean monthly income has a strong positive correlation with the cultivated land area of individual home gardens (R² = 0.85, p < 0.05; Fig. 5). It can be concluded that farmers with larger farming areas might have higher productivity per unit of land and are encouraged to use the land intensively. Given the damage done by the landslide to the agricultural potential of the region (Tables 1 and 2), farmers who hold more significant land areas are generally willing to conserve the environment, but this was not adequately done in Aranayake. Hence, comprehensive guidelines, especially on groundwater and surface-water management during heavy rainfall, will be needed to protect any slope from massive landslides (Masaba et al., 2017). However, this study recognizes the lack of such knowledge within the farming community.

Figure: linear relationship between mean monthly income and the cultivated land area of individual home gardens (in m²).

Educational background is quite a distinct factor for disaster mitigation. The Aranayake region has educational backgrounds ranging from primary to graduate levels (Fig. 6). The results revealed that, despite the many schools available in the region, the majority of the community (~55%) has ordinary-level (junior high school) qualifications. More than 30% have completed the advanced-level examination (senior high school) and have qualified for government jobs. In general, previous studies indicate a positive relationship between educational level and household income (Saadv and Adam, 2016). However, household income in Aranayake is independent of the level of education (P < 0.05).
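The education-income independence reported above rests on a chi-square test of the kind the authors ran in SPSS; a sketch with SciPy on a hypothetical contingency table (the counts below are illustrative, not the survey data):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = education level (ordinary, advanced, higher),
# columns = income band (low, middle, high); not the actual survey data
observed = [
    [30, 25, 10],
    [18, 20, 9],
    [5, 6, 4],
]
chi2, p, dof, expected = chi2_contingency(observed)
# A large p-value (> 0.05) would indicate income is independent of education
print(round(chi2, 3), dof)
```

For a 3x3 table the test has (3-1)(3-1) = 4 degrees of freedom; the decision rests on comparing the p-value against the chosen significance level.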
This finding may be due to additional financial gain from household farming regardless of education level. This trend may lead to less protection of the natural environment by the rural community. Regardless of their level of education, people in the rural community are less aware of the stability of the prevailing environment and of proper land management than they are concerned with economic benefits. Human activities contribute to changing land-cover types, which increases slope instability and landslide risk (Proper et al., 2014).

Figure: educational levels, as percentages, in the region.

Consequently, increased soil infiltration from poor water management in plantations and KHGs destabilizes the slope (Keith and Broadhead, 2011). Additionally, the survey reveals that 90% of local people do not have even minimal knowledge of the causative factors of landslides or of proper land-use planning for steep slopes.

Regression model of the economic value of KHGs

Despite the interest, economic analyses after massive landslides have rarely been done in Sri Lanka or elsewhere in the world. Hence, there is no established model to assess actual damage. This study focuses on evaluating the level of KHG damage using a regression model (Mohan et al., 2006). Primary information acquired directly from the two affected villages is given in Table 1. These data were used to model the economic value using the proposed model (Eq. (2)). The resulting multiple regression model for the net economic value of KHGs (Y in US$) indicates that land size (X1 in m²) and years in cultivation (X2 in years) are statistically significant (p < 0.05).

$$ Y = 2196 + 10.51 X_1 + 20.840 X_2 $$

Because of the uniform land-use pattern in the region, this model can be used to predict the economic value of landslide-affected home gardens at any location.
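The fitted model can then be applied to an individual garden; a sketch evaluating it for a hypothetical 500 m² garden cultivated for 20 years (the input values are illustrative, not taken from the paper):

```python
def khg_value(area_m2: float, years: float) -> float:
    """Fitted regression Y = 2196 + 10.51*X1 + 20.840*X2 (US$)."""
    return 2196 + 10.51 * area_m2 + 20.840 * years

# Hypothetical garden: 500 m^2, 20 years in cultivation
value = khg_value(500, 20)
print(round(value, 2))  # 7867.8
```

The per-unit-area coefficient dominates, so the predicted value is driven mainly by garden size rather than age.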
The total affected home garden area obtained from the remote sensing data is approximately 72,400 m² (area of KHG = 7.2 ha; Table 2), and from the primary data, the average age of the KHGs was assumed to be 25 years. Therefore, the estimated economic value for the entire extent of KHGs in the region is US$ 4927. This sort of estimate helps to assess the real damage to the socio-economic conditions of the affected area. Additionally, it is useful for evaluating the possible evacuation of affected people and for considering their past socio-economic status in estimating necessary subsidies.

Evaluation of other economic losses

Economic losses in the landslide-affected region are quite significant. The highest income was generated from tea, at US$ 144,769 per year. Of this income, only 2% is shared with the general rural community (US$ 2896). Other cultivation such as rubber, KHGs, and paddy generates US$ 7231, US$ 4864 and US$ 3195 per year, respectively, indicating that the landslide-affected region had been contributing around US$ 160,000 annually to GDP. In addition to this disturbance, emotional losses cannot be calculated in financial terms; nevertheless, if arbitrarily equated, the estimate would run into millions. This study revealed that landslides in rural areas can severely affect household income, as much as in other parts of the world (Msilimba, 2009; Haigh, 2012). Cost estimates for the damaged houses are quite significant. This study found that almost all houses in the area are at risk of further sliding, and all of them were partially or entirely affected by the landslide. Among the affected households, 60% (40 houses) had completely collapsed, whereas 40% (27 houses) were partially damaged. The Department of Valuation of Sri Lanka estimated the value of the collapsed houses at US$ 7806. Partially collapsed houses were assessed according to the level of damage. Ultimately, it was found that the total cost of the damaged houses is approximately US$ 40,369.
The results indicate that the plantations and KHGs on steep slopes are vulnerable to landslides. Landslides have a significant impact on community income sources and households, and high costs are incurred for subsequent rehabilitation and ongoing maintenance. It was further observed that the severity of the impact on household income is highly dependent on the affected land size. In an attempt to compensate for income loss after a landslide, household members in our sample seek self-employment or labor; the income obtained from this employment does not adequately compensate for the income lost due to the landslide. Because of the landslide, most economic activity was abandoned, which is not fully accounted for in the evaluation. This study concluded that removing natural vegetation and plantations causes an imbalanced runoff-to-infiltration ratio, destabilizing the slope. This result reflects that an agriculture- and plantation-based socio-economic system is conducive to landslides, especially in a paleo-landslide environment. The results of the survey show that awareness of landslides and their mitigation is a critical issue for the socio-economic system, and its absence significantly reduces income from KHGs. Based on the findings, two main recommendations can be made to revive the lives of the affected people: create appropriate job opportunities apart from agriculture, and introduce suitable cash crops while considering bioengineering approaches. Integrated spatial data can effectively be used for accurate estimates of the direct losses from landslides, and they can be used in decision-making for the affected socio-economic system. Further, any changes in the frequency, intensity, and exposure to landslides require an economic assessment framework that takes into consideration household income, including the contribution of home gardens.
This framework is important since a proper understanding of possible damage can lead to more effective emergency management and to the development of mitigation and preparedness activities designed to reduce the loss of lives and the associated economic losses.

Alimohammadlou, Y., A. Najafi, and A. Yalcin. 2013. Landslide process and impacts: A proposed classification method. Catena 104: 219–232.

Ardaya, A.B., M. Evers, and L. Ribbe. 2017. What influences disaster risk perception? Intervention measures, flood and landslide risk perception of the population living in flood risk areas in Rio de Janeiro state, Brazil. International Journal of Disaster Risk Science 8 (2): 208–223.

Bandara, R.M.S., and K.M. Weerasinghe. 2013. Overview of landslide risk reduction studies in Sri Lanka. In Landslide science and practice, ed. C. Margottini, P. Canuti, and K. Sassa, 345–352. Berlin, Heidelberg: Springer. International Journal of Landslide Inventory and Susceptibility and Hazard Zoning 1: 489–492.

Bhatia, J. 2013. Landslide awareness, preparedness and response management in India. In Landslide science and practice: Social and economic impact and policies, ed. C. Margottini, P. Canuti, and K. Sassa, 7: 281–290. Berlin, Heidelberg: Springer.

Blöchl, A., and B. Braun. 2005. Economic assessment of landslide risks in the Swabian Alb, Germany - research framework and first results of homeowners' and experts' surveys. Natural Hazards and Earth System Science 5: 389–396.

CBSL (Central Bank of Sri Lanka). 2016. Sri Lanka socio-economic data. Colombo XXXIX: 107.

Christopher, K.S., E. Arusei, and M. Kupti. 2016. The causes and socio-economic impacts of landslides in Kerio Valley, Kenya. Agricultural Science and Soil Sciences 4: 58–66.

Deheragoda, C.K.M. 2008. Social impacts of landslide disaster with special reference to Sri Lanka. Vidyodaya Journal of Humanities and Social Science 2: 133–160.

Dorasamy, M., M. Raman, and M. Kaliannan. 2017.
Integrated community emergency management and awareness system: A knowledge management system for disaster support. Technological Forecasting and Social Change 121: 139–167.

Escamilla, V., M. Emch, and L. Dandalo. 2014. Sampling at the community level by using satellite imagery and geographical analysis, 690–694. New York: World Health Organization.

Galhena, D.H., R. Freed, K.M. Maredia, and G. Mikunthan. 2013. Home gardens: A promising approach to enhance household food security and wellbeing. Journal of Agriculture and Food Security 2: 1–13.

Gao, J., and Y. Sang. 2017. Identification and estimation of landslide-debris flow disaster risk in primary and middle school campuses in a mountainous area of Southwest China. International Journal of Disaster Risk Reduction 10: 402–406.

Gariano, S.L., and F. Guzzetti. 2016. Landslides in a changing climate. Earth-Science Reviews 162: 227–252.

Godoy, D.C., and J. Dewbre. 2010. The economic importance of agriculture for poverty reduction. OECD Food, Agriculture and Fisheries Working Papers, No. 23. OECD Publishing.

Guha-Sapir, D., P. Hoyois, and R. Below. 2011. Annual disaster statistical review 2010: The numbers and trends. Centre for Research on the Epidemiology of Disasters (CRED), Brussels, Belgium.

Guzzetti, F. 2006. Landslide hazard and risk assessment. Ph.D. thesis, Mathematisch-Naturwissenschaftlichen Fakultät, Rheinische Friedrich-Wilhelms-Universität, Bonn, Germany, 389–401.

Guzzetti, F., A.C. Mondini, M. Cardinali, A. Pepe, G. Zeni, P. Reichenbach, and R. Lanari. 2012. Landslide inventory maps: New tools for an old problem. Earth-Science Reviews 112: 42–66.

Haigh, M., and J.S. Rawat. 2012. Landslide disasters: Seeking causes - A case study from Uttarakhand, India, 218–253.

Huang, B., W. Zheng, Z. Yu, and G. Liu. 2015. A successful case of emergency landslide response - Sept. 2, 2014, Shanshucao landslide, Three Gorges Reservoir, China. Geoenvironmental Disasters 2: 1–9.
Jacob, V.J., and W.S. Alles. 1987. Kandyan gardens of Sri Lanka. Agroforestry Systems 5: 123–137.

Jayasinghe, P. 2016. Social geology and landslide disaster risk reduction in Sri Lanka. Journal of Tropical Forestry and Environment 6: 1–13.

Jayawardana, D.T., H.M.T.G.A. Pitawala, and H. Ishiga. 2014. Assessment of soil geochemistry around some selected agricultural sites of Sri Lanka. Environmental Earth Sciences 71: 4097–4106.

JRC EU (Joint Research Centre, European Commission). 2003. Expert working group on disaster damage and loss data: Guidance for recording and sharing disaster damage and loss data - towards the development of operational indicators to translate the Sendai framework into action. JRC Science and Policy Reports.

Kang, S., S.R. Lee, N.N. Vasu, J.Y. Park, and D.H. Lee. 2017. Development of an initiation criterion for debris flows based on local topographic properties and applicability assessment at a regional scale. Engineering Geology 230: 64–76.

Keith, F., and J. Broadhead. 2011. Forests and landslides: The role of trees and forests in the prevention of landslides and rehabilitation of landslide-affected areas in Asia. Bangkok: Food and Agriculture Organization of the United Nations, Regional Office for Asia and the Pacific.

Kumar, N. 2007. Spatial sampling design for a demographic and health survey. Population Research and Policy Review 26: 581–599.

Langellotto, G.A. 2014. What are the economic costs and benefits of home vegetable gardens? Journal of Extension 52: 205–212.

Masaba, S., D.N. Mungai, M. Isabirye, and H. Nsubuga. 2017. Implementation of landslide disaster risk reduction policy in Uganda. International Journal of Disaster Risk Reduction 24: 326–331.

Mathers, N., N. Fox, and A. Hunn. 2007. Surveys and questionnaires. The NIHR RDS for the East Midlands / Yorkshire & the Humber.

Matsuura, S., S. Asano, and T. Okamoto. 2008. The relationship between rain and meltwater, pore-water pressure and displacement of a reactivated landslide.
Engineering Geology 101: 49–59. Mertens, K., L. Jacobs, J. Maes, C. Kabaseke, M. Maertens, and J. Poesen. 2016. The direct impact of landslides on household income in tropical regions. Science of the Total Environment 550: 1032–1043. Mertens, K., K. Mertens, L. Jacobs, J. Maes, C. Kabaseke, M. Maertens, J. Poesen, M. Kervyn, and L. Vranken. 2017. The direct impact of landslides on household income in tropical regions: A case study from the Rwenzori Mountains in Uganda. Science of the Total Environment 550: 1032–1043. MHCPU (Ministry of Housing Construction and Public Utilities)., 1996. Human settlements and shelter sector development in Sri Lanka. National Report for Habitat II Conference: The City Summit. 1–49. MMECP (Ministry of Minor Export Crop Promotion). 2013. Performance Report. Colombo Sri: Lanka. Mohan, S., J.R.R. Alavalapati, and P.K.R. Nair. 2006. Financial analysis of homegardens: A case study from Kerala state, India. Tropical Homegarden 3: 283–296. Msilimba, G.G. 2009. The socioeconomic and environmental effects of the 2003 landslides in the Rumphi and Ntcheu Districts (Malawi). Natural Hazards 53: 347–360. Perera, A.H., and R.M.N. Rajapakse. 1991. A baseline study of Kandyan forest gardens of Sri Lanka: Structure, composition, and utilization. Forest Ecology and Management 45: 269–280. Petley, D.N., S.A. Dunning, K. Rosser, and N.J. Rosser. 2005. The analysis of global landslide risk through the creation of a database of worldwide landslide fatalities. In Landslide Risk Manag 52, ed. O. Hunger, R. Fell, and E. Ebberhardt, 367–373. Proper, C., A. Puissant, J. Malet, and T. Glade. 2014. Analysis of land cover changes in the past and the future asa contribution to landslide risk scenarios. Applied Geography 53: 11–19. Pushpakumara, D.K.N.G., B. Marambe, G.L.L.P. Silva, J. Weerahewa, and V.R. Punyawardena. 2012. A review research on homegardens in Sri Lanka: The status, importance and future perspective. Tropical Agriculturist 160: 55–125. Qiong, H., W. 
Wenbin, and Y. Qiangyi. 2013. Exploring the use of Google earth imagery and object-based. Remote Sensing 5: 6027–6042. Raghuvanshi, T.K., L. Negassa, and P.M. Kala. 2015. GIS-based grid overlay method versus modeling. The Egyptian Journal of Remote Sensing and Space 18: 235–250. Rathnaweera, A.T.D., and U.P. Nawagamuwa. 2013. Study of the Impact of Rainfall Trends on Landslide Frequencies; Sri Lanka Overview. The Institute of Engineering, Sri Lanka 2: 35–42. Saadv, S.A.A., and A. Adam. 2016. The relationship between household income and educational level. (South Darfur rural areas-Sudan) statistical study. International Journal of Advanced Statistics and Probability 4: 27–30. Sajinkumar, K.S., S. Anbazhagan, A.P. Pradeepkumar, and V.R. Rani. 2011. Weathering and landslide occurrences in parts of Western Ghats, Kerala. Journal of the Geological Society of India 78: 249. Schneider, H., D. Höfer, R. Irmler, G. Daut, and R. Mäusbacher. 2010. Correlation between climate, man and debris flow events -a palynological approach. Geomorphology 120: 48–55. Schuster, R.L., Fleming, W.F., 1986. Economic losses and fatalities due to a landslide. Bulletin of the Association of Engineering Geologist XXIII (1), 11–28. Shum, L.K.W., Lam A.Y.T., 2011. Review of natural terrain landslide risk management practice and mitigation measures (GEO technical note 3/2011). Vergani, C., F. Giadrossich, M. Schwarz, P. Buckley, M. Conedera, M. Pividori, F. Salbitano, H.S. Rauch, and R. Lovreglio. 2017. Root reinforcement dynamics of European coppice woodlands and their effect on shallow landslides: A review. Earth-Science Reviews 167: 88–102. Zillman, J. 1999. The physical impact of the disaster. In Natural disaster management, ed. J. Ingleton, 320. Leicester: Tudor Rose Holding Ltd. We thank Mr. Yatagedara from Devisional Secretariat office Aranayake, to provide real estate values losses of affected household. 
We also acknowledge the Director General of National Building Research Organization for great support by providing landslide information. We extend our thanks to B.Sc. in Regional Science Planning students at SANASA Campus-Sri Lanka for their contribution to conduct social survey. Especial thanks for Research Council, University of Sri Jayewardenepura and Centre for Forestry and Environment for valuable support. This study was supported by a Faculty of Graduate Studies, University of Sri Jayewardenepura for Ph.D. candidate Mr. E.N.C. Perera. Authors strongly appreciate funding support given by the Research Council, University of Sri Jayewardenepura, Sri Lanka. Institute of Human Resource Advancement, University of Colombo, No: 275, Bauddhaloka Mawatha, Colombo, 7, Sri Lanka E. N. C. Perera Department of Forestry and Environmental Science, Faculty of Applied Science, Gangodawila, University of Sri Jayewardenepura, Nugegoda, Sri Lanka D. T. Jayawardana Landslide Division, National Building Research Organization, Colombo, Sri Lanka P. Jayasinghe & R. M. S. Bandara International Water Management Institute (IWMI), Colombo, Sri Lanka N. Alahakoon P. Jayasinghe R. M. S. Bandara All authors contributed to the database construction and analysis, all read and approved the submitted manuscript. Correspondence to D. T. Jayawardana. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Perera, E.N.C., Jayawardana, D.T., Jayasinghe, P. et al. Direct impacts of landslides on socio-economic systems: a case study from Aranayake, Sri Lanka. Geoenviron Disasters 5, 11 (2018). 
https://doi.org/10.1186/s40677-018-0104-6 Received: 06 February 2018 Socio-economy Direct loss from the landslide
CommonCrawl
Reference request: Examples of research on a set with interesting properties which turned out to be the empty set

I've seen internet jokes (at least more than one) between mathematicians, like this one here, about someone studying a set with interesting properties and then, after a lot of research (presumably after some years of work), finding out that such a set couldn't be anything other than the empty set, making the work of years useless (or at least disappointing), I guess. Is this something that happens commonly? Do you know any real examples of this?

EDIT: I like how someone interpreted this question in the comments as "are there verifiably true examples of this well-known 'urban legend template'?"

ho.history-overview
Rodrigo Aldana

For example, Reinhardt cardinals? (They do not exist due to the Kunen inconsistency theorem.) – Hanul Jeon

"Making the work of years useless, I guess." Not necessarily. Knowing that something is the empty set might be disappointing, but it is in any case better than having a wrong idea about it. – Francesco Polizzi

The set of all solutions in integers to $a^n + b^n = c^n$, with $n \geq 3$ and $abc \neq 0$, is one example that comes to mind. – Rivers McForge

Isn't every proof by counterexample investigating properties of the empty set? – Gordon Royle

If you work for a while in the belief that an object with certain properties should exist, and eventually learn otherwise, then you will have discovered that the set of such objects is empty. I just now gave an MO answer of this type! mathoverflow.net/a/376107/2926 – Todd Trimble ♦

11 Answers

Jonathan Borwein, page 10 of Generalisations, Examples and Counter-examples in Analysis and Optimisation, wrote:

Thirty years ago I was the external examiner for a PhD thesis on Pareto optimization by a student in a well-known Business school.
It studied infinite dimensional Banach space partial orders with five properties that allowed most finite-dimensional results to be extended. This surprised me, and two days later I had proven that those five properties forced the space to have a norm compact unit ball – and so to be finite-dimensional. This discovery gave me an even bigger headache, as one chapter was devoted to an infinite dimensional model in portfolio management. The seeming impasse took me longer to disentangle. The error was in the first sentence, which started "Clearly the infimum is ...". So many errors are buried in "clearly, obviously" or "it is easy to see". Many years ago my then colleague Juan Schäffer told me "if it really is easy to see, it is easy to give the reason." If a routine but not immediate calculation is needed, then provide an outline. Authors tend to labour the points they personally had difficulty with; these are often neither the same nor the only places where the reader needs detail! My written report started "There are no objects such as are studied in this thesis." Failure to find a second, even contrived, example might have avoided what was a truly embarrassing thesis defence.

Gerry Myerson

There was a defence, after all?

@FrancescoPolizzi all I know is what I quoted from Jonathan's essay. – Gerry Myerson

Nice example, thanks! Hope not to fall into a situation like this in my own thesis... – FeedbackLooper

At the beginning of the 20th century, Hilbert and his students were actively investigating the properties that a consistent, complete and effective axiomatization of arithmetic should have. As we all know, this line of research was unexpectedly wiped out (at least in its initial formulation) by Gödel's First Incompleteness Theorem (1931), which says that no such axiomatization can exist.

Francesco Polizzi

Might want to add "recursively enumerable" somewhere in there.
– Pace Nielsen

@PaceNielsen: you are of course right, I was being sloppy. I added "effective" in the assumptions about axiomatization (a formal system is said to be effectively axiomatized if its set of theorems is a recursively enumerable set).

There are no infinite order polynomially complete lattices, after all, by Goldstern and Shelah.

Mohammad Golshani

Nice one! I like the "...after all" at the end of the title.

Not exactly the empty set, and not years of work, but Milne told the following story about some research he and a colleague were doing in ring theory. They proved a few theorems; then, they made some assumptions on the ring, and proved some stronger theorems; then, they made some more assumptions on the ring, and proved some even stronger theorems; then, they made a few more assumptions, and were amazed at the strength of the results they were getting – until they realized that any ring satisfying all those assumptions had to be a field.

This answer would really benefit from some details or a reference - though understandably I imagine you don't have any.

This is the kind of situation I referred to. As @Wojowu said, it would be great if you had a reference for this. Thanks anyway!

@Wojowu correct on both counts. Milne told the story during a lecture to a class in which I was enrolled as a graduate student nearly 45 years ago. I don't know whether he ever put it in writing. I suppose if you really want details, you could write to him to ask for them.

My interpretation of the question was, are there verifiably true examples of this well-known 'urban legend template'? So the details are important. Otherwise, it's easy to find examples, e.g., this one or this one.
– Timothy Chow

@Timothy I chose not to post an answer about the Hölder-or-Lipschitz continuous function story precisely because I couldn't find any retelling with checkable details. The Milne story is not something I heard from a friend of a friend; I was there, in the classroom, at the time. As I wrote before, if you want details, you could try writing to Milne.

"The next two chapters [Chapters 9 and 10] show more recent technology which was developed to replace the unproven Riemann hypothesis in applications to the distribution of prime numbers. We are talking about [zero density] estimates for the number of zeros of $L$-functions in vertical strips which are positively distanced from the critical line. Hopefully in a future one will say we were wasting time on studying the empty set."

Henryk Iwaniec and Emmanuel Kowalski, Analytic Number Theory, page 2

Awesome quote...

The following two paragraphs are the last footnote on p. 69 of [1]. I found this such good advice that I began the first chapter of my 1993 Ph.D. dissertation, on p. 6, with this quote.

[1] William Henry Young, On the distinction of right and left at points of discontinuity, Quarterly Journal of Pure and Applied Mathematics 39 (1908), pp. 67−83. (Also here.)

Mark the importance of testing not only the accuracy but also the scope of one's results by constructing examples. To quote an instance which has come under my notice in the course of my present work, Dini (p. 307) states that if a left-hand derivate and a right-hand derivate both exist and are finite and different at every point of an interval ... certain results follow. The reader might well imagine not only that such a case could occur, but that Dini knew of a case where it did occur. As a matter of fact, however, the hypthesis [sic] is an impossible one.
In default of an example it could, in such a case, only stimulate research to state that an example had not been found.

Incidentally, I don't know whether "p. 307" is for the 1878 Italian original of his real functions book or for the 1892 German translation of his real functions book. Young's previous footnote appears to cite the 1878 Italian original, but p. 307 of the German translation seems more likely (based on the math symbols appearing; I can't read German or Italian). For some more context about the fact that no such function exists, see B. S. Thomson's answer to "If $f$ is bounded and left-continuous, can $f$ be nowhere continuous?" and my answers to "A search for theorems which appear to have very few, if any hypotheses" and "Real-valued function of one variable which is continuous on $[a,b]$ and semi-differentiable on $[a,b)$?"

Dave L Renfro

Your intuition is correct. On page 307 in the 1892 version, the letterspaced paragraph is the sought-for "theorem". – Mirko

The corresponding place in the Italian version is on page 224: archive.org/details/fondamentiperla01dinigoog/page/n240/mode/… – pavel

Arrow's impossibility theorem comes to mind. To quote Wikipedia:

In short, the theorem states that no rank-order electoral system can be designed that always satisfies these three "fairness" criteria:

If every voter prefers alternative X over alternative Y, then the group prefers X over Y.

If every voter's preference between X and Y remains unchanged, then the group's preference between X and Y will also remain unchanged (even if voters' preferences between other pairs like X and Z, Y and Z, or Z and W change).

There is no "dictator": no single voter possesses the power to always determine the group's preference.

More in the spirit of the question: The set of fair rank-ordered electoral systems is empty.
The odd-order theorem states that every finite group of odd order is solvable, and the proof involves developing a very large theory explaining what the smallest counterexample looks like, in order to ultimately deduce that it cannot exist. The odd-order theorem has been formalised (pdf) in Coq, a computer theorem prover, and the formalisation is to date one of the largest bodies of formalised mathematics. This makes it appealing to AI researchers, who go and train their deep learning networks using the collection of theorems proved in the formalisation, hoping that one day computers will start to be able to compete with humans in the realm of theorem-proving. I find it amusing that, as a consequence, these networks are being trained to recognise a whole bunch of facts about an object which doesn't exist.

Kevin Buzzard

I do not know whether this applies to the spirit of the question. However, for me one of the high points of an undergraduate algebra class was seeing Witt's elegant proof of Wedderburn's theorem: there are no finite non-commutative division rings. I recall discussing this with a professor in graduate school who expressed slight regret about this theorem. He felt that algebra would be richer if there were finite non-commutative division rings.

M. Khan

While Siegel zeros are not currently an example, they will hopefully become one in the future.

Jakub Konieczny

So, I remember my teacher telling the following story: Erik Zeeman was trying, for 7 years, to prove that it was impossible to untie a knot in a 4-sphere. He kept trying, and one day he decided to prove the opposite: that it was indeed possible to untie the knot. It took him only a few hours to do so.

Eduardo Magalhães

Any idea how old he was? I don't know the history of the result that any 1-knot in 4-space is equivalent to an unknot, but for some reason thought this would have been known for a long time, even predating Zeeman's birth year.
I confess I don't know, my teacher just mentioned it and never talked about it again. @ToddTrimble – Eduardo Magalhães

This example shows why mathematicians need to learn to argue both sides of the case at the same time - not a skill that people in other occupations usually have. (I can't avoid thinking of a certain about-to-be-ex president here.) – Paul Taylor
Spatiotemporal evolution of Chinese ageing from 1992 to 2015 based on an improved Bayesian space-time model

Xiulan Han, Junming Li (ORCID: orcid.org/0000-0002-9008-4767) & Nannan Wang

Most countries are experiencing growth in the number and proportion of their ageing populations, and this issue is posing challenges for economies and societies worldwide. The most populated country in the world, China, is experiencing a dramatic increase in its ageing population. As China is the world's largest developing country, its serious ageing issue may have far-reaching effects not only domestically but also in other countries and even globally. In order to overcome the weaknesses of traditional statistical models and reveal further detail regarding local area evolution, an improved Bayesian space-time model is presented in this paper and used to estimate the spatiotemporal evolution of Chinese ageing from 1992 to 2015. The six eastern provinces with high levels of ageing have been in an almost steady state, while Jiangsu, Shanghai and Zhejiang show weakly increasing trends of ageing, and these weak upward trends are themselves diminishing. Although the northern and western provinces belong to the low ageing area, five of them have strong local growth trends and therefore strong potential for worsening ageing. Against the background of the "comprehensive two children" policy, the forecast value of China's ageing rate in 2030 is 13.80% (95% CI: 11.24%, 18.83%). Considering developments over the past 24 years, it has been determined that the areas of the Chinese mainland experiencing the highest growth in ageing populations are the two central provinces, which are connected to seven eastern provinces and five southwestern provinces. High ageing areas are not only concentrated in the eastern provinces, but also include Sichuan and Chongqing in the southwest region and Hubei and Hunan in the central region.
The seven provinces (municipalities or autonomous regions) of the central and western regions have both high ageing levels and strong growth rates, but the growth rate is decreasing.

The world's population is continually ageing [1]. This increasing number of ageing people will pose major challenges in the future for health-care systems [2] and will have major implications for economies and societies, affecting labour markets, patterns of saving and consumption, social interactions, housing and transportation [3]. As the most populated country in the world, China's total population reached 1.38 billion in 2016, accounting for nearly 19% of the global population [4]; the country has become an ageing society, with the overall population ageing rate reaching 10.8% in 2015 [5]. Because China is the world's largest developing country, its serious ageing issue may have far-reaching effects not only domestically but also on social and economic development in other countries and even globally. The existing literature related to China's ageing issue mainly focuses on two aspects: historical development and regional differences. Historical development studies on China's ageing have revealed the trend of the nation's deepening ageing. Based on population data from 1953 to 1994, Lai (1999) claimed that China's ageing had begun following the comprehensive implementation of the family planning policy in 1974 and had then accelerated [6]. Other researchers, in particular Zhang et al. [7], Banister et al. [8] and Zheng and Wei [9], found that China's ageing level continues to increase and the population dividend period is disappearing, and that these trends will negatively impact the domestic and international economies. Studies examining regional differences in China's ageing are also relevant. Through analysis of the fourth, fifth and sixth census data, Wang et al. [10] found that significant regional differences existed in population ageing in China.
Before 2000, the ageing population size and growth rate in the coastal areas were higher than those in the central and western regions. After 2000, with the coastal areas overtaken by the central and western regions in terms of ageing growth rate, China's population ageing began spreading from east to west. Xiu-Li and Wang (2008) [11] found that the ageing of the eastern, central and western regions could be characterized as "high, medium and low", respectively, with the differences between regions and provinces in the eastern region demonstrating a decreasing trend, although the overall inter-provincial differences broadened. Based on the 2000–2010 yearbook data and a spatial econometrics method, Ruyu et al. [12] argued that the regional spillover effect of population ageing and spatial heterogeneity in China were very significant, with the higher population ageing areas being mainly concentrated in the Yangtze River Delta and the Circum-Bohai Sea Region, and with the overall national ageing growth rate having slowed. In addition, Zhou [13] studied the ageing problem in big cities based on the statistics of the fifth and sixth censuses in China, demonstrating the spatial distribution characteristics of the ageing populations of four mega cities in China (Beijing, Shanghai, Guangzhou, and Wuhan). Mai et al. [14] forecast China's ageing rate and drew the conclusion that one-fifth to one-third of China's population would be aged 65 and over by the end of 2050. Zhang et al. [15] shared this opinion and suggested that the population ageing rate would reach 32% in 2050. The previous literature analyzed the ageing issue in China based mainly on census data and employed classical statistical or econometric models, which are both based on the principle of large-sample inference and are too dependent on sample information, so the corresponding parameter estimates are biased if the sample is biased or sparse.
However, Bayesian statistics make full use of a-priori information: the unknown parameters are regarded as random variables and the parameter estimates are given in the form of probability distributions. The Bayesian method takes the uncertainty of real data into account and offers a higher degree of confidence. It is particularly important to note that, for space-time data, there is only one observation at any given intersection of time and space, so such data do not meet the large-sample requirements of classical statistics. Moreover, according to Tobler's First Law of Geography, there is a certain correlation between samples in the space-time domain; that is, space-time data have the characteristics of a small sample and autocorrelation (non-independence), which pose serious challenges to classical statistical models based on large-sample inference. The Bayesian statistical model, however, overcomes the problem of small sample size and estimates parameters in the form of probability distributions while accounting for various uncertainties, by introducing a-priori information and taking full advantage of the autocorrelation characteristics of data in the spatiotemporal domain. This paper aims to investigate the situation in China with regard to the evolution of the spatio-temporal characteristics of the ageing population, the spatial heterogeneity of the ageing level of the population, and the local trends in population ageing. Consequently, the present study forecasts the value of China's ageing rate against the background of the "comprehensive two children" policy. These results can provide a reference for relevant policy makers, public health managers and demographers. In order to overcome the weaknesses of traditional statistical models and reveal further detail on local area evolution, the Bayesian space-time model is extended and estimated on the basis of Chinese provincial statistical data (1992–2015).
Based on this, China's ageing rate in 2030 is also predicted against the background of the comprehensive "two children" policy.

The data used in this paper are provincial population data drawn from the China Demographic and Employment Statistics Yearbook 1993–2016, including the 2000 and 2010 data sets, which are derived from the fifth and sixth national censuses, covering 31 provincial-level administrative regions in mainland China and excluding the regions of Hong Kong, Macao and Taiwan. The rate of ageing, the key focus of this paper, refers to the proportion of the resident population aged 65 and above relative to the total resident population in a given region.

Bayesian space-time hierarchical model

The Bayesian space-time hierarchical model (BSTHM) is a combination of the Bayesian hierarchical model and the spatio-temporal interaction model [16, 17]. The general form is:

Data model: $y_{it} \sim P(y_{it}(\theta_{it}, \Theta) \mid \Theta)$ (1)

Process model: $\theta_{it} = S_i + \Lambda_t + \Omega_{it} + \varepsilon_{it}$ (2)

Hyperparameter model: $\Theta \sim P(\Theta)$ (3)

where $y_{it}$ denotes the observational value, $\theta_{it}$ is the space-time dependent variable, $S_i$ and $\Lambda_t$ represent the common spatial state and overall time trend, respectively, $\Omega_{it}$ measures the space-time interaction effect, $\varepsilon_{it}$ is random noise and $\Theta$ is the set of hyperparameters. Considering that the ageing population observational data are counts and possibly over-dispersed, the data model this paper adopts is the Poisson-Gamma hybrid model [18]:

$$ y_{it}^{ageing} \sim \mathrm{Poisson}\left(n_{it}\, p_{it}^{ageing}\, u_{it}\right) \quad (4) $$

$$ u_{it} \sim \mathrm{Gamma}\left(r_{it}, r_{it}\right) \quad (5) $$

where $y_{it}^{ageing}$ is the population aged 65 and above, $n_{it}$ and $p_{it}^{ageing}$ are the total population and the rate of ageing in region $i$ ($i = 1, 2, \ldots, 31$) in year $t$, respectively, $u_{it}$ are random effect parameters and $r_{it}$ is the divergence coefficient [18].
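The Poisson-Gamma mixture in eqs. (4)-(5) can be illustrated with a short simulation. This is not the authors' WinBUGS code but a minimal Python sketch with hypothetical values of $n_{it}$, $p_{it}^{ageing}$ and $r_{it}$; it shows how the Gamma-distributed random effect $u_{it}$ (mean 1, variance $1/r_{it}$) inflates the variance of the counts well beyond the ordinary Poisson level.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_gamma_sample(n_it, p_it, r_it, size=10_000):
    """Draw overdispersed ageing counts y_it ~ Poisson(n_it * p_it * u_it),
    with u_it ~ Gamma(r_it, r_it), i.e. mean 1 and variance 1/r_it."""
    u = rng.gamma(shape=r_it, scale=1.0 / r_it, size=size)
    return rng.poisson(n_it * p_it * u)

# Illustrative (made-up) values: 10 million residents, 10% ageing rate
y = poisson_gamma_sample(n_it=10_000_000, p_it=0.10, r_it=5.0)
mean_y = y.mean()   # close to n_it * p_it = 1,000,000
var_y = y.var()     # far larger than mean_y: the overdispersion the model allows
```

A plain Poisson model would force the variance to equal the mean; the Gamma mixing term is what lets the model absorb extra year-to-year and province-to-province noise in the counts.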
In the existing BSTHM [16], the process model is:

$$ \ln\left(p_{it}^{ageing}\right) = \alpha + s_i + (b_0 t + v_t) + b_{1i} t + \varepsilon_{it} \quad (6) $$

where $\alpha$ is the basic fixed constant for national overall ageing, $s_i$ is the common spatial risk of ageing in the overall provincial trend, and $(b_0 t + v_t)$ consists of a linear trend $b_0 t$ and a random effect $v_t$ allowing for nonlinear variation. $b_{1i}$ denotes the local tendency in each region, isolated from the overall trend. $\varepsilon_{it}$ is a Gaussian random variable.

Improved Bayesian space-time model

In the existing BSTHM, the non-linear overall trend is considered but the non-linear local trend is neglected. In view of this, this paper improves the BSTHM by adding a quadratic term to the process model, which describes the second-order change of the local trend, that is, acceleration. Thus, more detailed information pertaining to the space-time interaction effect can be extracted and a finer description of local trends can be achieved. The above formula (6) is revised as:

$$ \ln\left(p_{it}^{ageing}\right) = \alpha + s_i + (b_0 t + v_t) + \left(b_{1i} t + \frac{b_{2i} t^2}{2}\right) + \varepsilon_{it} \quad (7) $$

where $b_{2i}$ is the second-order variation factor of the local tendency in the space-time process; its physical meaning, acceleration, will be further elaborated with the statistical results. The meanings of the other parameters are the same as in eq. (6). This paper introduces the Besag York Mollié (BYM) model [19] to determine the prior distributions of $s_i$, $b_{1i}$ and $b_{2i}$, using a conditional autoregressive (CAR) normal prior to express the spatially structured and unstructured random effects. The spatial adjacency matrix adopts the first-order "queen" contiguity form.
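Once the parameters are fixed, the linear predictor of eq. (7) can be evaluated directly. The sketch below uses made-up parameter values (not posterior estimates from the paper) for one hypothetical province: a positive local trend $b_{1i}$ combined with a negative acceleration $b_{2i}$ reproduces the "growth rate is decreasing" pattern the paper describes for the central and western provinces.

```python
import numpy as np

def log_ageing_rate(alpha, s_i, b0, v_t, b1_i, b2_i, t):
    """Linear predictor of the improved process model, eq. (7):
    ln(p_it) = alpha + s_i + (b0*t + v_t) + (b1_i*t + b2_i*t**2 / 2)."""
    return alpha + s_i + (b0 * t + v_t) + (b1_i * t + 0.5 * b2_i * t ** 2)

# Hypothetical parameters over the 24 study years, with time centred
t = np.arange(24) - 11.5          # 1992..2015 centred on the mid-point
lp = log_ageing_rate(alpha=np.log(0.08), s_i=0.05, b0=0.02, v_t=0.0,
                     b1_i=0.005, b2_i=-0.0004, t=t)
p_it = np.exp(lp)                 # back-transformed ageing rate, rising but decelerating
```

Because the model works on the log scale, the exponential back-transform guarantees the fitted ageing rates stay positive.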
The prior distributions of the temporal random-effect parameters also use the CAR normal prior form, in which temporal adjacency adopts a one-dimensional first-order adjacency matrix. Following the conclusion of Gelman (2006) [20], the prior distribution for the standard deviation of all random variables in the model is taken to be a strictly positive half-Gaussian distribution $N_{+\infty}(0, 10)$. In this paper, Bayesian statistical estimation is carried out in WinBUGS [21] using the MCMC method. Two MCMC chains are used to assess the convergence of the model, and the number of iterations for each chain is set to 250,000, of which 200,000 are discarded as burn-in and the remaining 50,000 are used to estimate the posterior distributions of the parameters. Convergence is evaluated with the Gelman-Rubin statistic; the closer the value is to 1, the better the convergence [22]. The Gelman-Rubin statistics of all the parameters in this study range from 0.9988 to 1.0013, indicating that convergence is good.

Ageing rate prediction method

We study the spatio-temporal evolution of China's total population ageing based on the population data of 1992–2015. The improved Bayesian space-time model is:

$$ Y_t^{ageing} \sim \mathrm{Poisson}\left(n_t\, P_t^{ageing}\, U_t\right) \quad (8) $$

$$ U_t \sim \mathrm{Gamma}\left(r_t, r_t\right) \quad (9) $$

$$ \ln\left(P_t^{ageing}\right) = \alpha + (b_0 t + V_t) + \left(b_1 t + \frac{b_2}{2} t^2\right) + \varepsilon_t \quad (10) $$

where $Y_t^{ageing}$, $n_t$ and $P_t^{ageing}$ denote the ageing population, total population and ageing rate in year $t$, respectively, with the meanings of the other parameters the same as in eq. (7). The spatial statistical unit of this prediction model is the whole country, so the regional subscript $i$ is omitted.
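The Gelman-Rubin diagnostic quoted above can be computed directly from the chains. The following is a generic sketch of the classic two-chain potential scale reduction factor, not code from the paper; well-mixed chains give values near 1, consistent with the 0.9988-1.0013 range the authors report, while chains stuck in different regions give values well above 1.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.
    Values close to 1 indicate convergence of the MCMC sampler."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
# Two well-mixed chains from the same distribution -> R-hat near 1
good = rng.normal(0.0, 1.0, size=(2, 50_000))
rhat_good = gelman_rubin(good)

# Two chains centred on different values -> R-hat far above 1
bad = np.stack([rng.normal(0.0, 1.0, 50_000), rng.normal(3.0, 1.0, 50_000)])
rhat_bad = gelman_rubin(bad)
```

This is the same "closer to 1 is better" criterion the paper applies to every model parameter.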
After estimating the parameters of the above model from the 1992–2015 data, China's ageing rate in 2030 can be predicted. The Chinese government began implementing the comprehensive "two-child" policy in 2016; this will produce an additional increase in the total population but will not alter the size of China's ageing population over the next 30–40 years, thereby slowing the trend of ageing [17]. Therefore, in the context of policy implementation, the above-predicted rate of ageing needs to be corrected. Assuming that P2030all is the predicted total population of China by the end of 2030 without implementation of the new population planning policy, and that P2030up is the predicted increase in the total population by the end of 2030 under the new policy, the corrected prediction of the ageing rate in the Chinese mainland by the end of 2030 under the comprehensive "two-child" policy is: $$ {R}_{2030}^{ageing}=\frac{{P}_{2030all}\,{P}_{2030}^{ageing}}{{P}_{2030all}+{P}_{2030up}} $$ Descriptive statistical result During the 1992–2015 period, the overall ageing rate in mainland China rose continuously. The percentage of the population aged 65 and above was 6.08% in 1992 and grew to 10.47% in 2015, an average annual growth rate of 2.29%. The extent of ageing has increased year by year (Fig. 1). The entire process can be divided into three sub-periods: 1992–1999, 2000–2009 and 2010–2015. The mean of the three stages increased stage by stage, and the upper and lower quartiles also rose to some degree. Boxplot of the ageing rate in the Chinese mainland from 1992 to 2015 At the same time, the ageing area showed a significant expansion trend.
According to the lower limit of the United Nations standard for an ageing society (7.0%) [23], the following five provinces (municipalities) had become ageing areas as early as 1992: Shanghai (11.%), Beijing (8.0%), Zhejiang (7.5%), Tianjin (7.4%) and Jiangsu (7.4%). Thirty provinces, all except Tibet, had become ageing areas by 2015. Of these, Chongqing had the highest rate of ageing (13.3%). Figure 2 shows the evolution of the spatial distribution of the ageing rate of the 31 provinces (including municipalities and autonomous regions; for brevity, the following text refers simply to the 31 provinces) in China during the study period, illustrating a clear expansion from southeast to northwest, which is consistent with the conclusions of Wang et al. [10] Spatial distribution of the ageing rate in the Chinese mainland from 1992 to 2015. (Map generated with ArcGIS 10.3 by authors) We use Moran's I [24] to measure the spatial differentiation of the ageing rate; the larger Moran's I is, the larger the spatial difference. Figure 3 shows the trend of Moran's I of the ageing rate in the 31 provinces of mainland China from 1992 to 2015. It can be observed that the differences in ageing show a general decreasing trend. The difference in the degree of ageing among provinces was quite large during 1992–2004 and decreased during 2004–2015, which is contrary to the conclusions of Xiu-Li and Wang [8]. In recent years, as can be seen in Fig. 1, the degree of ageing in each province has continued to worsen. All regions attained high ageing rates, so the spatial difference narrowed.
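Global Moran's I, used above to track spatial differentiation, can be sketched as follows. The contiguity structure and ageing rates below are invented for illustration (four regions on a line rather than the paper's 31-province "queen" matrix):

```python
import numpy as np

# Global Moran's I for a binary contiguity matrix w and attribute vector x.
def morans_i(x, w):
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # cross-products of neighbouring deviations
    return len(x) / w.sum() * num / (z @ z)

# Four regions in a row: 1-2, 2-3 and 3-4 are adjacent.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
clustered = [0.06, 0.07, 0.11, 0.12]      # similar neighbours -> positive I
print(round(morans_i(clustered, w), 3))
```

Positive values indicate neighbouring regions with similar ageing rates (spatial clustering); values near zero indicate spatial randomness, which is why a declining Moran's I is read as narrowing spatial differentiation.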
Moran's I of the ageing rate in mainland China from 1992 to 2015 Bayesian statistical result In order to explain the spatio-temporal evolution of the ageing rate in mainland China more thoroughly, this paper divides the research period into three stages (1992–1999, 2000–2009 and 2010–2015) based on the above descriptive statistical analysis, and then, using the extended nonlinear BSTHM, estimates the common spatial pattern and the distributions of the local first-order change (speed) and second-order change (acceleration) of the ageing rate in China in the three sub-phases. The common spatial pattern Based on this paper's improved nonlinear BSTHM, the posterior median estimates of the steady-state spatial relative magnitude of the 31 Chinese provinces, exp(si), in the three sub-periods can be obtained; this value measures a province's ageing level relative to the overall national level, exp(α). If exp(si) > 1.0, the degree of population ageing in a province is exp(si) times the overall level, and vice versa. Figure 4 shows the common spatial pattern in the three periods of 1992–1999, 2000–2009 and 2010–2015. The common spatial relative magnitude of the ageing rate from 1992 to 1999 (a), 2000–2009 (b) and 2010–2015 (c), the posterior medians of the parameter exp(si); the values measure the corresponding relative magnitude of the provinces' ageing levels relative to the overall national level, exp(α), in each sub-period. (Map generated with ArcGIS 10.3 by authors) Overall, in the eastern provinces, especially the six regions of Beijing, Tianjin, Shandong, Jiangsu, Shanghai and Zhejiang, the degree of ageing was much higher than the overall level during the three periods. In the northwestern provinces, especially Xinjiang, Qinghai, Gansu, Inner Mongolia, Heilongjiang and Ningxia, the degree of ageing was lower than the national level during the three periods.
This is basically consistent with the conclusions of Wang et al. [10], Xiu-Li and Wang [11] and Ruyu et al. [12]. Specifically, certain provinces had unique characteristics during the three stages. Guangdong province, the most populous province in mainland China (104 million, 2010 population census), showed a stage-by-stage decrease in its relative ageing level: during 1992–1999, its degree of ageing was 1.139 times the national overall level, while during 2000–2009 and 2010–2015 its ageing level was at and below the national overall level, respectively, with the spatial relative magnitude reduced to 1.012 and 0.920. Shandong province, the second-most populous province (95.79 million, 2010 census), maintained a relatively high level of ageing over the three phases, with spatial relative magnitudes of 1.147, 1.143 and 1.129, respectively. Although Sichuan and Chongqing are located in the underdeveloped southwestern region, their population ageing problems were more serious than those of the other southwestern provinces, which may be related to population movements. According to previous related studies, a large number of young and middle-aged laborers have left these two regions in recent years [25], leading to continuous ageing of the population. However, ageing in Sichuan and Chongqing had different staged characteristics; ageing in Sichuan was still at an average level (spatial relative magnitude of 1.015) during 1992–1999, and significantly higher than that of mainland China after 2000. Chongqing's ageing level was higher than Sichuan's in all three stages, as its spatial relative magnitude of ageing was consistently above 1.10 and peaked at 1.187 in 2010–2015.
It should be pointed out that, in the three municipalities of Beijing, Tianjin and Shanghai, ageing was at a high level during the first two sub-periods, but was overtaken by Chongqing, Jiangsu and Shandong during the last sub-period. In addition, the level of ageing in Jiangsu province was higher than the average over the entire study period, while in Zhejiang province it was somewhat lower during the third stage. Ageing in the western and northwestern regions and in Yunnan province was always lower than the average during the entire study period. An interesting phenomenon is that, after the most recent 24 years of evolution, the high-level ageing areas of mainland China have become spatially concentrated in two central provinces, connected to seven eastern provinces and five southwestern provinces, as shown in Fig. 4. Although Wang et al. [10], Xiu-Li and Wang [11] and Ruyu et al. [12] also pointed out that differences existed in inter-provincial ageing, they did not systematically and thoroughly study the relative ageing levels of the provinces in the various stages. This paper argues that, in a context where ageing is rising across all regions, it is more scientific to study the matter from a relative perspective. Local evolution trend Based on the improved BSTHM, this paper achieves a more detailed estimation of the local evolution trend, including its first- and second-order components. The former, denoted by b1i in equation (10), is equivalent to a growth rate (speed), whereas the latter, denoted by b2i in equation (10), is equivalent to an acceleration, measuring the change in the former. b1i > 0 (b1i < 0) indicates that province i belongs to the area with a strong (weak) ageing growth rate, that is, an ageing growth rate stronger (weaker) than the national overall growth rate.
Different combinations of b1i and b2i imply different local evolutionary characteristics: b1i > 0 and b2i > 0 mean that province i's ageing has a strong growth trend that is strengthening further; b1i > 0 and b2i < 0 indicate a strong growth trend that is weakening; b1i < 0 and b2i > 0 mean that ageing growth in province i was weak but is strengthening; and b1i < 0 and b2i < 0 mean that province i's ageing has a weak growth trend that is weakening further. Based on comprehensive consideration of the common spatial effect, the overall time effect and the space-time interaction effect, this paper estimates the local evolution trend of ageing in the 31 Chinese provinces in three stages, and for the first time estimates the change in ageing growth within each province, thereby providing a detailed description of the spatial and temporal evolution of ageing in mainland China. Figure 5 is a graphical representation of the local evolution characteristics of ageing in the 31 provinces of mainland China in the three phases of 1992–1999, 2000–2009 and 2010–2015. Local ageing rate trends from 1992–1999 (a), 2000–2009 (b) and 2010–2015 (c), the posterior medians of the parameters b1i and b2i, estimated using the developed Bayesian space-time model. (Map generated with ArcGIS 10.3 by authors) From 1992 to 1999, the areas of vigorously increasing ageing in mainland China were mainly distributed in the eastern and southern regions.
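The four sign combinations of b1i and b2i amount to a simple classification rule. A hypothetical sketch (the labels paraphrase the text; treating b1 = 0 as "weak" is an arbitrary choice for the boundary case):

```python
# Classify a province's local ageing trend from the signs of the posterior
# medians of its first-order (b1, "speed") and second-order (b2,
# "acceleration") local-trend parameters.
def local_trend_class(b1, b2):
    if b1 > 0:
        return "strong growth, strengthening" if b2 > 0 else "strong growth, weakening"
    return "weak growth, strengthening" if b2 > 0 else "weak growth, weakening"

print(local_trend_class(0.01, 0.002))   # strong growth that keeps strengthening
```

Applied to the posterior medians mapped in Fig. 5, this rule is what separates, for example, the accelerating central provinces from the eastern provinces whose strong growth was already fading.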
Among them, six provinces (the middle-eastern coastal provinces of Shandong, Jiangsu and Fujian and the southwestern provinces of Chongqing, Yunnan and Guangxi) showed a tendency for ageing to surpass the overall growth rate, while the growth rate of ageing in seven provinces (Guizhou, Hunan, Guangdong, Jiangxi, Anhui, Zhejiang and Shanghai) was higher than the overall average, although it was diminishing and approaching the overall trend. The central and western provinces belonged to the weak growth areas, especially in central China, including Hebei, Shanxi, Henan and Hubei, but also in western regions such as Xinjiang and Tibet. In these regions, not only was the ageing growth rate below the overall average, but the gap was also widening. From 2000 to 2009, the areas of strongest ageing growth shifted from the eastern and southern regions to the central and western regions, with 14 strong growth provinces emerging. Among them, nine provinces, including Sichuan, Hubei and Henan, experienced accelerating ageing growth rates, while in five provinces (Beijing, Tianjin, Jiangxi, Guizhou and Xinjiang) the ageing growth rate demonstrated a slowing trend. The strong growth of ageing in the northeastern region expanded from Heilongjiang to Jilin, with both located within the accelerated growth area. The eastern and southern provinces, with the exceptions of Beijing and Tianjin, were all transformed from strong to weak ageing growth areas; with the exceptions of Jiangsu and Fujian, all of these provinces experienced acceleration toward the overall trend in ageing growth rate. Compared with the previous two phases, from 2010 to 2015 the number of strong ageing growth provinces increased significantly, reaching 16, more than 50% of the total number of provinces in mainland China.
With the exception of Hebei, Shandong, Beijing and Tianjin, which are located in the eastern region, most of the strong growth provinces were located in the central and western regions. Meanwhile, the provinces with accelerating ageing growth rates were Hebei, Inner Mongolia, Shanxi, Gansu, Ningxia and Jiangxi, six in total, fewer than in the second stage. According to the common spatial pattern of ageing shown in Fig. 4, the eastern provinces, with high degrees of ageing, are among the weak ageing growth provinces. Prediction of ageing in mainland China in 2030 Before prediction, the reliability of the model must be tested. In this paper, cross-validation is used for this purpose. Specifically, the data on the ageing rates of three five-year periods (1995–1999, 2002–2006 and 2010–2014) are extracted separately and used as test truth data only, not in the calculation process. The remaining 19 years of data are used to estimate the rate of ageing of the test years. Then, we calculate the root mean square error (RMSE) between the estimated and observed values. In this paper, the RMSE of the prediction of the ageing rate during the three periods is 0.71%, 0.53% and 0.62%, respectively, all less than 1%, so the model prediction error is within a reasonable range. Under the premise of the "one-child" policy remaining unchanged, based on the 1992–2015 population data, the population ageing rate in 2030, \( {p}_{2030}^{ageing} \), is projected to be 14.76% (95% CI: 12.02%, 20.13%). In the context of the comprehensive "two-child" policy, the total population in 2030 is projected to be 1.45 billion by the National Population Development Plan (2016–2030). According to Zhai et al. [26], the estimated value of the total population increase is 94 million. Therefore, without implementation of the comprehensive "two-child" policy, the predicted total population in 2030 is 1.36 billion.
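The cross-validation metric used here is an ordinary root mean square error between held-out observations and model estimates. A minimal sketch with invented values (in percentage points, like the 0.71%, 0.53% and 0.62% figures reported above):

```python
import numpy as np

# RMSE between observed and model-estimated ageing rates for a held-out
# five-year window; the values below are invented for illustration.
def rmse(observed, estimated):
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return float(np.sqrt(np.mean((observed - estimated) ** 2)))

obs = [7.1, 7.3, 7.6, 7.8, 8.0]   # held-out "truth" ageing rates, %
est = [7.0, 7.5, 7.5, 8.1, 7.7]   # model estimates for the same years, %
print(round(rmse(obs, est), 3))
```

Each of the paper's three validation windows is scored this way against the model fitted on the remaining 19 years.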
According to formula (10) above, taking account of the comprehensive "two-child" policy, the predicted ageing rate in mainland China in 2030 is 13.80% (95% CI: 11.24%, 18.83%). This means that the rate of ageing in China in 2030 is reduced by 0.96 percentage points (95% CI: 0.78%, 1.30%) by the implementation of the comprehensive "two-child" policy. This paper focuses on exploring the temporal and spatial evolutionary characteristics of ageing in mainland China. In order to overcome the weaknesses of traditional statistical models and reveal further detail of the local-area evolution, the Bayesian space-time model is extended and estimated on the basis of Chinese provincial data (1992–2015). According to the box plots, the entire study period is divided into three stages: 1992–1999, 2000–2009 and 2010–2015. In this paper, an improved BSTHM was used, considering the nonlinear characteristics of local trends and decomposing them into two parts: the first-order change (growth rate) and the second-order change (acceleration, measuring the change in the local growth rate), which are used to estimate the common spatial pattern and local trend of China's ageing population in three stages. According to the results of this paper, population ageing has become a common problem in all 31 provinces of mainland China. During the period from 1992 to 2015, the level of ageing in each province showed a continuous upward trend, but the trends varied significantly, so the differences in ageing levels among provinces gradually decreased over the recent 24-year period. At present, the spatial distribution of high-ageing areas consists of 14 provinces, including Shandong, Jiangsu, Shanghai, Anhui, Hubei, Chongqing, Sichuan and Hunan, among others. These provinces are all economically developed (or relatively so) and have large populations.
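The policy correction of formula (10) can be checked numerically with the figures quoted in the text: the ageing population is held fixed while the "two-child" policy adds extra people to the 2030 total, diluting the ageing rate.

```python
# Corrected 2030 ageing rate: ageing population (pop_all * p_ageing)
# divided by the enlarged total (pop_all + pop_up), per formula (10).
def corrected_ageing_rate(p_ageing, pop_all, pop_up):
    return p_ageing * pop_all / (pop_all + pop_up)

p_2030 = 0.1476      # predicted ageing rate without the new policy (14.76%)
pop_all = 1.36e9     # predicted 2030 total population under the old policy
pop_up = 0.094e9     # predicted policy-induced population increase (94 million)
print(round(corrected_ageing_rate(p_2030, pop_all, pop_up), 4))
```

With the paper's inputs this reproduces the corrected value of about 13.80%, and hence the reported reduction of 0.96 percentage points.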
According to the 2010 census data, the ageing population in these 14 provinces accounted for 63.94% of the ageing population in mainland China. This indicates that both the number of older people and the old-age dependency ratio have been increasing significantly. At the same time, the labour force population, that is, the labour supply, has been decreasing, meaning that the demographic dividend, an important factor in maintaining rapid economic development, is fading out. The increasing number of older people may result in higher social medical costs, particularly for drugs and the treatment of chronic illnesses. This will lead to a shift in the allocation of social medical resources. According to the results of this paper, in addition to the 14 provinces, other provinces, even some in the west, will also face serious ageing issues, as these relatively low-ageing regions show even stronger growth trends in population ageing. It should be pointed out that Guangdong province, the largest province in terms of population, has seen its ageing rate continue to increase. However, its ageing rank within mainland China has declined continuously and was lower than the overall level from 2010 to 2015. This is in contrast to the characteristics of the other eastern coastal provinces and should be attributed to large population inflows. Since the inflowing population consists mostly of young and middle-aged members of the labor force, the level of ageing of provinces with net population inflow is thereby reduced. According to a study by Qiao et al. [25], the migrant population flowing into Guangdong accounts for 25.03% of the total floating population in China. The influx of a large number of young people keeps the level of ageing in Guangdong below the overall national level. In addition, Fujian province, on the eastern coast of China, has a rate of ageing that is low within mainland China.
This can also be attributed to population inflow. Fujian province ranks 17th in total population, but its migrant inflow is the sixth largest in mainland China (2010 Census) [25]. The large inflow of non-aged population largely dilutes the level of ageing and explains why the level of ageing in Fujian province is also lower than the overall level within mainland China. Although this paper extends the BSTHM and reveals further details regarding the temporal and spatial variation of ageing in mainland China over a recent 24-year period, it still has certain limitations. First, the population data were generated in two different ways. The 2000 and 2010 datasets, from the 5th and 6th national demographic censuses, were collected using a full survey. Datasets for the other years were produced by multi-level, multistage, whole-cluster probability-proportional sampling. The accuracy of the census data is higher than that of the sampled data. Still, the Bayesian statistical method, BSTHM, which accounts for greater uncertainty, can to some extent overcome the limitation of the differing accuracies of the input data. Second, this paper takes the province as the spatial statistical unit, which is coarse in terms of spatial granularity. If city or even county data were available and the spatial statistical unit were further subdivided, the problem could be studied in greater detail. Third, this article only discusses the temporal and spatial evolution of the ageing problem itself, and does not study its formation mechanisms and influencing factors. This is the direction of our future studies. The main findings of this study include the following: (i) After a recent 24-year period of development, the Chinese mainland's high ageing areas have come to be distributed across two central provinces, connected to seven eastern provinces and five southwestern provinces.
High ageing areas are not only concentrated in the eastern provinces, but also include Sichuan and Chongqing in the southwest and Hubei and Hunan in the central region. Chongqing and Sichuan have especially high rates of ageing, with 2015 ageing rates of 13.3% and 12.9%, respectively, the highest and second highest in China; (ii) The high rates of ageing in the eastern provinces have been almost steady, with slightly increased levels and decreased growth rates of the ageing rate in Jiangsu, Shanghai and Zhejiang. The ageing rates in Guangdong and Fujian (two eastern coastal provinces) have fallen below the overall level of the Chinese mainland; (iii) High ageing areas are not only concentrated in the eastern provinces, but also include Sichuan and Chongqing in the southwestern region and Hubei and Hunan in the central region; (iv) Seven provinces (municipalities or autonomous regions) of the central and western regions belong to both the high ageing level and strong growth rate areas, although their growth rates are decreasing; (v) The northern and western provinces belong to the low ageing area, but five of them have strong local growth trends and thus great potential for ageing to worsen; (vi) Against the background of the comprehensive "two-child" policy, the forecast value of China's ageing rate in 2030 is 13.80% (95% CI: 11.24%, 18.83%). Most countries in the world face similar ageing issues. The formulation of strategic policies and the allocation of public resources to deal with ageing problems urgently require recognition of the more detailed characteristics of ageing in time and space. We hope that this paper's research on the spatial differences and local trends of ageing will provide a meaningful reference for other countries dealing with the ageing problem.
Based on the results of this study, it is suggested that strategic policy to deal with ageing should take full account of the regional differences and local trends of the various regions. Different policies should be formulated based on local conditions in order to deal scientifically with the social and economic problems brought about by ageing. BSTHM: Bayesian space-time hierarchical model CAR: Conditional auto-regression MCMC: Markov Chain Monte Carlo Lutz W, Sanderson W, Scherbov S. The coming acceleration of global population ageing. Nature. 2008;451(7179):716. Christensen K, Doblhammer G, Rau R, Vaupel JW. Ageing populations: the challenges ahead. Lancet. 2009;374:1196–208. Harper S. Economic and social implications of aging societies. Science. 2014;346(6209):587–91. World Population Prospects. The 2015 revision key findings and advance tables. New York: United Nations Department of Economic & Social Affairs; 2015. Ministry of Civil Affairs of the People's Republic of China. http://www.mca.gov.cn/article/sj/tjgb/201607/20160700001136.shtml. Lai D. Statistical analysis on spatial and temporal patterns of the Chinese elderly population. Arch Gerontol Geriatr. 1999;28(1):53–64. Zhang NJ, Guo M, Zheng X. China: awakening giant developing solutions to population aging. Gerontologist. 2012;52(5):589–96. Banister J, Bloom DE, Rosenberg L. Population aging and economic growth in China; 2012. Zheng W. Characteristics and trend of population aging in China and its potential impact on economic growth. J Quantitative Technic Econo. (Chinese) 2014;31(8):3–20. http://xueshu.baidu.com/s?wd=paperuri%3A%28d077094808c336fec59c6ab56e4ab350%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fwww.en.cnki.com.cn%2FArticle_en%2FCJFDSLJY201408001.htm&ie=utf-8&sc_us=1696705316427600281. Wang Z, Sun T, Li G. Regional differences and evolutions of population aging in China. Popul Res. (Chinese) 2013;37(1):66–77.
http://xueshu.baidu.com/s?wd=paperuri%3A%284f3fa3b4c0cfd5d5a13a77554fa32825%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTALRKYZ201301009.htm&ie=utf-8&sc_us=12381010938827304294. Xiu-Li LI, Wang LJ. A study on regional differences and difference decomposition of population ageing in China. Northwest Popul J. (Chinese) 2008;29(6):104–109. http://xueshu.baidu.com/s?wd=paperuri%3A%28d3389e425577e2a71bec600e6f2ef0a0%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTAXBRK200806022.htm&ie=utf-8&sc_us=12323063717267462752. Ruyu LC, Zhao ZF, Liu C, Zhang F. Spatial econometric research on regional spillover and distribution difference of population aging in China. Popul Res. (Chinese) 2012;36(2):71–81.http://xueshu.baidu.com/s?wd=paperuri%3A%281d1c99d2bd6b3d4f1fa98f42e1afd9a2%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTALRKYZ201202010.htm&ie=utf-8&sc_us=7055554916400306787 Zhou J. Mapping spatial variation of population aging in China's mega cities. J Maps. 2016;12(1):181–92. Mai Y, Peng X, Chen W. How fast is population ageing in China? Cent Pol Studies/impact Cent Working Papers. 2009;9(2):216–39. Zhang RY, Wang AP, Sun GL. Prediction of China's aging population from 2012 to 2050. J Sichuan Univ Scie Engineering. (Chinese) 2013;26(5):82–85. http://xueshu.baidu.com/s?wd=paperuri%3A%281c4512352fa2d3728d4bf9519a40765c%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fwww.en.cnki.com.cn%2FArticle_en%2FCJFDTOTALSCQX201305021.htm&ie=utf-8&sc_us=2651517565082426896. Li G, Haining R, Richardson S, Best N. Space–time variability in burglary risk: a Bayesian spatio-temporal modelling approach. Spatial Statistics. 2014;9:180–91. Bernardinelli L, Clayton D, Pascutto C, Montomoli C, Ghislandi M, Songini M. Bayesian analysis of space—time variation in disease risk. Stat Med. 
1995;14(21–22):2433. Ntzoufras I. Bayesian Modeling Using WinBUGS. New Jersey: Wiley; 2009. https://www.wiley.com/en-gb/Bayesian+Modeling+Using+WinBUGS-p-9780470141144. Besag J, York J, Mollié A. Bayesian image restoration, with two applications in spatial statistics. Ann Inst Stat Math. 1991;43(1):1–20. Gelman A. Prior distributions for variance parameters in hierarchical models. Bayesian Anal. 2006;1(3):515–34. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS - a Bayesian modelling framework: concepts, structure, and extensibility. Statistics Computing. 2000;10(4):325–37. Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7(4):457–72. TNE G. The aging of populations and its economic and social implications. United Nations (population studies no. 26). Hum Biol. 1959;2:203–4. Moran PAP. The interpretation of statistical maps. J R Stat Soc. 1948;10(2):243–51. Qiao XC, Huang YH. Floating populations across provinces in China: analysis based on the sixth census. Popul Develop. (Chinese) 2013;19(1):13–28. http://xueshu.baidu.com/s?wd=paperuri%3A%2820de21a1710e89c311c245046bb8cc37%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTotal-SCRK201301003.htm&ie=utf-8&sc_us=2251733167612741081. Zhai Z, Zhang X, Jin Y. Demographic consequences of an immediate transition to a universal two-child policy. Popul Res. (Chinese) 2014;38(2):3–17. http://xueshu.baidu.com/s?wd=paperuri%3A%28cf3af1c79fd40976d999bb257a8c5c9e%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Fen.cnki.com.cn%2FArticle_en%2FCJFDTOTALRKYZ201402001.htm&ie=utf-8&sc_us=5229969352901785878. The authors are grateful to all peer reviewers for their reviews and comments. National Social Science Fund of China (15BTJ012). The source data used in the paper can be drawn from the China Demographic and Employment Statistics Yearbook 1993–2016.
The content is solely the responsibility of the authors and does not necessarily represent the official views. School of Statistics, Shanxi University of Finance and Economics, Wucheng Road 696, Taiyuan, 030006, China Xiulan Han & Junming Li Beijing Yihua Record Information Technology Co., Ltd. Hualu Senior Care and Health Management Co., Ltd, 138 Andingmenwai Street, Beijing, China Nannan Wang Xiulan Han Junming Li All authors contributed significantly to the manuscript. JL and XH conceived the ideas of the paper and designed the study. XH and JL collected and pre-processed the data. JL and NW conducted the data processing and produced the first draft of the paper. XH and JL revised the manuscript after critical examination of the text. All authors reviewed and contributed to subsequent drafts, and all authors approved the final version for publication. Correspondence to Junming Li. All data in the paper are drawn from official, open materials and involve no private or clinical data. Han, X., Li, J. & Wang, N. Spatiotemporal evolution of Chinese ageing from 1992 to 2015 based on an improved Bayesian space-time model. BMC Public Health 18, 502 (2018). https://doi.org/10.1186/s12889-018-5417-6 Bayesian space-time model Spatiotemporal evolution Biostatistics and methods
Apparent and standardized ileal nutrient digestibility of broiler diets containing varying levels of raw full-fat soybean and microbial protease Mammo M. Erdaw1,3, Rider A. Perez-Maldonado2 & Paul A. Iji1 Although soybean meal (SBM) is an excellent source of protein in diets for poultry, it is sometimes inaccessible, costly and fluctuating in supply. SBM can be partially replaced by full-fat SBM, but meals prepared from raw full-fat soybean contain antinutritional factors (ANF). To avoid the risk of ANF, heat treatment is always advisable, but either over- or under-heating the soybean can negatively affect its quality. However, the potential for further improvement of SBM by supplementation with microbial enzymes has been suggested by many researchers. The objective of this study was to evaluate the performance and ileal nutrient digestibility of birds fed diets containing raw soybeans and supplemented with microbial protease. A 3 × 2 factorial design, involving 3 levels of raw full-fat soybean (RFFS; 0, 45 or 75 g/kg of diet) and 2 levels of protease (0 or 15,000 PROT/kg), was used. The birds were raised in a climate-controlled room. A nitrogen-free diet was also offered to a reference group from day 19 to 24 to determine protein and amino acid flow at the terminal ileum and calculate the standardized ileal digestibility of nutrients. On days 10, 24 and 35, body weight and feed leftovers were recorded to calculate body weight gain (BWG), feed intake (FI) and feed conversion ratio (FCR). On day 24, samples of ileal digesta were collected from at least two birds per replicate. When RFFS was increased from 0 to 75 g/kg of diet, the content of trypsin inhibitors increased from 1747 to 10,193 trypsin inhibitor units (TIU)/g of diet, and the feed consumption of birds was reduced (P < 0.05). Increasing the RFFS level reduced the BWG from hatch to 10 d (P < 0.01) and from hatch to 24 d (P < 0.05).
The BWG of birds from hatch to 35 d was not significantly (P = 0.07) affected. Feed intake was also reduced (P < 0.05) during 0 to 35 d. However, protease supplementation improved (P < 0.05) the BWG and FCR during 0 to 24 d. Rising levels of RFFS increased the weights of the pancreas (P < 0.001) and small intestine (P < 0.001) at day 24. Except for methionine, the apparent and corresponding standardized ileal digestibility of CP and AA were reduced (P < 0.01) by increasing levels of RFFS in the diets. This study showed that some commercial SBM can be replaced by RFFS in broiler diets without markedly compromising productivity. The AID and SID of CP and lysine were slightly improved by dietary supplementation with microbial protease. It is well established that commercial soybean meal (SBM) is an excellent source of protein in diets for poultry [1]. However, in addition to the fluctuation in supply and seasonal scarcity in some parts of the world, the price of SBM has been increasing over the years [1, 2]. Poultry producers therefore continuously seek alternative ingredients, including full-fat SBM, to replace some or all of the commercial SBM in diets for broilers [3]. Full-fat soybean meal is typically made from heat-treated seeds, and it is less common to feed soybean raw. Such processing plants are lacking in some areas of the world where soybeans are locally produced, and poultry producers could save costs if raw soybean could be fed. However, raw soybean seeds contain considerable amounts of antinutritional factors (ANF), particularly trypsin inhibitors (TI), which depress growth in non-ruminant animals [4,5,6,7]. The presence of dietary TI in legumes such as soybeans causes a substantial reduction in the digestibility (up to 50%) of proteins and AA and in protein quality (up to 100%) in non-ruminant animals [8]. Nitrogen (N) retention can also be negatively affected by TI, which causes increased endogenous N loss [9, 10]. Although Barth et al.
[11] reported that inclusion of RFFS in diets caused loss of endogenous protein, Clarke and Wiseman [12] reported that AA digestibility did not correlate with the level of RFFS supplementation or the concentration of TI. Although heating is considered the most effective method to eliminate or reduce ANF, some of the ANF in soybeans, such as the Bowman-Birk inhibitors, phytates and oligosaccharides, are not heat-labile. These ANF remain a problem in feeds prepared from raw soybean grains and in poorly processed commercial meals. Clemente et al. [13] also reported that Bowman-Birk inhibitors resist heat treatment, so that supplementation of such corn-soybean meal diets with microbial enzymes, including protease and phytase, is necessary [10, 14]. The potential for further improvement in the nutritional value of soybeans using exogenous enzymes has therefore been suggested by many researchers [5, 12, 15, 16]. The beneficial effects of microbial protease supplementation of poultry diets have been widely documented [17,18,19]. However, the role of such enzymes in diets containing RFFS has not been adequately investigated. In recent in vitro and in vivo studies, we reported the positive response of protein and phytate in RFFS to microbial protease and phytase, and the gross responses of poultry to such test diets [20,21,22,23]. The objective of the current study was to assess how these enzymes, particularly protease, affect CP and AA digestibility.

Diets, experimental design and animal husbandry

The experiment was approved by the University's Animal Ethics Committee (Authority No: AEC15–044) and conducted at the Animal House of the University of New England, Australia. The soybean grain was purchased from a local supplier in northern New South Wales, Australia. After cleaning and hammer-milling the grain to pass through a 2-mm sieve, the meal was used to partially replace the commercial SBM at 0, 15 or 25%, equivalent to 0, 45 and 75 g/kg of diet, respectively (Table 1).
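The replacement arithmetic can be checked with a short script. The ~300 g/kg total SBM allowance used below is our inference from the stated equivalences (45/0.15 = 75/0.25 = 300); it is not a value quoted in the text:

```python
# Hypothetical check of the RFFS replacement levels described above.
# total_sbm_g_per_kg is inferred from the stated equivalences, not quoted
# in the paper.
total_sbm_g_per_kg = 300  # inferred basal SBM allowance

for pct in (0, 15, 25):
    rffs_g_per_kg = total_sbm_g_per_kg * pct / 100
    print(f"{pct}% replacement -> {rffs_g_per_kg:.0f} g RFFS per kg of diet")
```

Running this reproduces the stated inclusion levels of 0, 45 and 75 g/kg of diet.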
Birds were offered corn-soybean meal-based starter (0 to 10 d), grower (10 to 24 d) and finisher (24 to 35 d) diets, which were formulated to the breeder standard for Ross 308 broilers [24]. The nutrient requirements of birds across the dietary treatments were balanced by supplementing with varying levels of canola oil, synthetic AA and meat meal. The diets were supplemented with phytase (HiPhos) at 2000 phytase activity units (FYT)/kg, equivalent to 0.2 g/kg of diet, and fed as such or further supplemented with protease (ProAct) at 15,000 PROT/kg (15,000 units of protease/kg of diet), the level approved by the European Food Safety Authority [25]. These enzymes were supplied by DSM Animal Nutrition, Asia-Pacific, Singapore. The microbial protease was added prior to pelleting the diets. Titanium dioxide was added to the grower diets to enable assessment of nutrient digestibility. Feed, in the form of crumble (starter) or pellets (grower and finisher periods), was provided ad libitum, and the birds had free access to water. Samples of the diets (with or without protease) were analysed for contents of CP and AA (Table 2). Table 1 Ingredient and composition of starter, grower and finisher basal diets, and a nitrogen-free diet Table 2 The analysed crude protein (CP) and amino acid composition (g/kg) of the study diets fed from day 10 to 24 A total of 336 Ross 308 male broiler chicks (43.84 ± 0.18 g) were obtained from a local commercial hatchery (Baiada Poultry Pty. Ltd., Tamworth, Australia). These birds were randomly selected, weighed and allocated into 42 pens at eight chicks per pen. This was a 3 × 2 factorial study, with each treatment replicated six times and eight birds per replicate. Another lot of 48 birds, in six replicates, was fed on a nitrogen-free diet (NFD). The six replicates were provided with the commercial-type starter and grower diets before they were transferred to the NFD, which was prepared without RFFS or microbial enzymes.
The birds were raised in a climate-controlled room on sawdust litter. On day 19, a total of 48 birds, with 8 birds per replicate, were transferred to the nitrogen-free diet (NFD) to enable calculation of CP and AA flow at the ileum and estimation of the standardized ileal digestibility (SID) of these nutrients. Every pen was equipped with a clean feeder and two nipple drinkers, which were checked and cleaned daily. The room temperature was set at 33 °C for the first two days, with a relative humidity of between 49 and 60%. The temperature was gradually reduced to 24 °C at 19 days of age and maintained at that level for the remainder of the study. Lighting was provided for 24 h (20 lx) for the first two days, then reduced to 23 h for the next 6 consecutive days, followed by 20-h light (10 lx) for the remaining days. Mortality of birds was recorded whenever it occurred. On days 10, 24 and 35, the body weight of birds and feed leftover were recorded to calculate the body weight gain (BWG) and feed intake (FI), from which the feed conversion ratio (FCR) was computed. On day 24, at least two birds per replicate were euthanised by cervical dislocation and dissected. Ileal digesta were collected on ice, pooled per replicate, and then transferred to a freezer (−20 °C) until analysed for nutrient composition. Except for birds on the NFD, samples of internal organs were also collected at day 24 and weighed. One representative bird per cage was randomly selected, electrically stunned and killed by cervical dislocation. The bird was dissected to obtain the internal organs, which were weighed as described by Iji et al. [26]. The remaining birds (except those allocated to the NFD) were transferred to finisher diets and raised to 35 days of age to evaluate growth performance.
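The pen-level performance measures described above are simple ratios; a minimal sketch (function names and the numbers are ours, for illustration only):

```python
def body_weight_gain(weight_end_g, weight_start_g):
    # Gain over the rearing period, per bird or per pen (g).
    return weight_end_g - weight_start_g

def feed_intake(feed_offered_g, feed_leftover_g):
    # Feed consumed, from the recorded leftover (g).
    return feed_offered_g - feed_leftover_g

def fcr(feed_intake_g, weight_gain_g):
    # Feed conversion ratio: g of feed per g of gain (lower is better).
    return feed_intake_g / weight_gain_g

# Illustrative values only, not data from the study:
bwg = body_weight_gain(1250, 44)   # -> 1206 g
fi = feed_intake(2000, 250)        # -> 1750 g
print(round(fcr(fi, bwg), 3))      # prints 1.451
```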
Chemical analysis and calculation of nutrient digestibility

Sub-samples of the ingredients and test diets were analysed for CP and AA [27, 28], urease activity (UA) [29], nitrogen solubility index (NSI) [30], content of TI (TIU/g) [31], protein solubility [32], starch [33], total sugars [33], ether extract [34] and crude fibre [35]. Amino acid contents of the ingredients, diets and digesta were analysed by the Australian Proteome Analysis Facility, Macquarie University, Australia, according to Cohen and Michaud [36, 37]. Amino acid (AA) concentrations were determined by pre-column derivatization with 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate, followed by separation of the derivatives and quantification by reversed-phase high-performance liquid chromatography. The concentration of titanium (Ti) in the ileal digesta and diets was determined using the method described by Short et al. [38]. The data on concentrations of nutrients and the Ti marker were used in the following calculations. Ileal AA outflow (IAAF; mg/g of intake) and ileal CP outflow (ICPF; mg/g of intake) for all treatments (including the NFD) were determined against the Ti concentration as follows (Eq. 1):

$$ \mathrm{IAAF\ or\ ICPF} = \mathrm{AA\ or\ CP\ in\ digesta\ (mg/g)} \times \frac{\mathrm{Ti\ in\ diet\ (mg/g)}}{\mathrm{Ti\ in\ digesta\ (mg/g)}} $$

The coefficients of apparent ileal digestibility (AID) and standardized ileal digestibility (SID) of CP and AA were calculated using the following equations:

$$ \mathrm{AID} = \frac{\mathrm{diet\ AA\ or\ CP\ intake} - \mathrm{IAAF\ or\ ICPF}}{\mathrm{diet\ AA\ or\ CP\ intake}} $$

$$ \mathrm{SID} = \frac{\mathrm{diet\ AA\ or\ CP\ intake} - \left(\mathrm{IAAF\ or\ ICPF} - \mathrm{EIAAF\ or\ ECPF}\right)}{\mathrm{diet\ AA\ or\ CP\ intake}} $$

where EIAAF is the endogenous ileal amino acid flow and ECPF is the endogenous crude protein flow, both calculated using Eq. 1 from the ileal digesta of chicks fed the NFD. One-way ANOVA and the general linear model (GLM) of Minitab software version 17 [39] were used to analyse the data. The effects of the main factors (RFFS level and enzyme supplementation), as well as their interaction, were assessed. Differences between mean values were separated by Duncan's multiple range test and considered significant at P ≤ 0.05.

Response to the diets

The nutrient composition and quality measures of RFFS and SBM differed (Table 3). For example, both the ether extract (147.3 g/kg) and energy (12.6 MJ ME/kg) contents of RFFS were higher than those of SBM (19.2 g/kg and 9.0 MJ ME/kg, respectively); the reverse was the case for the AA profile, while the concentrations of TI and UA were lower in SBM than in RFFS. Replacing the commercial SBM by RFFS from 0 to 75 g/kg of diet increased selected ANF in the diets (Table 4). The TI concentration increased from 1747.0 to 10,193.4 TIU/g, the NSI increased from 155.3 to 222.9 g/kg, and the UA rose from 0.16 to 1.53 ∆pH. Table 3 Analysed nutrient composition and quality parameters of raw full-fat soybean (RFFS) in comparison to the commercial soybean meal (SBM) Table 4 Effects of partially replacing commercial SBM by RFFS on quality of the diets The performance of the birds, in terms of FI, BWG and FCR, is presented in Table 5. There were no significant (P > 0.05) interaction effects between RFFS and protease on the FI, BWG or FCR of birds during any of the assessed periods.
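The marker-based digestibility calculations described in the Methods can be sketched as follows. The numbers are illustrative, not data from the study, and the outflow uses the standard marker-ratio form (nutrient concentration in digesta multiplied by the diet-to-digesta Ti ratio):

```python
def ileal_outflow(nutrient_digesta_mg_g, ti_diet_mg_g, ti_digesta_mg_g):
    # Eq. 1: flow of CP or an AA at the terminal ileum per g of feed intake,
    # using indigestible TiO2 as the inert marker.
    return nutrient_digesta_mg_g * (ti_diet_mg_g / ti_digesta_mg_g)

def aid(intake_mg_g, outflow_mg_g):
    # Coefficient of apparent ileal digestibility.
    return (intake_mg_g - outflow_mg_g) / intake_mg_g

def sid(intake_mg_g, outflow_mg_g, endogenous_mg_g):
    # Standardized ileal digestibility: the ileal outflow is corrected for the
    # basal endogenous flow measured in birds fed the nitrogen-free diet.
    return (intake_mg_g - (outflow_mg_g - endogenous_mg_g)) / intake_mg_g

# Illustrative example: 220 mg CP per g of diet, 180 mg CP per g of digesta,
# Ti at 5 mg/g in the diet and 25 mg/g in the digesta, endogenous loss 8 mg/g.
flow = ileal_outflow(180, 5, 25)   # -> 36.0 mg per g of intake
print(round(aid(220, flow), 3))    # prints 0.836
print(round(sid(220, flow, 8), 3)) # prints 0.873
```

As expected, the SID coefficient exceeds the AID coefficient, because the endogenous loss is not charged against the diet.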
Table 5 Effects of protease in diets with raw soybean on FI, BWG (g/bird) and FCR during day 0 to 10, day 0 to 24 and day 0 to 35 The feed intake of birds was reduced (P < 0.05) with increasing RFFS inclusion in the diets, particularly over the longer rearing period (day 0 to 35). Application of protease in these diets did not appear to influence (P > 0.05) feed consumption. The BWG was also decreased during day 0 to 10 (P < 0.01), day 0 to 24 (P < 0.05) and day 0 to 35 (P < 0.05). However, protease supplementation improved (P < 0.05) both the BWG and the FCR of birds during day 0 to 24. Increasing the level of RFFS in diets (without protease supplementation) reduced feed efficiency by 2.94%, whereas supplementation with microbial protease numerically improved feed efficiency by 3.30%. However, these slight improvements were not statistically significant (P > 0.05). There were no significant (P > 0.05) effects of treatment on mortality.

Visceral organ weights

As shown in Table 6, the RFFS by protease interaction had no significant (P > 0.05) effects on the weight of any of the internal organs assessed at day 24. Increasing the RFFS inclusion rate in diets significantly increased the weights of the gizzard and proventriculus (P < 0.001), pancreas (P < 0.001), small intestine (jejunum + ileum + duodenum) (P < 0.001), heart (P < 0.001) and spleen (P < 0.05). The weight of the bursa also tended (P = 0.09) to increase at day 24. Protease supplementation significantly increased (P = 0.05) bursa weight but had no significant (P > 0.05) effects on any of the other measured internal organs. Table 6 Effects of supplemental protease in diets containing graded levels of raw soybean on the weights of internal organs (g/100 g body weight) at day 24

Ileal digestibility of crude protein and amino acids
The results revealed that the basal endogenous loss of ileal CP was significantly (P < 0.001) increased in response to rising levels of RFFS. On average, the basal endogenous losses of ileal AA, except that of methionine, at day 24 were significantly increased in birds fed diets containing RFFS (Table 7). Table 7 Effects of protease supplementation of diets containing raw soybean on the ileal flow (mg/g of FI) of undigested crude protein and amino acids at day 24 At day 24, increasing the RFFS inclusion rate significantly reduced (P < 0.01) the AID and SID values for CP, and it also reduced the AID and SID values of indispensable AA by up to 8.5 and 7.7%, respectively, with the lowest value for methionine and the highest for isoleucine. The AID and SID values of dispensable AA were also reduced, by 5.0 to 8.0% and 4.0 to 7.0%, respectively, in line with the increase in RFFS (Tables 8 and 9). Table 8 Effects of protease and RFFS supplementation on the coefficient of apparent ileal digestibility of CP and AA of broilers at day 24 Table 9 Effects of protease and RFFS supplementation on the coefficient of standardized ileal digestibility of CP and AA of broilers at day 24 Under microbial protease supplementation, the basal endogenous losses of ileal CP and total AA were reduced by approximately 7.0 and 3.5%, respectively, but the differences were not significant (P > 0.05). The AID and SID of CP measured at day 24 were significantly (P < 0.05) increased when the diets were supplemented with microbial protease, and they were also significantly (P < 0.05) influenced by the interaction between protease and RFFS. Protease supplementation had the opposite effect to that of RFFS on the AID and SID of CP, which accounts for the interaction between the two main factors.
Although statistically similar (P > 0.05), the average basal endogenous losses of indispensable and dispensable AA at the ileum were reduced by approximately 4.5 and 2.0%, respectively, when the diets were supplemented with protease. Supplementation with protease also increased the AID and SID values of indispensable AA, by up to 2.0% and 1.5%, respectively, over the non-supplemented diets, but the differences were not statistically significant (P > 0.05). Although the differences were not significant (P > 0.05), the average AID and SID values of dispensable AA at day 24 were 0.78% and 0.56% greater, respectively, when the diets were supplemented with microbial protease. The AID (P < 0.05) and SID (P < 0.05) values of lysine were significantly increased by supplementation of the diets with microbial protease.

Diets, performance parameters and internal organ development

The variations in nutrient contents between the samples of RFFS used in this and other studies may be due to various reasons, including crop variety, processing and geographical origin [40, 41]. The lower contents of AA in the RFFS in the current study may be largely due to the high fat (oil) content of the material. The response of birds in terms of FI, BWG and FCR over the 0 to 35-day period was affected by the high concentration of TI in the dietary treatments, with one diet close to 10,200 TIU/g, which is beyond the threshold level for non-ruminant animals [42]. However, significant improvements in BWG and FCR were observed during day 0 to 24 due to microbial protease supplementation. The current results are consistent with those of other researchers [1, 43] who reported that protease can break down both stored proteins and protein-like anti-nutrients and subsequently improve nutrient digestibility.
Although not investigated in the current study, the exogenous protease may have complemented the effects of endogenous enzymes, altering the digestibility of nutrients and possibly the feed passage rate [44, 45]. The weights of most of the internal organs of the birds, including the gizzard and proventriculus, pancreas, small intestine (SI), heart and spleen, were increased by increasing the inclusion level of RFFS in diets. This finding partially agrees with those of other researchers [46, 47] who reported that birds fed diets containing RFFS had heavier pancreas and duodenum. The increased weights of these internal organs may be a response in the form of cellular hypertrophy or hyperplasia. The crude fibre content of RFFS was higher than that of commercial SBM and would remain intact owing to the lack of processing. In a previous study, we observed an increased weight of the pancreas in birds fed diets containing RFFS [48]. The small intestine, particularly the duodenum, is anatomically close to the pancreas and was similarly affected by increasing levels of RFFS in diets. This result is supported by reports of other researchers [49]. The relationship between body weight and the visceral organs in general needs to be considered, as the former was reduced by the effect of RFFS, so that the relative weight of the latter became accentuated.

Ileal digestibility of amino acids and crude protein

Increasing the level of RFFS in diets significantly increased the basal endogenous loss of ileal CP and AA, consequently reducing the AID and the corresponding SID. These results are inconsistent with those of Clarke and Wiseman [12] who reported that the AID and SID of AA did not correlate with TI levels. The digesta collected from the ileum may contain both undigested dietary materials and endogenous protein and AA [50]. However, the results agree with those of de-Coca-Sinova et al.
[51] who reported that the apparent digestibility of N and AA in broilers varies with SBM samples, with greater values corresponding to lower concentrations of TI in diets. Moreover, Barth et al. [52] explained that the ingestion of food containing TI influences N balance by increasing the outflow of amino acids from endogenous secretions rather than through the loss of dietary amino acids. The AID and SID of most dispensable and indispensable amino acids assessed at day 24 were significantly reduced by increasing the inclusion level of RFFS. These results are in contrast with those of Frikha et al. [53] who reported that the SID of CP and lysine in broilers was increased at day 21 by the inclusion of soybeans with high KOH solubility and TIA values. The current results are supported by other researchers [54] who reported a reduction in the apparent digestibility of nutrients when raw soybean meal was fed to broiler chicks. Similarly, Gilani et al. [8] indicated that the high concentrations of ANF in diets based on grain legumes are responsible for the poor digestibility of protein. The reduction in AID and SID of CP and AA in the current study may be linked to the increased basal endogenous loss of ileal CP and AA. However, when the diets were supplemented with microbial protease, the basal endogenous loss of ileal CP was reduced. This led to an increase in the AID and SID of CP, although it was not significant. The current findings partially agree with those of previous researchers [55,56,57] who observed an increase in the AID of AA in poultry and piglets, and health benefits, in response to the inclusion of microbial protease in the diets. Protease supplementation significantly improved the AID and SID of lysine. This partially agrees with the finding of Liu et al. [58] who observed a 9% improvement in the apparent digestibility of AA in broilers fed a corn-sorghum-based diet supplemented with protease.
It is not clear why lysine was the only indispensable AA to respond significantly in the current study, but it may be related to its digestibility status in RFFS. No reason could be proffered for the lack of effect of the test product on the digestibility of methionine. This study showed that some commercial SBM could be replaced (≤25%) by RFFS in broiler diets if the diets are supplemented with the right protease. Body weight gain seemed to be the trait most affected by the high levels of TI. It is evident from the present study that the test microbial protease could reduce the adverse impact of dietary ANF, particularly TI, on body weight gain and feed efficiency up to the end of the grower phase. One major area of action of the protease appears to be the reduction in the basal endogenous loss of CP and AA at the ileum, leading to an increase in the AID and SID of CP and AA. Further studies may be required to establish the direct impact of the test protease on RFFS protein and the differing responses observed for methionine and lysine. AA: Amino acids; AEC: Animal Ethics Committee; AID: Apparent ileal digestibility; ANF: Anti-nutritional factors; BWG: Body weight gain; ECPF: Endogenous crude protein flow; EIAAF: Endogenous ileal amino acid flow; FCR: Feed conversion ratio; FI: Feed intake; GLM: General linear model; IAAF: Ileal amino acid outflow; ICPF: Ileal crude protein outflow; MJ: Megajoule; NFD: Nitrogen-free diet; NSI: Nitrogen solubility index; RFFS: Raw full-fat soybean; SID: Standardized ileal digestibility; Ti: Titanium; TI: Trypsin inhibitors; TIU: Trypsin inhibitor units; UA: Urease activity Pettersson D, Pontoppidan K. Soybean meal and the potential for upgrading its feeding value by enzyme supplementation. In: El-Shemy A, editor. Soybean - bio-active compounds. InTech, open access publisher; 2013. p. 288–307. Shi ESR, Lu J, Tong HB, Zou JM, Wang KH.
Effects of graded replacement of soybean meal by sunflower seed meal in laying hen diets on hen performance, egg quality, egg fatty acid composition, and cholesterol content. J Appl Poult Res. 2012;21:367–74. Popescu A, Criste R. Using full fat soybean in broiler diets and its effect on the production and economic efficiency of fattening. J Cen Eur Agri. 2003;4:167–74. Liu BL, Rafiq A, Tzeng YM, Rob A. The induction and characterization of phytase and beyond. Enzym Microb Technol. 1998;22:415–24. Newkirk R. Soybean. Feed industry guide, 1st edition. Canadian International Grains Institute; 2010. p. 48. Available online: http://www.cigi.ca/feed.htm. Erdaw MM, Perez-Maldonado RA, Bhuiyan M, Iji PA. Physicochemical properties and enzymatic in vitro nutrient digestibility of full-fat soybeans meal. J Food Agri Environ. 2016a;14(8):5–91. Erdaw MM, Perez-Maldonado AR, Iji PA. Physiological and health-related response of broiler chickens fed diets containing raw, full-fat soybean meal supplemented with microbial protease. Anim Physi Anim Nutri. ID: JAPAN-Nov-16-804R4. 2017, accepted. Gilani GS, Xiao CW, Cockell KA. Impact of antinutritional factors in food proteins on the digestibility of protein and the bioavailability of amino acids and on protein quality. Bri J Nutr. 2012;108:S315. Banaszkiewicz T. Nutritional value of soybean meal. In: El-Shemy HA, editor. Soybean and nutrition. InTech, Rijeka, Croatia; 2011. p. 1–20. Dourado LRB, Pascoal LAF, Sakomura NK, Costa FGP, Biagiot TID. Soybeans (Glycine max) and soybean products in poultry and swine nutrition. In: Krezhova D, editor. Recent trends for enhancing the diversity and quality of soybean products. InTech, Rijeka, Croatia; 2011. p. 175–90. Barth CA, Lunding B, Schmitz M, Hagemeister H. Soybean trypsin inhibitor(s) reduce absorption of exogenous and increase loss of endogenous protein in miniature pigs. J Nutr. 1993;123:2195–200. Clarke E, Wiseman J.
Effects of variability in trypsin inhibitor content of soya bean meals on true and apparent ileal digestibility of amino acids and pancreas size in broiler chicks. Anim Feed Sci Technol. 2005;121:125–38. Clemente A, Jimenez E, Marin-Manzano MC, Rubio LA. Active Bowman-Birk inhibitors survive gastrointestinal digestion at the terminal ileum of pigs fed chickpea-based diets. J Sci Food Agri. 2008;88:513–21. Erdaw MM, Bhuiyan M, Iji PA. Enhancing the nutritional value of soybeans for poultry through supplementation with new-generation feed enzymes. World Poult Sci J. 2016b;72:307–22. Ao T. Using exogenous enzymes to increase the nutritional value of soybean meal in poultry diet. In: El-Shemy H, editor. Soybean and nutrition. InTech, Rijeka, Croatia; 2011. p. 201–14. Bedford MR, Schulze H. Exogenous enzymes for pigs and poultry. Nutr Res Reviews. 1998;11:91–114. Martinez-Amezcua CM, Parsons C, Baker DH. Effect of microbial phytase and citric acid on phosphorus bioavailability, apparent metabolizable energy, and amino acid digestibility in distillers dried grains with solubles in chicks. Poult Sci. 2006;85:470–5. Adeola O, Cowieson A. Board-invited review: opportunities and challenges in using exogenous enzymes to improve non-ruminant animal production. J Anim Sci. 2011;89:3189–218. Erdaw MM, Perez-Maldonado AR, Bhuiyan M, Iji PA. Effects of diet containing raw full-fat soybean meal and supplemented with high-impact protease on relative weight of pancreas for broilers. In: Proceedings of the 31st Biennial (Australian & New Zealand Societies) Conf Anim Prod; 4–7 July 2016, Glenelg, Adelaide, Australia. 2016c. Erdaw MM, Perez-Maldonado RA, Bhuiyan M, Iji PA. Partial replacement of commercial soybean with raw full-fat soybeans meal and supplemented with varying levels of protease in diets of broiler chickens. S African J Anim Sci. 2017a;47:61–71. https://doi.org/10.4314/sajas.v47i1.5 Erdaw MM, Shubiao W, Perez-Maldonado AR, Iji PA.
Growth and physiological responses of broiler chickens to diets containing raw full-fat soybean meal and supplemented with a high-impact microbial protease. Asian-Australas J Anim Sci. 2017b; https://doi.org/10.5713/ajas.16.0714. Erdaw MM, Perez-Maldonado RA, Bhuiyan M, Iji PA. Physicochemical properties and enzymatic in vitro nutrient digestibility of full-fat soybeans meal. J Food Agri Environ. 2016d;14(8):5–91. Aviagen. Ross 308: Broiler nutrition specifications. Available at: http://ap.aviagen.com/assets/Tech_Center/Ross_Broiler/Ross-308-Broiler-Nutrition-Specs-2014r17-EN.pdf. 2009. AOAC. Official method 990.03. Protein (crude) in animal feed, combustion method. In: Official methods of analysis of AOAC International. AOAC International, Arlington, VA, USA; 2006a. p. 30–1. EFSA (European Food Safety Authority). Safety and efficacy of Ronozyme® ProAct (serine protease) for use as feed additive for chickens for fattening. EFSA J. 2009;1185:1–15. Iji PA, Saki A, Tivey DR. Body and intestinal growth of broiler chicks on a commercial starter diet. 2. Development and characteristics of intestinal enzymes. Bri Poult Sci. 2001;42:514–22. AOAC. Analysis of amino acid in animal feed using combustion method. Official method 982.30. AOAC International, Arlington, VA, USA; 2006b. AOCS. Urease activity. Official Method Ba 9-58. In: Official Methods and Recommended Practices of the AOCS. 6th ed. Urbana, IL, USA; 2011a. AOCS. Nitrogen Solubility Index (NSI). Official Method Ba 11-54. In: Official Methods and Recommended Practices of the AOCS. 6th ed. Urbana, IL: Second Printing; 2011b. AOCS. Trypsin Inhibitor Activity. Official Method Ba 12-75. In: Official Methods and Recommended Practices of the AOCS. 6th ed. Urbana, IL: Second Printing; 2011c. Araba M, Dale N. Evaluation of protein solubility as an indicator of overprocessing soybean meal. Poult Sci. 1990;69:76–83. Cohen SA.
Amino acid analysis using precolumn derivatization with 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate. Amino Acid Analysis Protocols. 2000:39–47. Munson LS, Walker PH. Invert sugar in sugars and syrups, general methods. Journal of the Society of Chemical Industry. CAS-8013-17-0. 1929;12:38. AOAC. Analysis of fat (crude) or ether extract in animal feed. Official method 920.39. AOAC International; 2009. AOAC. Analysis of fibre in animal feed and pet food. Official method 978.10. AOAC International, Arlington, VA, USA; 1996. Cohen SA, Michaud DP. Synthesis of a fluorescent derivatizing reagent, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate, and its application for the analysis of hydrolysate amino acids via high-performance liquid chromatography. Anal Biochem. 1993;211:279–87. Short FJ, Gorton P, Wiseman J, Boorman KN. Determination of titanium dioxide added as an inert marker in chicken digestibility studies. Anim Feed Sci Technol. 1996;59:215–21. Minitab 17 Statistical Software [computer software]. State College, PA: Minitab, Inc.; 2013. Hong KJ, Lee CH, Kim SW. Aspergillus oryzae GB-107 fermentation improves nutritional quality of food soybeans and feed soybean meals. J Medicinal Food. 2004;7:430–5. Swick RA. Selecting soy protein for animal feed. In: 15th Ann ASAIM Southeast Asian Feed Technol and Nutri Workshop. Bali, Indonesia; 2007. Baker KM, Utterback PL, Parsons CM, Stein HH. Nutritional value of soybean meal produced from conventional, high-protein, or low-oligosaccharide varieties of soybeans and fed to broiler chicks. Poult Sci. 2011;90:390–5. Barletta A. Introduction: current market and expected developments. In: Bedford MR, Partridge GG, editors. Enzymes in farm animal nutrition. CABI, Wallingford, UK; 2011. p. 1–11. Mayorga ME, Vieira SL, Sorbara JOB. Effects of a mono-component protease in broiler diets with increasing levels of trypsin inhibitors.
XXII Latin American Poultry Congress, Buenos Aires, Argentina; 6–9 September 2011. https://en.engormix.com/poultryindustry/articles/effects-mono-component-protease-t35031.htm. Onifade AA, Al-Sane NA, Al-Musallam AA, Al-Zarban S. A review: potentials for biotechnological applications of keratin-degrading microorganisms and their enzymes for nutritional improvement of feathers and other keratins as livestock feed resources. Bioresour Technol. 1998;66:1–11. Murugesan GR, Romero LF, Persia ME. Effects of protease, phytase and a Bacillus sp. direct-fed microbial on nutrient and energy digestibility, ileal brush border digestive enzyme activity and cecal short-chain fatty acid concentration in broiler chickens. PLoS One. 2014;9:e101888. Mogridge J, Smith T, Sousadias M. Effect of feeding raw soybeans on polyamine metabolism in chicks and the therapeutic effect of exogenous putrescine. J Anim Sci. 1996;74:1897–904. Erdaw MM, Perez-Maldonado AR, Bhuiyan M, Iji PA. Response of broiler chicks to cold- or steam-pelleted diets containing raw full-fat soybeans meal. J Appl Poult Res. 2017c;26:1–13. https://doi.org/10.3382/japr/pfw070. de-Coca-Sinova A, Valencia DG, Jiménez-Moreno E, Lázaro R, Mateos GG. Apparent ileal digestibility of energy, nitrogen, and amino acids of soybean meals of different origin in broilers. Poult Sci. 2008;87:2613–23. Erdaw MM, Perez-Maldonado AR, Bhuiyan M, Iji PA. Super-dose levels of protease and phytase enable utilization of raw soybean meals in broiler diets. Proc 27th Australian Poult Sci Symp, February 2016. p. 14–7. Laplace JP, Darcy-Vrillon B, Duval-Iflah Y, Raibaud P, Bernard F, Calmes R, Guillaume P. Proteins in the digesta of the pig: amino acid composition of endogenous, bacterial and fecal fractions. Reprod Nutri Dével. 1985;25:1083–99. Barth CA, Lunding B, Schmitz M, Hagemeister H. Soybean trypsin inhibitor(s) reduce absorption of exogenous and increase loss of endogenous protein in miniature pigs. J Nutr. 1993;123:2195–200.
Frikha M, Serrano MP, Valencia DG, Rebollar PG, Fickler J, Mateos GG. Correlation between ileal digestibility of amino acids and chemical composition of soybean meals in broilers at 21 days of age. Anim Feed Sci Technol. 2012;178:103–14. Rocha C, Durau JF, Barrilli LNE, Dahlke F, Maiorka P, Maiorka A. The effect of raw and roasted soybeans on intestinal health, diet digestibility, and pancreas weight of broilers. J Appl Poult Res. 2014;23:71–9. Guggenbuhl P, Waché Y, Wilson JW. Effects of dietary supplementation with a protease on the apparent ileal digestibility of the weaned piglet. J Anim Sci. 2012;90:152–4. Romero L, Plumstead P. Bio-efficacy of feed proteases in poultry and their interaction with other feed enzymes. In: Proceedings of the 24th Ann Australian Poult Sci Symp, Sydney, New South Wales; 17–20 Feb 2013. Cowieson AJ, Aureli R, Guggenbuhl P, Fru-Nji F. Possible involvement of myo-inositol in the physiological response of broilers to high doses of microbial phytase. Anim Prod Sci. 2015;55:710–9. Liu S, Selle P, Court S, Cowieson A. Protease supplementation of sorghum-based broiler diets enhances amino acid digestibility coefficients in four small intestinal sites and accelerates their rates of digestion. Anim Feed Sci Technol. 2013;183:175–83. McCleary BV, Solah V, Gibson TS. Quantitative measurement of total starch in cereal flours and products. J Cereal Sci. 1994:51–8. This research was supported by funding from DSM Nutritional Products, Animal Nutrition and Health, Asia-Pacific and the University of New England, Australia. Ethical approval and consent to participate The article does not contain any studies with human subjects performed by the authors. The experiment was approved by the University's Animal Ethics Committee (Authority No: AEC15–044) and conducted at the Animal House of the University of New England, Australia. The datasets used and/or analysed during this study are available from the corresponding author on request.
This study was partially funded by DSM Nutritional Products, Animal Nutrition and Health, Asia-Pacific and the University of New England, Australia. School of Environmental and Rural Sciences, University of New England, Armidale, NSW, 2351, Australia Mammo M. Erdaw & Paul A. Iji DSM Nutritional Products, Animal Nutrition and Health, 30 Pasir Panjang Road #13-31 Mapletree, Business City, 117440, Singapore Rider A. Perez-Maldonado Ethiopian Institute of Agricultural Research, Addis Ababa, Ethiopia Mammo M. Erdaw Paul A. Iji ME, as the lead author was in charge of all research work, including designing the protocol, carrying out the experiment and writing the manuscript. RM participated in the acquisition of data and analysis of. PI, as a supervisor to the lead author, was involved in design and execution of the study, and approved the final manuscript. All authors read and approved the final manuscript. Correspondence to Mammo M. Erdaw or Paul A. Iji. Erdaw, M.M., Perez-Maldonado, R.A. & Iji, P.A. Apparent and standardized ileal nutrient digestibility of broiler diets containing varying levels of raw full-fat soybean and microbial protease. J Anim Sci Technol 59, 23 (2017). https://doi.org/10.1186/s40781-017-0148-2 Antinutritional factors Ileal digestibility Microbial protease
BIMSA Fall Semester Public Courses Start Soon!
Time Starting September 13
Venue Online + on site
Organizer Yanqi Lake Beijing Institute of Mathematical Sciences and Applications
The Fall 2022 public courses of the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications begin on September 13, with information on all 44 courses available at a glance. Most courses are streamed live online, and everyone is welcome to register.
BIMSA Fall 2022 Public Course Timetable
Scan the QR codes below to register
Registration for WeChat users
Registration for Google users
You are welcome to join the course WeChat groups. The WeChat groups and detailed course information are available at the following link. (Copy the link into a browser; if a security warning appears, click Advanced -> Proceed.)
https://www.bimsa.net:10000/course.html
BIMSA Fall 2022 Public Course Information
String Theory I 弦理论 I
Lecturer Sergio Cecotti (Research Fellow)
Time 10:40 - 12:15, Mon, Wed
Online Zoom: 928 682 9093 PW: BIMSA
Level Graduate
Record Yes
Prerequisite Basic mathematics (calculus, algebra, etc.); a textbook knowledge of General Relativity and Quantum Field Theory (including the very basic facts about supersymmetry); the elementary notions of differential geometry (Riemannian geometry, bundles, connections, etc.), Lie groups, and algebraic topology (homology & cohomology groups, etc.).
A comprehensive course in String Theory. String Theory I covers the foundations of the theory, the construction of the various string theories, with emphasis on the supersymmetric ones, and the basic computational techniques. The emphasis is on SUSY string theories; the bosonic string is used mainly as a didactical laboratory in which to introduce basic ideas and techniques in the simplest possible context. While the approach is meant to be didactical, it aims to be mathematically precise and self-contained. Conformal field theory in two dimensions is introduced from scratch, with the topics relevant to string theory discussed in detail. String Theory II covers the physics of the supersymmetric strings, from the perturbative regime to the non-perturbative one. The theory of supergravity and anomalies is reviewed from a geometric perspective. BPS objects and non-renormalization theorems are described from different viewpoints. Calabi-Yau compactifications are discussed in detail. The course also covers some advanced material such as the basic ideas of M- and F-theory, black hole entropy, and so on.
Lecturer Intro.
Sergio Cecotti graduated in physics at the University of Pisa in 1979 and has worked at Harvard University, the University of Pisa, CERN, etc. In 2014, he became a full professor at SISSA. He has published about 100 papers with a total of 7260 citations; his h-index is 44.
Introduction to Tensor Network Algorithms 张量网络算法简介
Lecturer Song Cheng (Assistant Research Fellow)
Time 15:20 - 18:40, Tue
Venue 1129B
Language Chinese
Prerequisite Quantum Mechanics
This course will focus on the basic concepts and representative algorithms of tensor networks. For 1D tensor networks, we will introduce the Matrix Product State (MPS) and its Density Matrix Renormalization Group (DMRG) algorithm, Time-Evolving Block Decimation (TEBD) algorithm, etc. For higher-dimensional tensor networks, we will cover the Projected Entangled Pair States (PEPS) and its various Tensor Renormalization Group algorithms, as well as the Corner Transfer Matrix Renormalization Group algorithm, variational algorithms, etc. If time permits, we will also touch on machine learning and quantum simulation algorithms based on tensor networks.
Song Cheng is an assistant research fellow at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. He was previously an assistant research fellow at the Quantum Computing Center of Peng Cheng Laboratory, and received his Ph.D. in theoretical physics from the Institute of Physics, Chinese Academy of Sciences. His research area is tensor network algorithms, with interests centered on developing new tensor network algorithms for condensed matter physics, machine learning, and quantum computing.
Modern Cryptography 现代密码学
Lecturer Jintai Ding (Research Fellow)
Venue JCY-1
This class will be on Tuesday night, giving an introduction to the basics of modern cryptography, including symmetric cryptography, public key cryptography and post-quantum cryptography.
Jintai Ding received a Ph.D. degree from Yale University and used to work as a Taft professor at the University of Cincinnati. Currently, he is a double-appointed professor at the Yau Mathematical Sciences Center, Tsinghua University, and the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications (BIMSA).
In the early days, he was engaged in research on quantum affine algebras and representation theory, while his current research direction is post-quantum cryptography. He has served three times as co-chairman of the International Conference on Post-Quantum Cryptography and is one of the prominent scholars in multivariate cryptography worldwide.
Quiver Approaches to Machine Learning 机器学习的箭图方法
Lecturer William Donovan (Associate Professor)
Time 13:30 - 15:05, Tue, Thu
Prerequisite Some representation theory or algebraic geometry would be helpful.
Quiver representations are a useful tool in diverse areas of algebra and geometry. Recently, they have furthermore been used to describe and analyze neural networks. I will introduce quivers, their representations, and a range of applications, including to the theory of machine learning.
Will Donovan joined the Yau MSC, Tsinghua University, in 2018. Since 2021 he has been an Associate Professor there, and an Adjunct Associate Professor at BIMSA. His focus is geometry, in particular applying ideas from physics and noncommutative algebra to study varieties, using tools of homological algebra and category theory. He studied at Cambridge University, completed his PhD at Imperial College London, and was a postdoctoral researcher at Edinburgh University, UK. From 2014-18 he was a research fellow at Kavli IPMU, University of Tokyo, where he is now a Visiting Associate Scientist. His work is published in journals including Communications in Mathematical Physics and Duke Mathematical Journal. He is supported by the China Thousand Talents Plan, and received a Japan Society for the Promotion of Science Young Scientist grant award.
Introduction to Quantum Computation 量子计算介绍
Lecturer Babak Haghighat (Associate Professor)
Time 10:40 - 12:15, Fri
Prerequisite Quantum Mechanics, Linear Algebra
In this course, I present a self-consistent review of fault-tolerant quantum computation using the surface code.
The course covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological operations on the surface code. In the process, basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, the stabilizer formalism, and measurement-based quantum computation, are also introduced.
Associate Professor; research interests are string theory, quantum field theory and topological field theory.
Macro Economics 宏观经济学
Lecturer Liyan Han (Research Fellow)
Time 13:30 - 15:05, Wed, Fri
How can macroeconomic guidance and regulation be improved, and macroeconomic policy and the market allocation of resources be optimized? How can the capacity for innovation that sustains economic growth and high-quality development be formed? How can the gap between rich and poor be steadily reduced to achieve shared prosperity? "Twelve Lectures on Macroeconomics: The Chinese Context" is an intermediate macroeconomics textbook. Drawing on the practice of building China's socialist market economy as well as Western macroeconomic theory, it systematically and comprehensively constructs the knowledge system and theoretical framework of macroeconomics from the perspective of international comparison. The book condenses the two authors' decades of teaching and research; while introducing macroeconomic theory, it incorporates many development cases from China and other economies, interpreting macroeconomic theory dialectically from the standpoint of practice. It can serve as a macroeconomics textbook for undergraduate, master's and MBA courses, and is also an excellent choice for general readers who want a comprehensive understanding of mainstream international macroeconomic theory and its critical examination and practice in China.
Dr. Han Liyan is a research fellow at the Beijing Yanqi Lake Institute of Applied Mathematics, Lab of Digital Economy. He previously worked as a chief professor of economics at Beihang University for 20 years. He was recognized as a Beijing Renowned Teacher and a Distinguished Fellow in Chinese Quantitative Economics, and receives the Special Government Allowance of the State Council. His doctoral research in the 1990s focused on fuzzy information and knowledge engineering; his current research interests are fintech, foreign exchange rates combined with monetary policy, and green finance.
Mathematical History Course: From a Geometric Perspective 数学史课程:从几何视角
Lecturer Lynn Heller (Research Fellow)
Time 19:20 - 20:55, Wed
In this course we look at the life and work of some of the most influential mathematicians, including David Hilbert, Felix Klein, Emmy Noether and Bernhard Riemann. The aim is twofold.
On the one hand, we would like to introduce the historical figures and convey the flavor of how mathematics was conducted at that time; on the other hand, I would also like to show how (in a possibly modern interpretation) their work continues to be influential today. Along the way we will also encounter some specifics of German culture and in particular the different educational and academic system.
I was born in Wuhan and grew up in the little German town of Göttingen, which was home to an extraordinary number of great mathematicians (and Nobel prize winners), including all the mathematicians discussed in the course. I studied economics at the FU Berlin and mathematics at TU Berlin from 2003-2007 and obtained my PhD from Eberhard Karls University Tübingen in 2012. Thereafter, I stayed in Tübingen as a postdoc till I obtained a junior professorship in 2017 at the Leibniz University Hannover.
Minimal Surfaces 极小曲面
Lecturer Sebastian Heller (Research Fellow)
Prerequisite It is necessary to be familiar with the basic concepts of linear algebra and calculus. In order to be able to follow the course throughout, it is beneficial to have some basic knowledge of differential geometry or manifolds, and to be familiar with some complex analysis. However, it is also possible to make up for this within the course.
The investigation and construction of surfaces with special geometric properties has always been an important subject in differential geometry. Of particular interest are minimal surfaces and constant mean curvature (CMC) surfaces in space forms. Global properties of surfaces were first considered by Hopf, who showed that all CMC spheres are round. This result was generalized by Alexandrov [2] in the 1950s, who showed that the round spheres are the only embedded compact CMC surfaces in $\mathbb R^3$, while there do not exist any compact minimal surfaces in euclidean space or hyperbolic 3-space due to the maximum principle.
In contrast, there are many compact and embedded minimal surfaces in the 3-sphere, the best known examples being the Clifford torus and the Lawson surfaces, in addition to the totally geodesic 2-sphere; a full classification is beyond current knowledge. One reason for the beauty and depth of minimal surface theory is that there are many different methods and tools with which to construct, study, and classify these surfaces. In this course we will first derive basic properties of minimal surfaces in $\mathbb R^3$ and introduce some general techniques, and then move on to CMC surfaces in $\mathbb R^3$ and minimal surfaces in $\mathbb S^3$. We mainly use differential-geometric and complex-analytic methods in this course.
PhD in 2008, Humboldt Universität Berlin, Germany
Habilitation in 2014, Universität Tübingen, Germany
Researcher at the Universities of Heidelberg, Hamburg, Hannover, 2014-2022
Research Fellow at the Beijing Institute of Mathematical Sciences and Applications from September 2022 on
Research interests: minimal surfaces, harmonic maps, Riemann surfaces, Higgs bundles, moduli spaces, visualisation and experimental mathematics
Introduction to Hochschild (co)homology Hochschild(上)同调简介
Lecturer Chuangqiang Hu (Assistant Research Fellow)
Venue Sanya
Prerequisite Homological algebra
This lecture explores Hochschild cohomology as a Gerstenhaber algebra in detail, the notions of smoothness and duality, algebraic deformation theory, infinity structures, and connections to the Hochschild-Kostant-Rosenberg decomposition. Useful homological algebra background is provided as well.
Hu Chuangqiang joined BIMSA in the autumn of 2021. His main research fields include coding theory, function fields and number theory, and singularity theory. In recent years, he has made a series of academic achievements in research on quantum codes, algebraic geometry codes, Drinfeld modules, elliptic singular points, Yau Lie algebras and other topics.
He has published 13 papers in well-known academic journals such as IEEE Transactions on Information Theory, Finite Fields and Their Applications, and Designs, Codes and Cryptography. He has been invited to attend domestic and international academic conferences many times and has given conference talks.
TBA ~ 2022-12-09
Introduction to Homological Algebra 同调代数导论
Lecturer Sergei Ivanov (Research Fellow)
Time 10:40 - 12:15, Tue, Fri
Prerequisite Basic theory of rings and modules, basic group theory, basic category theory
Such concepts as homology and cohomology of algebraic objects, spaces and varieties have become an integral part of modern mathematics. This course is dedicated to introducing this range of ideas from the side of algebra.
Graph algorithm 图算法
Lecturer Hanru Jiang (Assistant Research Fellow)
Prerequisite Discrete math.
In this course, we introduce basic concepts in graph theory and complexity theory, then study graph algorithms with a focus on matching and network flows.
Hanru Jiang received his Ph.D. in computer science and technology from the University of Science and Technology of China in 2019, was an assistant research fellow at the Quantum Computing Research Center of Peng Cheng Laboratory from 2019 to 2020, and joined BIMSA as an assistant research fellow in 2020. His main research interests are programming language theory, formal verification of compilers, and programming language problems in quantum computing. As a main contributor to CASCompCert, a verified separate compilation framework for concurrent programs, he received a Distinguished Paper Award at PLDI 2019, a top conference in programming languages.
Introduction to Exceptional Geometry 例外几何介绍
Lecturer Kotaro Kawai (Associate Research Fellow)
Prerequisite Linear algebra, basics of Riemannian geometry
The classification of Riemannian manifolds with special holonomy contains two "exceptional" cases: G2 and Spin(7). Manifolds with holonomy contained in G2 or Spin(7) are called G2-manifolds or Spin(7)-manifolds, respectively. In this course, I will introduce various topics in G2 and Spin(7) geometry, mainly focusing on the G2 case. We start from the linear algebra in G2 geometry. Then we study topics such as the structure of a G2-manifold and calibrated geometry/gauge theory/mirror symmetry on a G2-manifold.
Kotaro Kawai got a bachelor's degree and a master's degree from the University of Tokyo, and received his Ph.D. from Tohoku University in 2013.
He was an assistant professor at Gakushuin University in Japan before moving to BIMSA this year. Gakushuin University was established as an educational institution for the imperial family and peers, and even today some members of the imperial family attend it. It has had many eminent professors; Kunihiko Kodaira worked at this university. He specializes in differential geometry, focusing on manifolds with exceptional holonomy. These manifolds are considered to be analogues of Calabi-Yau manifolds, and higher-dimensional analogues of gauge theory are expected on them. This topic is also related to physics, and he thinks that this is an exciting research field.
Comparison Theorems in Riemannian Geometry 黎曼几何中的比较定理
Lecturer Pengyu Le (Assistant Research Fellow)
Prerequisite Multivariable Calculus, Ordinary Differential Equations, Basics of Riemannian Geometry (optional)
The course will cover various comparison theorems under different curvature conditions in Riemannian geometry.
Fibre Bundle Theory 纤维丛理论
Lecturer Jingyan Li (Assistant Research Fellow)
Prerequisite Simplicial complex theory and simplicial set theory.
The notion of fibre bundle first arose out of questions posed in the 1930s on the topology and geometry of manifolds. By the year 1950, the definition of fibre bundle had been clearly formulated, the homotopy classification of fibre bundles achieved, and the theory of characteristic classes of fibre bundles developed by several mathematicians: Chern, Pontrjagin, Stiefel, and Whitney. Fibre bundle theory is not only important in topology and differential geometry, but is also widely used in other branches of mathematics and physics. This course mainly introduces some basic concepts of fibre bundles and the application of fibre bundles in homology theory, comparing the geometric and simplicial levels.
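As a reminder of the central definition this course builds on, a fibre bundle can be stated in one line (a standard formulation, not taken from the course announcement):

```latex
% A fibre bundle with fibre F is a continuous surjection
%   \pi : E \to B
% that is locally trivial: every b \in B has a neighbourhood U
% and a homeomorphism \varphi_U compatible with projection,
\varphi_U : \pi^{-1}(U) \xrightarrow{\ \cong\ } U \times F,
\qquad
\mathrm{pr}_U \circ \varphi_U = \pi\big|_{\pi^{-1}(U)} .
% Example: the Moebius band is a bundle over S^1 with fibre [0,1]
% that is locally trivial but globally nontrivial.
```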
Assistant Research Fellow Jingyan Li received a PhD degree from the Department of Mathematics of Hebei Normal University in 2007. Before joining BIMSA in September 2021, she taught in the Department of Mathematics and Physics of Shijiazhuang Railway University and the School of Mathematical Sciences of Hebei Normal University as an associate professor. Her research interests include topological data analysis, simplicial homology and homotopy.
Finance 金融学
Lecturer Zhen Li (Assistant Research Fellow)
Prerequisite Political economy, Macroeconomics, Microeconomics, Accounting, etc.
Facing complicated global economic and financial changes, this course applies modern economic theory to tell the history of financial development in China, integrates the basic principles of finance with the financial practice of China, and rationally combines micro-finance and macro-finance, forming a systematic and complete logical framework and knowledge system. It mainly includes five parts: currency, credit and finance; financial intermediaries and financial markets; monetary equilibrium and macro-policy; the micro-mechanisms of financial operation; and financial development and stability mechanisms. This course is intended for college students who need to understand and master financial knowledge systematically.
Dr. Zhen Li is an assistant research fellow at BIMSA. He is also a visiting research fellow at the Chongyang Institute for Financial Studies, Renmin University of China, the International Monetary Institute, Renmin University of China, and the Alibaba Research Center for Rural Dynamics. He was a postdoctoral fellow from 2020-2022 at the School of Data Science, Fudan University, and an assistant research fellow from 2013-2016 at the Chongyang Institute for Financial Studies, Renmin University of China. Dr. Li received his Ph.D. in Finance, M.A. in Statistics, and B.A. in Finance from Renmin University of China, and his B.E. in Software Engineering from Nanjing University.
He was a visiting scholar from 2019-2020 at the School of Business, George Washington University. His current research focuses on FinTech, commercial banking, financial risk management, and other fields. Dr. Li has published more than 30 articles in important academic journals such as Journal of Financial Research, Statistical Research, and China Economic Review.
Special Topics in Cryptography 密码学专题
Lecturer Bei Liang (Assistant Research Fellow)
Time 18:40 - 20:55, Thu
This class will be on Thursday night, covering advanced topics including Secure Multi-Party Computation, Fully Homomorphic Encryption, etc. It will also bring in speakers, including students and postdocs, to lecture on their current research topics and results, and it encourages students to give presentations on any interesting topics they choose.
Dr. Bei Liang received her Ph.D. in information security from the Institute of Information Engineering, Chinese Academy of Sciences. She was a postdoctoral researcher at Chalmers University of Technology, Sweden. Currently, she is an assistant researcher at BIMSA. Her main research interest is theoretical cryptography. She has published more than 20 papers in international journals and conferences, and won the ISC 2019 Best Paper Award and the ProvSec 2015 Best Student Paper Award. She participated in two National Key Research and Development Programs and led one Beijing Natural Science Foundation project.
Quantitative Risk Management 量化风险管理
Lecturer Qingfu Liu (Professor)
Under the new regulatory rules, and with the aim of managing financial risk effectively, this course builds on classical risk-management modeling methods and, starting from big data, artificial intelligence and blockchain, systematically presents advanced methods and tools for managing financial market risk, credit risk, operational risk and liquidity risk. Topics include, but are not limited to: the new financial regulations and the nature, characteristics and classification of risk; financial regulatory big data and its mining techniques; financial market risk management theory and its measurement methods; credit risk measurement methods and techniques; intelligent risk control based on artificial intelligence; monitoring techniques and market-based management tools for abnormal trading; and risk-management strategies and model optimization. To deepen understanding, the course adds lectures on classic domestic and international cases together with software implementations. The course is generally suitable for senior undergraduates, master's students and doctoral students with some mathematical background.
Qingfu Liu, professor and doctoral supervisor at the School of Economics, Fudan University, was awarded the title of Shanghai Pujiang Scholar. Prof.
Liu obtained a doctorate in management science and engineering from Southeast University, was a postdoctoral fellow at Fudan University, and was a visiting scholar at Stanford University. Prof. Liu is now the executive dean of the Fudan-Stanford Institute for China Financial Technology and Risk Analytics, the academic vice dean of the Fudan-Zhongzhi Institute for Big Data Finance and Investment, and the vice dean of the Shanghai Big Data Joint Innovation Lab. Prof. Liu's research interests mainly include financial derivatives, big data finance, quantitative investment, RegTech, green finance and non-performing asset disposal. He has published more than 80 papers in the Journal of Economics, Journal of International Money and Finance, Journal of Management Sciences in China and other important journals at home and abroad, published three monographs, and presided over more than 20 national and provincial research projects. He is currently an associate editor at Digital Finance and an editor at World Economic Papers.
An Introduction to Ergodic Theory 遍历理论导论
Lecturer Sixu Liu (Assistant Research Fellow)
Time 15:20 - 18:40, Mon
Prerequisite Basic knowledge of measure theory, stochastic processes, functional analysis and topology
We will introduce basic knowledge of ergodic theory and its relationship with dynamical systems. This course includes topics on measure-preserving transformations, entropy and topological entropy.
Sixu Liu received a doctorate from Peking University in 2019, was then a postdoctoral fellow at Tsinghua University, and joined BIMSA as an assistant research fellow in 2022. Main research directions: dynamical systems and ergodic theory, and the design of statistical experiments.
Introduction to Prismatic Cohomology 棱镜上同调导论
Lecturer Yong Suk Moon (Assistant Research Fellow)
Prerequisite Algebraic geometry (background in algebraic number theory will be helpful)
Prismatic cohomology, developed in recent work of Bhatt-Scholze, is a cohomology theory for schemes over p-adic rings. It is considered to be an overarching cohomology theory in p-adic geometry, unifying etale, de Rham, and crystalline cohomology.
Due to wide-ranging applications in p-adic Hodge theory and p-adic Galois representations, it is one of the central topics of active research. In this course, we will start by going over motivational background in p-adic cohomology theories, and then give a rough overview of the main ideas and results in the paper "Prisms and prismatic cohomology" by Bhatt-Scholze. If time permits, we will also briefly discuss how the prismatic theory may reveal a deeper understanding of p-adic Galois representations.
Yong Suk Moon joined BIMSA in fall 2022 as an assistant research fellow. His research area is number theory and arithmetic geometry. More specifically, his current research focuses on p-adic Hodge theory, the Fontaine-Mazur conjecture, and the p-adic Langlands program. He completed his Ph.D. at Harvard University in 2016, and was a Golomb visiting assistant professor at Purdue University (2016-19) and a postdoctoral researcher at the University of Arizona (2019-22).
On Fusion Categories III 融合范畴 III
Lecturer Sebastien Palcoux (Assistant Research Fellow)
Time 15:20 - 16:55, Mon, Tue
Prerequisite Category theory
This is the sequel to the course "On Fusion Categories II" given last semester. It introduces the notion of a fusion category, which can be seen as a representation theory of the (finite) quantum symmetries. The notes and videos of the first and second parts are available at:
Part I: http://www.bimsa.cn/newsinfo/526244.html
Part II: https://www.bimsa.cn/newsinfo/601271.html
Sebastien Palcoux received a Ph.D. in mathematics from the Institut de Mathématiques de Marseille (I2M) in 2010, was a postdoc at the Institute of Mathematical Sciences (IMSc) from 2014 to 2016, and has been an assistant research fellow at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications since 2020. His main research interests are quantum algebra, quantum symmetries, subfactor planar algebras and fusion categories. He has published papers in journals including Advances in Mathematics and Quantum Topology.
Markov chains 马尔可夫链
Lecturer Yuval Peres (Research Fellow)
Prerequisite Undergraduate probability and linear algebra, some measure-theoretic probability: conditional expectation, laws of large numbers. The first chapter of Durrett's graduate text or taking the course "Probability 1" given by Prof.
Hao Wu at Tsinghua will provide ample background. That course can be taken in parallel with this course. The textbook for the course is "Markov Chains and Mixing Times", second edition; a PDF is available from https://www.yuval-peres-books.com/markov-chains-and-mixing-times/ In the last part of the course, we will see new material that is not in that book, and explore open problems and directions of current research.
Background: The modern theory of Markov chain mixing is the result of the convergence, in the 1980s and 1990s, of several threads. For statistical physicists, Markov chains became useful in Monte Carlo simulation; the mixing time determines the running time for simulation. Deep connections were found between rapid mixing and spatial properties of spin systems. In theoretical computer science, Markov chains play a key role in sampling and approximate counting algorithms. At the same time, mathematicians were intensively studying random walks on groups. Both spectral methods and probabilistic techniques, such as coupling, played important roles. Ingenious constructions of expander graphs (on which random walks mix especially fast) were found. The connection between eigenvalues and expansion properties was first discovered in differential geometry, but then became central to the study of Markov chain mixing.
Yuval Peres obtained his PhD in 1990 from the Hebrew University, Jerusalem. He was a postdoctoral fellow at Stanford and Yale, and was then a Professor of Mathematics and Statistics in Jerusalem and in Berkeley. Later, he was a Principal Researcher at Microsoft. Yuval has published more than 350 papers in most areas of probability theory, including random walks, Brownian motion, percolation, and random graphs. He has co-authored books on Markov chains, probability on graphs, game theory and Brownian motion, which can be found at https://www.yuval-peres-books.com/ . His presentations are available at https://yuval-peres-presentations.com/ Dr.
Peres is a recipient of the Rollo Davidson Prize and the Loève Prize. He has mentored 21 PhD students, including Elchanan Mossel (MIT, AMS fellow), Jian Ding (PKU, ICCM gold medal and Rollo Davidson Prize), and Balint Virag and Gabor Pete (Rollo Davidson Prize). Dr. Peres was an invited speaker at the 2002 International Congress of Mathematicians in Beijing, at the 2008 European Congress of Mathematics, and at the 2017 Mathematical Congress of the Americas. In 2016, he was elected to the US National Academy of Sciences.
Geometric Numerical Methods for Dynamical Systems II 动力系统几何数值算法 II
Lecturer Zaijiu Shang (Professor)
This course is a continuation of last semester and will cover the following topics: 1) Normal forms of Hamiltonian systems and bifurcation theory; 2) Averaging methods of classical perturbation theory; 3) KAM stability of Hamiltonian systems; 4) Effective stability of nearly integrable systems; 5) Numerical stability of symplectic geometric methods.
Zaijiu Shang is a Professor at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, and a teacher at the University of Chinese Academy of Sciences (2015-). He was the deputy director (2003-2011) and the director (2012-2016) of the Institute of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He has served as a member of the editorial boards of Acta Math. Appl. Sinica (2007-), Acta Math. Sinica (2009-), Science China: Mathematics (2013-), and Applied Mathematics (HUST 2013-). He works in the fields of dynamical systems and geometric numerical methods. He won the second prize in the Science and Technology Progress Award of the State Education Commission (1993).
He was one of the core members of the project "Symplectic Geometric Algorithms of Hamiltonian Systems", which won the first prize of the National Natural Science Awards (Kang Feng et al., 1997); his representative achievements include the stability theory of symplectic algorithms and volume-preserving algorithms for source-free systems.
Derived Algebraic/Differential Geometry 导出代数和微分几何
Lecturer Artan Sheshmani (Research Fellow)
Venue Online
Prerequisite Commutative Algebra (Atiyah-Macdonald or Rotman), Algebraic Geometry (Hartshorne or Grothendieck's EGA/SGA)
Derived algebraic geometry is a machinery regarded as an extension of algebraic geometry, whose goal is to study exotic geometric settings and situations that classical algebraic geometry is not able to study rigorously. Take for instance the intersection of two subvarieties X, Y in a fixed ambient smooth algebraic variety Z. We have the notion of a "nice intersection" (or generic intersection) of X and Y in Z, which is equivalent to transverse intersection of X and Y. In this case the span generated by the tangent spaces of X and Y is equal to the tangent space of Z, and their locus of intersection X∩Y will be of expected dimension. Now take instead a "bad intersection" of X and Y, that is, a non-generic or non-transverse intersection of X and Y in Z. The latter situation may occur if X and Y intersect over points or loci with higher multiplicities, or when their locus of intersection is not of expected dimension. Such situations have certainly been addressed in algebraic geometry using cohomology theory: one realizes the locus of intersection, X∩Y, as a cohomology class in the ambient cohomology theory of Z (for instance an element in the de Rham cohomology of Z, or in the complex cobordism ring of Z, or an element of the K-theory in the intersection ring of Z) and studies the intersection of X and Y cohomologically.
The drawback of this approach is that X∩Y is realized only as a cohomology class and not as a geometric object any more. Derived algebraic geometry allows one to construct a geometric object associated to the non-generic locus of intersection of X and Y, called the "derived scheme". It is, roughly speaking, a homotopical perturbation of the naive locus of intersection of X and Y, and contains the data of higher-multiplicity components of X∩Y or components with defects of expected dimension. The machinery of homotopy theory in derived algebraic geometry enables one to identify the points in the derived intersection of X and Y as points which lie in X and Y respectively, together with certain continuous homotopy maps between them (as opposed to the generic intersection of X and Y, where points in X∩Y are given by points which lie in both X and Y simultaneously). Similarly, taking "bad" quotients of algebraic schemes/varieties by non-proper or non-free actions is yet another example which can be modeled geometrically and rigorously by derived algebraic geometry. In the usual setting, the points in the quotient of a variety X by the action of a free proper group G are the ones that lie in the orbit space of elements of G. However, in instances where the group G acts non-freely on X, derived algebraic geometry enables one to realize the points in the "bad quotient" as points which lie in the orbit of elements of G up to homotopy; that is, two points a and b lie in the orbit of an element of G if they are related to each other by a homotopy path, or a path with homotopical structure, in that orbit. The course sets the foundations of the theory of derived schemes, and follows by discussing derived moduli spaces, especially the derived structure of the moduli spaces of coherent sheaves; some applications of derived geometry in enumerative geometry (especially Donaldson-Thomas theory) of Calabi-Yau 3-folds and 4-folds will be discussed at the end.
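In standard notation (a common formulation, not taken verbatim from the course announcement), the two intersection notions described above can be summarized as follows: transversality is a condition on tangent spaces, while the derived intersection retains the higher Tor terms that the naive fiber product forgets:

```latex
% Transverse (generic) intersection at a point p of X \cap Y inside Z:
T_p X + T_p Y = T_p Z .
% The derived intersection replaces the naive tensor product of
% structure sheaves by its left-derived functor:
\mathcal{O}_{X \cap^{h} Y}
  = \mathcal{O}_X \otimes^{\mathbf{L}}_{\mathcal{O}_Z} \mathcal{O}_Y ,
\qquad
\pi_i\bigl(\mathcal{O}_{X \cap^{h} Y}\bigr)
  = \mathcal{T}\!or_i^{\mathcal{O}_Z}\bigl(\mathcal{O}_X , \mathcal{O}_Y\bigr) .
% For a transverse intersection the higher Tor sheaves vanish, and the
% derived intersection reduces to the classical scheme X \cap Y.
```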
I am a professor of pure mathematics, specializing in algebraic geometry, differential geometry and the mathematics of string theory. I am a Full Research Fellow (equivalent to Full Professor in the US and Europe) at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications in Beijing, as well as a senior member at Harvard University CMSA, and a visiting professor at the Institute for the Mathematical Sciences of the Americas at the University of Miami. During the past 5 years I have been a senior personnel in the Simons Collaboration Program on Homological Mirror Symmetry at the Harvard University Center for Mathematical Sciences and Applications (CMSA), as well as the Harvard Physics department (2020-2021), and an Associate Professor of Mathematics at the Institut for Mathematik and the Center for Quantum Geometry of Moduli Spaces at Aarhus University in Denmark. My work is mainly focused on Gromov-Witten theory, Donaldson-Thomas theory, Calabi-Yau geometries, and mathematical aspects of string theory. I study the geometry of moduli spaces of sheaves and curves on Calabi-Yau spaces, some of which arise in the study of the mathematics of string theory. In my research I have worked on understanding dualities between the geometry of such moduli spaces over complex varieties of dimension 2, 3 and 4, and currently I am working on extensions of these projects from the derived geometry and geometric representation theory points of view. In joint work with Shing-Tung Yau (BIMSA, YMSC, Tsinghua, Harvard Math, Harvard CMSA, and Harvard Physics departments), Cody Long (Harvard Physics), and Cumrun Vafa (Harvard Math and Physics departments) I worked on the geometry of moduli spaces of sheaves with non-holomorphic support and their associated non-BPS (non-holomorphic) counting invariants. In 2019 I received the IRFD "Research Leader" grant (approx. $1M) for my project "Embedded surfaces, dualities and quantum number theory". The project has additionally been co-financed by Harvard University CMSA (approx. $400K in total).
Detail of IRFD "Research Leader" grant ($1M). The grant has recently been extended until 2027! First Steps of Statistical Learning 统计学习初步 Lecturer Congwei Song (Assistant Research Fellow) Prerequisite Linear algebra, real analysis, probability theory, statistics Statistical learning is a mainstream machine learning paradigm which holds that all machine learning models derive from statistical models. The course introduces the fundamentals of statistical learning, its common models, and applications, accompanied by algorithm implementations, project demonstrations, and paper readings. It will also briefly cover the use of programming languages and packages relevant to statistical learning. We hope everyone gains insight and understanding in the process. Congwei Song received his master's degree in applied mathematics from the Institute of Science at Zhejiang University of Technology, and his Ph.D. in basic mathematics from the Department of Mathematics, Zhejiang University. He worked in Zhijiang College of Zhejiang University of Technology as an assistant from 2014 to 2021; from 2021 on, he has worked at BIMSA as an assistant researcher. His research interests include machine learning, as well as wavelet analysis and harmonic analysis. Graph Theory 图论 Lecturer Benjamin Sudakov (Professor) The goal of this course is to give students an overview of the most fundamental concepts and results in modern graph theory. The topics which we plan to cover include: Basic notions: graphs, graph isomorphism, adjacency matrix, paths, cycles, connectivity; Trees, spanning trees, Cayley's formula; Vertex and edge connectivity, 2-connectivity, Mader's theorem, Menger's theorem; Eulerian graphs, Hamilton cycles, Dirac's theorem; Matchings, Hall's theorem, Kőnig's theorem, Tutte's condition; Planar graphs, Euler's formula, basic non-planar graphs, platonic solids; Graph colourings, greedy colourings, Brooks' theorem, 5-colourings of planar graphs, Gallai-Roy theorem; Large girth and large chromatic number, edge colourings, Vizing's theorem, list colourings; Matrix-tree theorem, Cauchy-Binet formula; Hamiltonicity: Chvátal-Erdős theorem, Pósa's lemma, tournaments; Ramsey theory; Turán's theorem, Kővári-Sós-Turán theorem. Benny Sudakov received his PhD from Tel Aviv University in 1999. He has held appointments at Princeton University, the Institute for Advanced Study, and the University of California, Los Angeles.
Sudakov is currently a professor of mathematics at ETH Zurich. He is the recipient of a Sloan Fellowship, an NSF CAREER Award and a Humboldt Research Award, is a Fellow of the American Mathematical Society, and was an invited speaker at the 2010 International Congress of Mathematicians. He has authored more than 300 scientific publications and is on the editorial boards of 14 research journals. His main scientific interests are combinatorics and its applications to other areas of mathematics and computer science. Interacting Particle Systems and Their Large Scale Behavior 相互作用粒子系统及其大尺度行为 Lecturer Funaki Tadahisa (Research Fellow) It is desirable that the audience is familiar with modern probability theory and some tools in stochastic analysis such as martingales and stochastic differential equations, but I will try to briefly explain these in my course. For example, Parts I and II of my course given at the Yau Mathematical Sciences Center from March to June 2022 serve this purpose; see the slides of Lect-1 to Lect-20 posted on the web page of YMSC. I explain in some detail our recent results on the derivation of interface motion, such as motion by mean curvature or free boundary problems, from particle systems. The core of these results was presented in my talk at vICM2022 [5], [6]. Funaki Tadahisa was a professor at the University of Tokyo and then at Waseda University in Japan. His research subject is probability theory, mostly related to statistical physics, specifically interacting systems and stochastic PDEs, whose importance has grown as several Fields Medals have been awarded in this area. Logic and Computation I 逻辑和计算 I Lecturer Kazuyuki Tanaka (Research Fellow) Completion of an undergraduate course on logic, set theory or automata theory is recommended, but all interested students are welcome. This is an advanced undergraduate and graduate-level course in mathematical logic and the theory of computation.
Topics to be presented in the first semester include: computable functions, undecidability, propositional logic, NP-completeness, first-order logic, Gödel's completeness theorem, Ehrenfeucht-Fraïssé games, Presburger arithmetic. In the second semester, we will move on to Gödel's incompleteness theorems, second-order logic, infinite automata, determinacy of infinite games, etc. Kazuyuki Tanaka received his Ph.D. from U.C. Berkeley. Before joining BIMSA in 2022, he taught at the Tokyo Institute of Technology and Tohoku University, and supervised fifteen Ph.D. students. He is best known for his works on second-order arithmetic and reverse mathematics, e.g., Tanaka's embedding theorem for WKL0 and the Tanaka formulas for conservation results. For more details: https://sendailogic.com/tanaka/ Financial Engineering and Derivatives Markets 金融工程与衍生品市场 Lecturer Ke Tang (Professor) Prerequisite Probability theory With the development of financial markets, traditional financial products can no longer meet ever-increasing financial demands, and derivatives markets (e.g., futures and options) have been developing very rapidly in China. Mathematical tools play a major role in the design of new financial products and the development of new market infrastructure, and derivatives-based financial engineering has become an important branch of finance. This course introduces the core mathematical foundations of financial engineering and systematically summarizes the financial theories and models of derivatives markets. The content includes basic derivatives theory such as arbitrage, hedging and the Black-Scholes model, as well as more advanced models such as the Poisson market model, stochastic volatility models, and the basic theory of stochastic processes. The course aims to give students a systematic understanding of the basic mathematical tools used in financial engineering and how they function in derivatives markets. It is suitable for senior undergraduates, master's students and doctoral students who have studied probability theory. Ke Tang is a professor and director of the Institute of Economics, School of Social Sciences, Tsinghua University. His main research covers commodity markets (including digital assets), financial technology and the digital economy. He has published many papers in top journals such as the Journal of Finance, the Review of Financial Studies and Management Science, and currently serves as an executive editor of Quantitative Finance and an associate editor of the Journal of Commodity Markets. His research has been cited in reports by the CFTC, the United Nations Commodity Report, etc. He is on the Elsevier list of highly cited scholars in China for 2020 and 2021.
Topics on KZ equations and KZB equations KZ方程和KZB方程选题 Lecturer Xinxing Tang (Assistant Research Fellow) Prerequisite Representation theory of simple Lie algebras, basic knowledge of differential geometry and algebraic topology We continue the related topics on KZ equations from the last semester, including the various invariants, the Gaudin model and the Bethe ansatz. Then we switch to the genus-1 case and discuss the corresponding results on KZB equations. We try to collect and discuss the well-known results and recent developments. 2009-2013: B.S. in basic mathematics, School of Mathematics, Sichuan University. 2013-2018: Ph.D., Beijing International Center for Mathematical Research, Peking University. 2018-2021: Postdoc, Yau Mathematical Sciences Center, Tsinghua University. 2021-: Assistant Research Fellow, BIMSA. Research interests: 1. Integrable systems, especially the infinite-dimensional integrable systems appearing in GW theory and LG theory, with an interest in understanding the algebraic structures of their infinitely many symmetries and related computations. 2. Other interests: mixed Hodge structures, quantum groups and KZ equations, W-algebras and W-symmetry, augmentation representations. Post-Quantum Cryptography II 后量子密码 II Lecturer Chengdong Tao (Assistant Research Fellow) Prerequisite Modern Cryptography, Coding Theory, C Programming Language The most widely used cryptosystems nowadays are number-theory-based ones such as RSA, DSA and ECC. However, due to Peter Shor's algorithm, such cryptosystems would become insecure if a large quantum computer were built. We need to develop a new family of cryptosystems that can resist quantum computer attacks. Researchers usually use the term Post-Quantum Cryptography (PQC) to denote this new family. In this course, we will talk about Post-Quantum Cryptography. Dr. Chengdong Tao received his Ph.D. from the South China University of Technology in 2015. He then became an engineer at Shenzhen Huawei Technology Co., Ltd., and Executive Director and General Manager at Guangzhou Liangjian Technology Co., Ltd. Since 2020, he has been an Assistant Research Fellow of the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. His research interests include computational algebra, post-quantum cryptography and fast implementation.
Constructions in Hyperkähler Geometry 超Kaehler几何的构造 Lecturer Arnav Tripathy (Assistant Research Fellow) Prerequisite Differential geometry, basic complex analysis. Hyperkähler manifolds form an extremely symmetric class of manifolds of great importance throughout mathematics and physics. I'll introduce their basic properties before indicating many examples of known constructions. Finally, I hope to detail Gaiotto-Moore-Neitzke's conjectural description of Hitchin moduli spaces. Dr. Tripathy is a researcher interested in the geometry of string theories and supersymmetric field theories. Quantum Groups 量子群 Lecturer Bart Vlaar (Associate Research Fellow) Dr. Bart Vlaar joined BIMSA in September 2022 as an Associate Research Fellow. His research interests are in algebra and representation theory and their applications in mathematical physics. He obtained a PhD in Mathematics from the University of Glasgow. Previously, he held postdoctoral positions in Amsterdam, Nottingham, York and at Heriot-Watt University. Before coming to BIMSA he visited the Max Planck Institute of Mathematics in Bonn. Introduction to Bordism 配边理论介绍 Lecturer Zheyan Wan (Assistant Research Fellow) Prerequisite Basic algebraic topology The first half covers some classical topics in bordism, including spectra and the Pontryagin-Thom isomorphism. The second half covers the computation of bordism groups using the Adams spectral sequence.
Zheyan Wan received his bachelor's and doctoral degrees from the University of Science and Technology of China. He was previously a postdoc at the Yau Mathematical Sciences Center, Tsinghua University, and is now an assistant research fellow at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. His research interest is the study of theoretical physics (anomalies) using topological methods (cobordism). Hopf Algebras and Tensor Categories Hopf代数与张量范畴 Lecturer Yilong Wang (Assistant Research Fellow) Prerequisite Graduate-level algebra. In this course, we give a brief introduction to Hopf algebras and their representation categories. We will then generalize such categories and study the abstract theory of tensor categories. Yilong Wang graduated from The Ohio State University in 2018. After working at Louisiana State University as a postdoc researcher, he joined BIMSA as an assistant research fellow in 2021. His research interests include modular tensor categories and topological quantum field theories. Introduction to Quantum Information and Computation II 量子信息和量子计算介绍--下 Lecturer Yu Wang (Assistant Research Fellow) Prerequisite Advanced algebra, complex analysis, functional analysis, probability theory, quantum mechanics In this semester, we will introduce how to represent and transmit information with quantum states. We will also introduce how to characterize processes in open quantum systems with quantum channels. Several special quantum noise channels will be discussed, and some error correction methods will be given. Then we will discuss measures of the information similarity of two quantum states. We will also introduce the differences between quantum and classical entropy, and related studies. Finally, we will introduce some research directions in quantum information. Yu Wang received his PhD degree in computer software and theory from the Academy of Mathematics and Systems Science, Chinese Academy of Sciences in 2019. After graduation, he worked at Pengcheng Laboratory in Shenzhen. In December 2020, he joined the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. His main research area is quantum information and quantum computation.
Specifically, his current research focuses on quantum state tomography, in order to optimize the measurement and computation resources needed to read out unknown quantum states. He also studies the design of new quantum communication protocols based on different quantum walk models. Topological Approaches for Data Science I 数据科学中的拓扑方法 I Lecturer Jie Wu (Research Fellow) Prerequisite Algebraic Topology Topological data analysis is a young research area that explores topological approaches in data science, where persistent homology has proved to be an effective mathematical tool in data analytics with various successful applications. This course will discuss the mathematical foundations of (higher) topological structures on graphs, aiming to explore new topological approaches beyond classical persistent homology. Jie Wu received a Ph.D. degree in Mathematics from the University of Rochester and worked as a postdoc at the Mathematical Sciences Research Institute (MSRI), University of California, Berkeley. He was formerly a tenured professor at the Department of Mathematics, National University of Singapore. In December 2021, he joined the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications (BIMSA). His research interests are algebraic topology and applied topology. His main achievements in algebraic topology are establishing the fundamental relations between homotopy groups and the theory of braids, and between loop spaces and the modular representation theory of symmetric groups. In applied topology, he has obtained various important results on topological approaches to data science. He has published more than 90 academic papers in top mathematics journals such as the Journal of the American Mathematical Society and Advances in Mathematics. In 2007, he won the Singapore National Science Award. In 2014, his project was funded by the Overseas Joint Fund of the National Natural Science Foundation (Jieqing B).
Quantum Fourier Analysis II 量子傅里叶分析 II Lecturer Jinsong Wu (Research Fellow) Prerequisite Functional Analysis We will continue discussing quantum Fourier analysis on many finite-dimensional quantum symmetries, such as fusion rings, fusion categories, subfactors and planar algebras, and their connections. Many applications are also involved. Jinsong Wu is a research fellow at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications. He obtained his Ph.D. from the Academy of Mathematics and Systems Science, CAS. He is interested in functional analysis, operator algebras and their applications in quantum information. Together with his collaborators, he initiated quantum Fourier analysis for quantum symmetries; the subject provides mathematical tools for quantum information. He has been PI for grants from the NSFC, and his work has been published in top journals such as PNAS, Adv. Math., Comm. Math. Phys., J. Funct. Anal. and Sci. China Math. Algorithms in Natural Language Processing 自然语言处理的算法分析 Lecturer Haihua Xie (Assistant Research Fellow) Prerequisite Computer Science, Machine Learning, Python Natural Language Processing (NLP) is an important research area in Artificial Intelligence. NLP mainly studies how to use computer technology to process linguistic texts. The specific research problems in NLP include recognition, classification, extraction, transformation and generation of lexical, syntactic, semantic and pragmatic information. This course will introduce the basic concepts and methods in NLP, as well as some classical algorithms and models. Dr. Haihua Xie received a Ph.D. in Computer Science from Iowa State University in 2015. Before joining BIMSA in Oct. 2021, Dr. Xie worked in the State Key Lab of Digital Publishing Technology for many years. His research interests include Natural Language Processing and Knowledge Service. He has published more than 20 papers and obtained 5 invention patents. In 2018, Dr.
Xie was selected into the 13th batch of overseas high-level talents in Beijing and was honored as a "Beijing Distinguished Expert". Statistical Theory 统计理论 Lecturer (female) Fan Yang (Research Fellow) Topics in Random Matrix Theory 随机矩阵理论选题 Lecturer (male) Fan Yang (Associate Research Fellow) Online Tencent: 615 0642 7295 PW: Fan Yang is an Associate Professor of YMSC at Tsinghua University and an Associate Research Fellow at BIMSA. Prior to joining YMSC and BIMSA, he was a postdoctoral researcher with the Department of Statistics and Data Science at the University of Pennsylvania from 2019 to 2022. He received his Ph.D. in mathematics from the University of California, Los Angeles in 2019, his Ph.D. in physics from the Chinese University of Hong Kong in 2014, and his bachelor's degree from Tsinghua University in 2009. His research interests include probability and statistics, with a focus on random matrix theory and its applications to mathematical physics, high-dimensional statistics, and machine learning. He has published multiple papers in leading journals in mathematics and statistics, including Communications in Mathematical Physics, Probability Theory and Related Fields, Annals of Statistics, and IEEE Transactions on Information Theory. Steenrod Operations: From Classical to Motivic 斯廷罗德运算:从经典到原相 Lecturer Nanjun Yang (Assistant Research Fellow) Prerequisite Algebraic topology and algebraic geometry Steenrod operations are natural transformations of cohomology groups of spaces that are compatible with suspensions; they play an important role especially in homotopy theory, for example in the Hopf invariant one problem and the Adams spectral sequence. In this course, we introduce the Steenrod operations for both singular and motivic cohomology, focusing on constructions and basic properties. In particular, we prove the Adem relations and give the Hopf algebra (algebroid) structure of the Steenrod algebra and its dual.
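For orientation, the Adem relations mentioned above can be stated explicitly; the following is the standard mod-2 form (a well-known fact supplied here as illustration, not quoted from the course description), which rewrites any composite with a < 2b in terms of admissible monomials:

```latex
% Adem relations in the mod-2 Steenrod algebra: for 0 < a < 2b,
Sq^{a} Sq^{b} \;=\; \sum_{c=0}^{\lfloor a/2 \rfloor}
  \binom{b-c-1}{a-2c}\, Sq^{a+b-c} Sq^{c},
```

with the binomial coefficients taken modulo 2.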
Nanjun Yang received his doctoral and master's degrees from the University of Grenoble Alpes, advised by Jean Fasel, and his bachelor's degree from Beihang University. Currently he is an assistant researcher at BIMSA. His research interests are motivic cohomology and Chow-Witt rings. He proposed the theory of split Milnor-Witt motives, which applies to the computation of the Chow-Witt rings of fiber bundles. The corresponding results have been published in journals such as Camb. J. Math. and Doc. Math. The Application of Machine Learning Methods to the Solution of Partial Differential Equations II 求解微分方程的机器学习方法 II Lecturer Xiaoming Zhang (Research Fellow) Prerequisite Basic knowledge of numerical methods for partial differential equations and machine learning methods This course reviews recent publications on using machine learning methods to solve partial differential equations, such as Physics-Informed Neural Networks (PINNs). The course will cover material on forward methods, inverse methods, reduced-order modeling, and the assimilation of observational data into scientific principles. Dr. Zhang Xiaoming, a native of Zhejiang province, is a national specially-appointed expert. He received his bachelor's, master's and doctoral degrees from Zhejiang University, Peking University, and the Massachusetts Institute of Technology. At present, he is an industrial and application researcher at the Beijing Institute of Mathematical Sciences and Applications and the head of the artificial intelligence and big data team. Before returning to China, Dr. Zhang served as a technology executive at several well-known American enterprises. After returning to China, he served as director of the Textile Industry Big Data Center of Zhejiang China Light Textile City Group Co., Ltd., researcher at the Data Science Research Institute of Tsinghua University, visiting professor at East China Normal University, and part-time professor at Zhejiang University of Technology.
As a Principal Investigator, he has won and presided over 20 research projects funded and awarded by governments and major corporations. He has published more than 30 papers in international professional journals, two of which appeared in Nature family journals. He led his group to first place in the MicroArray Quality Control (MAQC) competition hosted by the US Food and Drug Administration, twice to first place in the Alibaba Qianlima Industrial Big Data Competition, and to second and third place in the First and Second China Industrial Internet Competitions, respectively. Dr. Zhang led the development of the digital and intelligent service platform "Printing and Dyeing Brain", and is known as the "godfather of intelligent printing and dyeing" in the industry. Dr. Zhang has long been engaged in the research, development and application of artificial intelligence methods for prediction and for resource optimization and allocation. At present, his work focuses on the research and development of digital twins and process twins in the industrial field, which are used to optimize process flows, help enterprises improve yield and production efficiency, reduce energy consumption and pollution, improve product grade, and increase their core competitiveness. Computational Discrete Global Geometric Structures 可计算离散整体几何结构 Lecturer Hui Zhao (Assistant Research Fellow) Level Graduate & Undergraduate Prerequisite Differential Geometry, Linear Algebra, Topology This course introduces the algorithms and philosophy of Computational Discrete Global Geometric Structures.
There are six components in this course: 1) the mathematical theory of global geometric structures on smooth manifolds; 2) their discrete counterparts; 3) the algorithms to compute them; 4) the programming and coding techniques for implementing the algorithms; 5) the visualization techniques to demonstrate these geometric structures; 6) besides computer graphics, a possible discussion of their potential applications in other interdisciplinary fields. Computational discrete global geometric structures (CDGGS) was pioneered by Prof. Shing-Tung Yau and Prof. Xianfeng Gu with the classic Gu-Yau algorithm for computing discrete harmonic one-forms around 2000. After twenty years of research by mathematicians and computer scientists, CDGGS can now handle many other discrete global geometric structures, such as holomorphic one-forms, foliations, holomorphic quadratic differentials and conformal structures, and plays an important role in applications to geometry modeling and meshing. Hui Zhao is an assistant research fellow at the Yanqi Lake Beijing Institute of Mathematical Sciences and Applications; he has been a visiting scholar at Harvard University and at YMSC, Tsinghua University. His research focuses on the theory and algorithms of computational conformal geometry, computational discrete global geometric structures, computer graphics and geometry modeling, and their applications in medical imaging, materials, CAD, virtual reality, the Metaverse, and so on. He has published five computer graphics books and dozens of papers in related journals.
TECHNICAL ADVANCE Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier Jocelyn Barbosa1,2, Kyubum Lee1, Sunwon Lee1, Bilal Lodhi1, Jae-Gu Cho4, Woo-Keun Seo3 & Jaewoo Kang1 Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be very devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians to begin the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problems of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, FP type classification, and facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach.
Experiments show that the proposed method is efficient. Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problems of facial palsy type and degree of severity. Combining iris segmentation and key point-based methods has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution, as it describes the changes in iris exposure while performing certain facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region. Facial nerve palsy is a loss of voluntary muscle movement on one side of the human face. It is frequently encountered in clinical practice and can be classified into two categories: peripheral and central facial palsy. Peripheral facial palsy is the result of a nerve dysfunction in the pons of the brainstem, in which the upper, middle and lower facial muscles on one side are affected, while central facial palsy is the result of nerve function disturbances in the cortical areas, in which the lower half of one side of the face is affected but the forehead and eyes are spared, unlike in peripheral FP (Fig. 1) [1, 2]. a Right-sided central palsy. b Right-sided peripheral palsy Individuals afflicted with facial paralysis (FP) suffer from an inability to mimic facial expressions. This symptom creates not only dysfunction in facial expression but also difficulties in communication. It often causes patients to become introverted and eventually suffer from social and psychological distress, which can be even more severe than the physical disability [3].
This scenario has led to greater interest among researchers and clinicians in this field and, consequently, to the development of facial function grading and methods for monitoring the effect of medical, rehabilitation or surgical treatment. A considerable body of work has been developed to assess facial paralysis. Some of the latest and most widely used subjective methods are the Nottingham system [4], the Toronto facial grading system (TFGS) [5, 6], the linear measurement index (LMI) [7], House-Brackmann (H-B) [8] and the Sunnybrook grading system [9]. However, traditional grading systems are highly dependent on the clinician's subjective observation and judgment and thus suffer from the inherent drawback of being prone to intra- and inter-rater variability [4, 6, 10, 11]. Moreover, these methods have issues in integration, feasibility, accuracy and reliability, and in general are not commonly employed in practice [9]. Hence, an objective grading system becomes invaluable for physicians to begin the rehabilitation process. Such a grading system can be very helpful in discriminating between peripheral and central facial palsy as well as in predicting the degree of severity. Moreover, it may assist physicians in effectively monitoring the progress of the patient in subsequent sessions. In response to the need for an objective grading system, many computer-aided analysis systems have been created to measure the dysfunction of one part of the face and the level of severity, but none of them tackles the facial paralysis type as a classification problem. Classifying each case of facial nerve palsy into central or peripheral plays a significant role beyond assessing the degree of the FP: it assists physicians in deciding on the most appropriate treatment scheme. Furthermore, most of the image processing methods used are labor-intensive or suffer from sensitivity to extrinsic facial asymmetry caused by orientation, illumination and shadows.
Thus, creating a clinically usable and reliable method is challenging and still in progress [1]. We propose a novel method that enables quantitative assessment of facial paralysis and tackles the classification problems of facial paralysis type and degree of severity. The maximum static response assay (MSRA) [12] assesses facial function by measuring the displacement of standard reference points of the face. It compares facial photographs taken at rest and at maximum contraction. The method is labor-intensive and time-consuming [13]. Watchman et al. [14, 15] measured facial paralysis by examining facial asymmetry on static images. Their approach is sensitive to extrinsic facial asymmetry caused by orientation, illumination and shadows [16]. Wang et al. [17] used salient regions and an eigen-based method to measure the asymmetry between the two sides of the face and compare the expression variations between the abnormal and normal sides. An SVM is employed to produce the degree of paralysis. Anguraj et al. [18] utilize the Canny edge detection technique to evaluate the level of facial palsy clinical symptoms (i.e. normal, mild or severe). Nevertheless, Canny edge detection is very vulnerable to noise disturbances. Input facial images may contain noise such as wrinkles or an excessive mustache, which may result in many false detected edges. On the other hand, Dong et al. [19] utilize salient point detection and the SUSAN edge detection algorithm as the basis for quantitative assessment of a patient's facial nerve palsy. They apply k-means clustering to determine 14 key points. However, this falls short when the technique is applied to elderly patients, in whom exact points can be difficult to find [20]. Most of these works are solely based on finding salient points on the human face with the use of standard edge detection tools (e.g. Canny, Sobel, SUSAN) for image segmentation.
Canny edge detection may produce inaccurate or poorly connected edge points, since the algorithm compares adjacent pixels along the gradient direction to determine whether the current pixel is a local maximum; this may in turn result in improper generation of key points. Another method [20] was proposed based on the comparison of multiple regions of the human face: the two sides of the face are compared and four ratios are calculated to represent the degree of paralysis. Nevertheless, this method suffers from the influence of uneven illumination. A technique that generates closed contours separating the outer boundaries of an object from the background, such as the LAC model for feature extraction, can substantially reduce these drawbacks. In this study, we make three main contributions. First, we present a novel approach for efficient quantitative assessment of facial paralysis classification and grading. Second, we provide an efficient way of detecting the landmark points of the human face through our improved LAC-based key point detection. Third, we study in depth the effect of combining the iris behavior and the facial key point-based symmetry features on facial paralysis classification. In our proposed system, we leverage the localized active contour (LAC) model [21] to extract facial movement features. To improve the segmentation performance of LAC, we present a method that automatically selects appropriate parameters of the initial evolving curve for each facial feature, thereby improving key point detection. We also provide an optimized Daugman's algorithm for efficient iris segmentation. To the best of our knowledge, our work is the first to address facial palsy classification and grading using the combination of iris segmentation and key point detection. Proposed facial paralysis assessment: an overview Our work evaluates facial paralysis by identifying asymmetry between the two sides of the human face.
We capture facial images (i.e. still photos) of patients with a front-view face and reasonable illumination, so that each side of the face receives a roughly similar amount of lighting. The patient is requested to perform an 'at rest' face position and four voluntary facial movements: raising the eyebrows, closing the eyes gently, screwing up the nose, and showing the teeth or smiling. The photo-taking procedure starts with the patient at rest, followed by the four movements. A general overview of the proposed system is presented in Fig. 2. Facial images of a patient, taken while performing these facial expressions, are stored in the image database. Framework of the proposed facial paralysis assessment The process starts by taking the raw image from the database, followed by face dimension alignment. At this step, we find the face region as our region of interest by running a face detection algorithm; we keep only the face region and remove all other parts of the captured images. Preprocessing of the images for contrast enhancement and noise removal is then performed. First, the images or region of interest (ROI) are converted to grayscale. Median filtering and histogram equalization are then applied to remove noise and obtain satisfactory contrast, respectively. Further image enhancement is achieved by applying the log transformation, which expands the values of dark pixels and compresses the values of bright pixels, essential for subsequent processes. Figure 3 shows an illustrative example of these pre-processing steps. Pre-processing results. a original ROI, (b)–(c) median filter and histogram equalization results, respectively, (d) log transformation result with c = 0.1 This is followed by facial feature detection (e.g. eyes, nose, and mouth) and feature extraction. Features are extracted from the detected iris region and the key points.
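As an illustration of the pre-processing stage, the histogram equalization and log transformation steps can be sketched in numpy as follows (a minimal sketch; the function names and normalization details are our own, not the original implementation):

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit grayscale image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def log_transform(img, c=0.1):
    """s = c * log(1 + r): expands dark pixel values, compresses bright ones."""
    r = img.astype(np.float64) / 255.0
    s = c * np.log1p(r)
    return np.round(255 * s / s.max()).astype(np.uint8)
```

On a pure gradient image both operations leave the extremes at 0 and 255; on real ROIs the equalization spreads the mid-tones while the log transform lifts shadow detail.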
We then calculate the differences between the two sides of the face. The symmetry of facial movement is measured by the ratio of the iris exposure as well as the vertical distances between key points on the two sides of the face. The ratios generated are stored in a feature vector, which is used to train classifiers. Six classifiers (i.e. using rule-based and regularized logistic regression) were trained: one for healthy/unhealthy discrimination, one for facial palsy classification and another four for facial grading based on the House-Brackmann (H-B) scale. Feature extraction with optimized Daugman's integro-differential operator and localized active contour Face region detection Captured facial images often include more than the face region, such as the shoulders, neck, ears, hair or even background. Since we are only interested in the face region, our objective is to keep this region and remove the unnecessary parts of the captured images. To achieve this, we apply facial feature detection using Haar classifiers [22]. To detect human facial features, such as the mouth, eyes and nose, Haar classifier cascades must first be trained; the AdaBoost algorithm and the Haar feature algorithm are used for this training. The Haar cascade classifier makes use of integral and rotated integral images. The integral image [23] is an intermediate representation of an image from which the simple rectangular features of an image can be calculated: it is an array containing, at location (x, y), the sum of the pixel intensity values located directly to the left of and directly above that pixel, inclusive.
Thus, on the assumption that G[x, y] is the pre-specified image and GI[x, y] is the integral image, the integral image is computed as: $$ GI\left[ {x,y} \right] = \sum\limits_{x' \le x,\, y' \le y} G\left({x',y'} \right) $$ ((1)) The rotated integral image is calculated at a 45° angle, to the left and above for the x value and below for the y value. If GR[x, y] is the rotated integral image, it is computed as: $$ GR\left[ {x,y} \right] = \sum\limits_{x' \le x,\, x' \le x - \left| {y - y'} \right|} G\left({x',y'} \right) $$ Using the appropriate integral image and taking the difference between six to eight array elements forming two or three connected rectangles, a feature of any scale can be computed. This technique can be adapted to accurately detect facial features. However, the area of the image to be analyzed has to be regionalized to the location with the highest probability of containing the feature. To regionalize the detection area, regularization [22] is applied. By regionalizing the detection area, false positives are eliminated and the detection latency is decreased due to the reduction of the region examined. Feature extraction process Once the facial regions are detected, the feature extraction process takes place. This process involves the detection of key points and iris/sclera boundaries. Figure 4 shows the flow of feature extraction. Flow of feature extraction process Feature extraction starts with the preprocessing of the input image and facial region detection. To extract the geometric-based features, the parameters of the initial evolving curve of each facial feature (e.g. eyes, eyebrows and lips) are first automatically selected.
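Equation (1) can be computed with two cumulative sums, after which any rectangular pixel sum requires only four array lookups — the property the Haar features rely on. A minimal numpy sketch (the helper names are ours):

```python
import numpy as np

def integral_image(g):
    """GI[x, y] = sum of G(x', y') over all x' <= x and y' <= y (Eq. 1)."""
    return g.cumsum(axis=0).cumsum(axis=1)

def rect_sum(gi, r0, c0, r1, c1):
    """Sum over the inclusive rectangle [r0..r1] x [c0..c1] via 4 lookups."""
    s = gi[r1, c1]
    if r0 > 0:
        s -= gi[r0 - 1, c1]
    if c0 > 0:
        s -= gi[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += gi[r0 - 1, c0 - 1]
    return s
```

A Haar feature is then the difference of two or three such `rect_sum` values, each costing constant time regardless of scale.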
The automatically selected parameters are then used as inputs to the localized active contour model [21] for proper segmentation of each facial feature. This step is followed by the landmark or key point detection process. We also apply the scale-invariant feature transform (SIFT) [24] to find the common interest points of two images (i.e. the at-rest position and eyebrow lifting). The points generated by SIFT are useful for determining the capability of patients to perform facial motions by comparing these two facial images. Region-based feature extraction involves detection of the iris/sclera boundary using Daugman's integro-differential operator [25]. All features are stored in a feature vector. Table 1 lists the asymmetry features used in this paper. Labeled parts of the facial features are shown in the subsection covering key point detection. Table 1 List of features Key points detection The detection of key points includes initialization and contour extraction phases for each facial feature used in this paper. The goal is to find the 10 key points on the edges of the facial features, as shown in Fig. 5a and b. a Labeled parts of facial features, (b) key points Overview of localized region-based active contour model (LACM) This section provides an overview of the primary framework of the LAC [21] model, which rests on the assumption that the foreground and background regions are locally different. The statistical analysis of local regions leads to the construction of a family of local energies at every point along the evolving curve. To optimize these local energies, each point is considered individually and moved to minimize (or maximize) the energy computed in its own local region. To calculate these local energies, local neighborhoods are split into a local interior and a local exterior region by the evolving curve.
In this paper, we let I be a given image on the domain Ω, and C be a closed contour represented as the zero level set of a signed distance function ϕ, i.e. C = {w | ϕ(w) = 0} [26]. The interior of C is specified by the following approximation of the smoothed Heaviside function: $$ H\phi (w) = \left\{ {\begin{array}{lc} 1, & \phi (w) < - \varepsilon\\ 0, & \phi (w) > \varepsilon\\ {\frac{1}{2}\left\{ {1 + \frac{\phi(w)}{\varepsilon} + \frac{1}{\pi }\sin \left({\frac{{\pi \phi (w)}}{\varepsilon }} \right)} \right\},} & otherwise. \end{array}} \right. $$ Similarly, the exterior of C is defined as 1 − Hϕ(w). The epsilon ε is the parameter in the definition of the smoothed Dirac function, with a default value of 1.5. The area just adjacent to the curve is specified by the derivative of Hϕ(w), a smoothed version of the Dirac delta, denoted as $$ \delta \phi (w) = \left\{ {\begin{array}{lc} 1, & {\phi (w) = 0 }\\ 0, & {\left| {\phi (w)} \right| > \varepsilon }\\ {\frac{1}{{2\varepsilon }}\left\{ {1 + \cos \left({\frac{{\pi \phi (w)}}{\varepsilon}} \right)} \right\},} & otherwise. \end{array}} \right. $$ The parameters w and x are independent spatial variables, each representing a single point. Using this notation, the characteristic function β(w,x) in terms of a radius parameter r can be written as follows: $$ \beta (w,x) = \left\{\begin{array}{cc} {1,}& \quad {\left\| {w - x} \right\| < r}\\ {0,}& \quad {otherwise.} \end{array} \right. $$ β(w,x) is then utilized to mask local regions.
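The smoothed Heaviside Hϕ and the ball-shaped characteristic function β can be written directly from the definitions above (a sketch with ε = 1.5 as in the text; the discrete pixel-grid construction is our own):

```python
import numpy as np

def smoothed_heaviside(phi, eps=1.5):
    """H(phi) = 1 inside (phi < -eps), 0 outside (phi > eps), smooth between."""
    smooth = 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 1.0, np.where(phi > eps, 0.0, smooth))

def ball_mask(shape, w, r):
    """beta(w, x): 1 where ||w - x|| < r, 0 otherwise, over a pixel grid."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (np.hypot(yy - w[0], xx - w[1]) < r).astype(float)
```

`ball_mask` is what restricts every statistic to the local neighborhood of a single curve point w.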
Therefore, a localized region-based energy, formed from the global energy by substituting local means for global ones, is [27]: $$ F = - {\left({ u_{w} - v_{w}} \right)^{2}}, $$ $$ u_{w} = \frac{{\int_{\Omega_{x}} {\beta (w,x) \cdot H\phi (x) \cdot I(x)dx} }}{{\int_{\Omega_{x}} {\beta (w,x) \cdot H\phi (x)dx} }} $$ $$ v_{w} = \frac{{\int_{\Omega_{x}} {\beta (w,x) \cdot (1 - H\phi (x)) \cdot I(x)dx} }}{{\int_{\Omega_{x}} {\beta (w,x) \cdot (1 - H\phi (x))dx} }} $$ where the localized means u_w and v_w represent the mean intensities in the interior and exterior regions of the contour, localized by β(w,x) at the point w. By ignoring image irregularities that may arise outside the local region, we only consider the contributions from points within the radius r. A regularization term is added to maintain the smoothness of the curve: the arc length of the curve is penalized, weighted by a parameter λ, and the final energy E(ϕ) is given as follows: $$ \begin{aligned} E(\phi) &= \int_{\Omega_{w}} {\delta \phi (w)} \int_{\Omega_{x}} {\beta (w,x) \cdot F(I(x),\phi (x))dx\,dw}\\&\quad + {\lambda} \int_{\Omega_{w}} {\delta \phi (w)} \left\| {\nabla \phi(w)} \right\|dw \end{aligned} $$ Taking the first variation of this energy with respect to ϕ yields the following evolution equation: $$ \begin{aligned} \frac{{\partial \phi }}{{\partial t}}(w) &= \delta \phi (w)\int_{\Omega_{x}} {\beta (w,x) \cdot \nabla_{\phi (x)}} F(I(x),\phi (x))dx\\&\quad + \lambda \delta \phi (w)div\left({\frac{{\nabla \phi (w)}}{{\left| {\nabla \phi (w)} \right|}}} \right)\left\| {\nabla \phi (w)} \right\|. \end{aligned} $$ It is worth noting that nearly every region-based segmentation energy can be put into this framework. In the localized active contour approach, analysis of the local regions paves the way for the construction of local energies at each point along the curve.
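Under these definitions, the local means u_w and v_w at a curve point w reduce to masked averages. A minimal sketch using a sharp (non-smoothed) Heaviside, i.e. interior where ϕ < 0 — a simplification of the smoothed version in the text:

```python
import numpy as np

def local_means(I, phi, w, r):
    """u_w, v_w: mean image intensity in the local interior / exterior
    of the contour, restricted to the ball beta(w, x) of radius r at w."""
    yy, xx = np.mgrid[:I.shape[0], :I.shape[1]]
    ball = np.hypot(yy - w[0], xx - w[1]) < r   # characteristic function
    interior = ball & (phi < 0)
    exterior = ball & (phi >= 0)
    u = I[interior].mean() if interior.any() else 0.0
    v = I[exterior].mean() if exterior.any() else 0.0
    return u, v
```

The pointwise energy is then F = -(u_w - v_w)**2, which is most negative where the local foreground/background contrast is largest.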
To optimize these local energies, each point is considered separately and moved to minimize the energy computed in its own local neighborhood, which is split into a local interior and a local exterior by the evolving curve. This approach generally gives satisfactory results in segmenting objects. However, such localization has an inherent trade-off: a greater sensitivity to initialization [21]. Proper parameters (e.g. a sufficient radius, the distance from the evolving curve, etc.) have to be determined before fitting the localized active contour (LAC) model for correct segmentation. To find the minimum-bounding rectangular form of the eyes and eyebrows with proper parameters, we developed our own approach, summarized as follows: We choose the region of interest (ROI) based on the facial feature detected by the Haar algorithm [22, 23]. For each ROI, we apply pre-processing for image improvement, which suppresses unwanted distortions or enhances image features essential for subsequent processes; this is achieved by applying median filter, histogram equalization and log transformation techniques. To find the threshold that maximizes the between-class variance and binarize the gray-level image, we use Otsu's algorithm [28]. To further remove noise, we apply the 8-neighborhood rule, which removes from the binary image all connected components that have fewer than P pixels (e.g. 1 % of the image area, i.e. image size × 0.01). From this result, we define a window kernel of size m × n in two forms, vertical and horizontal. A window kernel, in this context, is a binary block of m × n size, where m and n are the number of rows and columns, respectively (i.e. all entries having a pixel value of 1). For example, to find the x-axis boundary of the initial evolving curve c, the kernel k is run from the center point going to the left, then to the right, until convergence.
In each step, we calculate the sum of the element-wise product of the kernel k and the underlying pixels of the binary image. The algorithm has converged when the computed sum yields 0 (i.e. with the background set to 0, black), which means that the kernel has passed the block of the image it was scanning. Similarly, to find the y-axis boundary of c, we run the same kernel k up, then down, until convergence. The intuition is that the operation stops when the kernel clears the matching shape. Sample output is shown in Fig. 6. The size of the kernel depends on the result of the 8-neighborhood rule implementation. For automatic selection of the parameters of the initial evolving curve of the lip, we apply the algorithm in [29]. The procedure of finding the minimum-bounding rectangle: (a) eye image, (b) result of our algorithm, (c)–(d) minimum-bounding rectangular form Segmentation using Localized Active Contour Model (LACM) and feature point location Once the minimum-bounding rectangle of each region is successfully identified, this rectangle is taken as the evolving curve representing the zero level set C, as described in the key point detection subsection, which fits well in the LACM. The local neighborhoods of the points can then be split by the evolving curve into two local regions: the local interior and the local exterior. The basic idea is to let the contour deform so as to minimize a given energy functional, achieving the desired contour extraction. By computing the local energies at each point along the curve, the evolving curve deforms to minimize these local energies. The steps of facial feature contour extraction are as follows: Locate the eyes, eyebrows and lip region; Preprocess; Obtain the minimum-bounding rectangular form; Evolve with iteration; Extract the eyes, eyebrows and lip contours.
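The window-kernel boundary search described above can be sketched as follows (a simplification under our reading of the convergence rule: an all-ones kernel is slid along one axis until it overlaps no foreground pixel; the names and return convention are ours):

```python
import numpy as np

def scan_boundary(binary, center, m, n, axis=1, step=-1):
    """Slide an m-by-n all-ones kernel from `center` along `axis` in the
    direction of `step`; converge when the kernel-image product sums to 0,
    i.e. the kernel has passed the object's boundary."""
    r, c = center
    hm, hn = m // 2, n // 2
    rows, cols = binary.shape
    while hm <= r < rows - hm and hn <= c < cols - hn:
        patch = binary[r - hm: r + hm + 1, c - hn: c + hn + 1]
        if patch.sum() == 0:        # converged: no foreground under kernel
            return r, c
        if axis == 0:
            r += step
        else:
            c += step
    return r, c
```

Running it left/right with a vertical kernel gives the x-axis boundary of the initial evolving curve; up/down with a horizontal kernel gives the y-axis boundary.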
We are interested in the following key points: the two corners of the mouth, the supra orbital (SO), infra orbital (IO), inner canthus (IC) and outer canthus (OC). In this paper, we identify these key points from the segmented facial features generated by our improved LAC model. We take the segmented object (i.e. in binary form) and utilize the idea of the distance transform. With this technique, each pixel of the binary image is assigned a number that is the distance between that pixel and the nearest nonzero pixel of the binary image. For example, to get the left corner of the mouth, we calculate the distance transform of the first half of the binary image; the first coordinate with an assigned value equal to 0 is the left corner. Figure 7 shows experimental samples of the facial feature contour extraction and the key points generated by the proposed approach. The procedure for facial feature contour extraction and key points detection: (a) pre-processing result, (b)–(c) results of our method for finding the minimum-bounding rectangular form, (d) the segmented object after 100 iterations with parameter λ at 0.3, (e) detected key points, (f) more sample outputs of key points generated from the segmented region performed by a Localized Active Contour Model (LACM) Iris detection A person with paralysis on one side of the face is likely to have an asymmetric distance between the upper and lower eyelids while performing facial movements. Intuitively, they may also have uneven iris exposure when performing the voluntary movements (e.g. screwing up the nose, showing the teeth or smiling). We apply Daugman's algorithm [25] and the LACM to detect the iris boundary. From the detected face region, we need to determine the parameters of the eye region as input to Daugman's algorithm. For this, we utilize the 4 detected key points, which include the upper eyelid, IO, IC, and OC, as shown in Fig. 8.
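Because the distance-transform value is 0 exactly at foreground pixels, the left mouth corner reduces to the leftmost segmented pixel in the left half of the binary mask. A pure-numpy sketch of this shortcut (the helper name is ours):

```python
import numpy as np

def left_mouth_corner(mask):
    """First zero-distance (i.e. foreground) pixel, scanning columns
    left-to-right over the left half of the binary segmentation."""
    half = mask[:, : mask.shape[1] // 2]
    rows, cols = np.nonzero(half)       # the distance transform is 0 here
    i = cols.argmin()                   # leftmost foreground column
    return int(rows[i]), int(cols[i])
```

The right corner is obtained symmetrically from the right half, and the same idea yields the extremal points of the eye and eyebrow contours.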
Eye edges and corners Daugman's algorithm Daugman's algorithm is by far the most cited method in the iris recognition literature. It was proposed in 1993 and has been implemented effectively in a working biometric system [30]. In this method, the author assumes that both the pupil and the iris have a circular form and locates them with an integro-differential operator. After automatic parameter selection (i.e. the rectangular boundary) of each eye, we implement two pre-processing operations for image contrast enhancement. First, we use histogram equalization to improve the contrast between the eye's regions, which facilitates the segmentation task. Second, we apply binarization based on a threshold, commonly used to maximize the separability of the iris region from the rest of the eye region. This process, however, has the major drawback of being highly dependent on the chosen threshold: as image characteristics change, the results may seriously deteriorate [30]. The core of Daugman's algorithm is the application of an integro-differential operator to find the iris and pupil contours: $$ \max\left({r,{x_{o}},{y_{o}}} \right)\left| {{G_{\sigma} }\left(r \right)*{\partial \over {\partial r}} \oint\limits_{r,{x_{o}},{y_{o}}} {{I\left({x,y} \right)} \over {2\pi r}}ds} \right| $$ where x0, y0 and r are the centre and radius of the circle (for each of the pupil and iris); G_σ(r) is a Gaussian smoothing function; ∂r is the radius range; and I(x, y) is the original iris image. The smoothed image is scanned for the circle with the maximum gradient change, which indicates an edge. The algorithm is run twice: first for the iris contour extraction, then for the pupil contour extraction. A known problem is that the illumination spot inside the pupil is a near-perfect circle with a very high intensity level, i.e. almost pure white.
Therefore, the problem of the operator locking onto the illumination spot as the maximum-gradient circle has to be addressed. Optimized Daugman's algorithm To alleviate this problem, the integro-differential operator is modified to ignore any circle on which some pixel has a value higher than a certain threshold. We apply the method proposed by P. Verma et al. [30], where this threshold is set to 200 for the grayscale image. This ensures that only the bright spots, i.e. values usually higher than 245, are rejected. On the other hand, the iris is normally occluded by the eyelids and eyelashes. For eyelid and eyelash detection, P. Verma et al. [30] applied Sobel edge detection to the search regions, detecting the eyelids using the linear Hough transform; the total number of edge points in every horizontal row inside the search region is calculated. In this paper, since we are only interested in the iris/sclera boundary, we simply apply the LAC model with our automatic parameter selection of the initial evolving curve to segment the eyes and obtain the portion of the eyelids. We then take the intersection of this with the iris/sclera boundary detected using Daugman's algorithm [25]. In what follows, we describe our approach for detecting the iris: Detect the iris/sclera boundary (Fig. 9a and b). Extracted iris using optimized Daugman's algorithm: (a)–(b) result of Daugman's integro-differential operator iris detection. c–d result of optimized Daugman's algorithm Binarize the result; we denote this as vector A. Take a fraction of the radius of the detected iris to create a rectangular bound, making it the initial evolving contour, and fit it to the LAC model. Segment the iris using the LAC model. Binarize the result of the active contour segmentation; we call this vector B. Find the intersection of the two vectors A and B to obtain the values common to both.
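A coarse sketch of the optimized operator for a fixed candidate centre: the circular mean intensity is computed over a range of radii, circles touching any pixel brighter than the threshold (200, following [30]) are rejected, and the radius with the largest radial change is returned. The sampling and smoothing details here are simplified assumptions, not the original implementation:

```python
import numpy as np

def circle_samples(img, x0, y0, r, n=360):
    """Nearest-pixel samples of img along the circle (x0, y0, r)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def daugman_radius(img, x0, y0, radii, bright_thresh=200):
    """Radius with the maximum radial change of the circular mean,
    skipping any circle that touches a pixel above bright_thresh."""
    means = []
    for r in radii:
        vals = circle_samples(img, x0, y0, r)
        means.append(np.nan if vals.max() > bright_thresh else vals.mean())
    grad = np.abs(np.diff(np.array(means)))
    grad = np.where(np.isnan(grad), -np.inf, grad)  # rejected circles lose
    return int(radii[int(np.argmax(grad))])
```

In the full operator the same search also runs over candidate centres (x0, y0), and the radial profile is smoothed by G_σ(r) before differentiation.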
In set-theoretic terms, we represent this as A ∩ B, which returns the values common to both A and B. This is followed by applying morphological operations: erosion followed by dilation. Note that in the iris segmentation in step 4, we tune the parameters λ = 0.1, r = 12, and iterations = 100, where λ is the relative weighting of curve smoothness, usually in [0, 1], r is the radius of the ball used for localization and iterations is the number of iterations to run. Figures 9 and 10 depict experimental samples of the iris segmentation by our optimized Daugman's algorithm. More sample results of optimized Daugman's algorithm. a Peripheral palsy subjects (b) central palsy subjects Facial paralysis classification and grading Symmetry measurement by iris and key points In this study, we measure the symmetry of the two sides of the face using the ratios obtained from the extracted iris exposure and the vertical distances between the key points on each side, and store them in a feature vector. We capture a set of five facial images of each patient performing the 'at rest' position and the voluntary movements: raising the eyebrows (while looking upward), closing the eyes, screwing up the nose and showing the teeth or smiling. We then calculate the area of the extracted iris as well as the distances between the key points P1P2, P5P6, P2P9 and P6P10 (see Fig. 5b) and compute the ratio between the two sides: irisA = A_l/A_r if A_l < A_r; irisA = A_r/A_l if A_r < A_l; dRatio = Dist_l/Dist_r if Dist_l < Dist_r; dRatio = Dist_r/Dist_l if Dist_r < Dist_l; where A_l and A_r are the computed areas (amount of iris exposure) on the left and right side, respectively; Dist_l and Dist_r are the computed distances between the specified points on each half of the face; and irisA and dRatio are the ratios of the iris areas and vertical distances, respectively. Another important feature for symmetry measurement is the capability of the patients to raise the eyebrows (i.e.
rate of movement feature in Table 1), assessed by comparing the two facial images, the 'at rest' position and 'raising of eyebrows', as shown in Fig. 11. First, we pick one of the common points generated by the SIFT algorithm located below the eyes, denoted PSIFT. Then, we compute the vertical distances x1 and y1 (Fig. 11a), where x1 and y1 are the distances from P1 to PSIFT and from P1 to P2 of the right eye, respectively, and take the ratio of y1 to x1. Similarly, for the second image (Fig. 11b), we calculate x2 and y2 as well as their ratio. We take the difference of these two ratios (i.e. between y1/x1 and y2/x2) and denote it rMovement. The same procedure is applied to the two images to find the ratio difference for the left eye (i.e. between y3/x3 and y4/x4), denoted lMovement. Intuitively, for FP patients this difference is likely to have a smaller value (usually approaching 0, which implies an inability to perform the movement) than for normal subjects. Thus, the rate of movement can be computed as the ratio between rMovement and lMovement. The higher the value of this ratio, the higher the possibility that the patient is able to raise the eyebrows, signifying the ability to perform the activity. Symmetry measurement based on one of the common points (PSIFT) generated by the scale-invariant feature transform (SIFT) Facial palsy classification Classification of facial paralysis type involves two tasks: discriminating normal from abnormal subjects, and the proper facial palsy classification. In this context, two classifiers need to be trained: one for healthy/unhealthy discrimination (0-healthy, 1-unhealthy) and another for facial palsy type classification, i.e. 0-peripheral palsy (PP), 1-central palsy (CP).
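The symmetry ratios and the eyebrow rate-of-movement feature described above can be sketched as follows (the function names are ours; the min/max form is our assumption, chosen so that each ratio is 1.0 for perfect symmetry and approaches 0 for strong asymmetry, matching the intuition in the text):

```python
def symmetric_ratio(left, right):
    """Smaller-over-larger measurement ratio: 1.0 means perfect symmetry."""
    if left == right:
        return 1.0
    return min(left, right) / max(left, right)

def rate_of_movement(x1, y1, x2, y2, x3, y3, x4, y4):
    """Compare eyebrow displacement between the two sides of the face.
    rMovement = |y1/x1 - y2/x2| (right eye, rest vs. raised),
    lMovement = |y3/x3 - y4/x4| (left eye, rest vs. raised)."""
    r_move = abs(y1 / x1 - y2 / x2)
    l_move = abs(y3 / x3 - y4 / x4)
    if max(r_move, l_move) == 0:
        return 1.0                      # neither side moved
    return min(r_move, l_move) / max(r_move, l_move)
```

For example, `symmetric_ratio(A_l, A_r)` gives irisA and `symmetric_ratio(Dist_l, Dist_r)` gives dRatio for each key-point pair.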
For each classifier, we consider regularized logistic regression (RLR), support vector machine (SVM), decision tree (DT), and naïve Bayes (NB) as appropriate classification methods, as they have been used successfully for pattern recognition and classification on datasets of realistic size. In addition to these methods, we also consider a hybrid classifier (i.e. rule-based + RLR) for carrying out the classification task. We model the mapping of the symmetry features (i.e. f1, f2, …, f10, as described earlier) for each of these tasks as a binomial classification problem. Although machine learning (ML) approaches have been shown to yield quite accurate classification results, our objective is first to find a classifier with high precision given a training set that is not very large. Also, given the intuition that normal subjects are likely to have an average measurement ratio close to 1.0 and that central palsy patients are likely to have an SO-to-IO distance ratio and iris exposure ratio close to 1, applying a rule-based approach before employing an ML method is appropriate in our work. Hence, a hybrid classifier that combines a rule-based expert system and machine learning is applied to both tasks. This process is presented in Fig. 12. If rule number 1 is satisfied, the algorithm continues along the case path (i.e. to the second task), testing whether rule number 2 is also satisfied; otherwise, it performs a machine learning task, such as RLR, SVM, or NB. It is worth noting that the rules are generated by fitting the training set to the DT model. For example, rule 1 may have conditions like: if f10 < 0.95 and f8 < 0.95 (where f10 and f8 are two of the predictors used - see Table 1), then the subject is most likely to have facial paralysis and can therefore proceed to rule no. 2 (i.e. to predict the FP type); otherwise, a machine learning task is performed.
If the classifier returns 0, the algorithm simply exits the whole process (i.e. the control path depicted in the figure), as this implies that the subject is classified as normal/healthy; otherwise, it follows the case path (i.e. the second task, facial palsy classification) to test rule number 2. Hybrid classifier If rule number 2 is met, the system returns 1 (i.e. 0-PP; 1-CP); otherwise the feature set is fed to another classifier, which returns either 0 or 1. Like rule number 1, rule number 2 is generated by the DT model. For example, rule 2 may have conditions like: if f1 > 0.95 and f4 > 0.95, then the subject is most likely to have central palsy (CP); otherwise, the feature set is fed to another classifier, which returns either 0 or 1 (i.e. 0-PP; 1-CP). Whichever type of facial paralysis the system returns, the algorithm continues by feeding the features forward to assess the degree of paralysis in each region (a detailed explanation is given in the next subsection). Quantitative assessment To assess the degree of paralysis in each region, we utilize the regional H-B scale, which ranges from grade I (normal) to grade VI (total paralysis). We model the mapping of the predictors to a regional H-B grade as a multi-class classification problem. Three hybrid classifiers are trained for region grading: one for the mouth region, one for the forehead region, and another for the eye region. Finally, another hybrid classifier is employed for the overall FP grading. For region grading, the forehead region utilizes f1, f2, f3, f4 and f5; the mouth uses f7, f8 and f9; and the eye region uses f1 and f6. The next section presents the detailed steps for rule extraction, which is necessary for formulating our hybrid model and enables us to obtain the grade of each region. Finally, to determine the degree of severity, or overall grade, we utilize the H-B score [8].
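The two-stage hybrid flow (DT-derived rules first, ML fallback) can be sketched as below. The rule thresholds mirror the examples given in the text (f10 < 0.95 and f8 < 0.95 for paralysis; f1 > 0.95 and f4 > 0.95 for central palsy); the function signature and the `ml_predict` callback are our own illustration, not the paper's API:

```python
def hybrid_classify(f, ml_predict):
    """Stage 1: healthy (0) vs. palsy (1); stage 2: PP (0) vs. CP (1).
    `f` is the 10-element feature vector f1..f10 (0-indexed here);
    `ml_predict(task, f)` is the fallback ML classifier (e.g. RLR)."""
    # Rule 1 (from the DT model): strong asymmetry implies paralysis.
    is_palsy = 1 if (f[9] < 0.95 and f[7] < 0.95) else ml_predict("screen", f)
    if is_palsy == 0:
        return 0, None                  # healthy: exit the whole process
    # Rule 2: near-symmetric forehead/eye features imply central palsy.
    palsy_type = 1 if (f[0] > 0.95 and f[3] > 0.95) else ml_predict("type", f)
    return 1, palsy_type
```

In the full system the same pattern is reused with multi-class rules and MNLR fallback for the regional and overall H-B grades.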
We test each of the region grades (e.g. if mouthGrade = 2, foreheadGrade = 2 and eyeGrade = 2, then according to the H-B scale [8] the overall grade is 3 - moderate). If the conditions are not satisfied, the feature set is fitted to a multinomial logistic regression (MNLR) to obtain the overall grade. In our experiments, 325 facial images were taken from 65 subjects: 50 patients and 15 healthy subjects. The 50 patients consist of 29 males and 21 females, whereas the healthy subjects comprise 5 males and 10 females. Subjects come from different races, including Koreans and Filipinos, with ages ranging from 19 to 82. Of the 50 unhealthy subjects, 40 have peripheral palsy (PP) and 10 have central palsy (CP). We used 70 % of the dataset as the training set and 30 % as the test set. For example, in discriminating healthy from unhealthy subjects, we used 45 subjects (i.e. 35 patients plus 10 normal subjects) as the training set and 20 subjects (i.e. 15 patients plus 5 normal) as the test set, while in the FP type classification problem we used 35 unhealthy cases (i.e. 28 PP and 7 CP) as the training set and 15 (i.e. 12 PP and 3 CP) as the test set. Subjects with facial palsy symptoms, such as Bell's palsy, left parotid tumor and Ramsay-Hunt syndrome, were recruited from Korea University, Guro Hospital. This study was approved by the Institutional Review Board (IRB) of Korea University, Guro Hospital (reference number MD14041-002). The board waived informed consent due to the retrospective design of this study. Each subject performs 5 facial movements, captured at 2048 × 1152 resolution and converted to 960 × 720 pixels during image processing. The facial palsy type and the overall H-B grade were evaluated by clinicians. We calculate the area of the extracted iris and the vertical distances between the key points P1P2, P5P6, P2P9 and P6P10. Overall, we utilize 10 features to train the classifiers. A few sample results are presented in Table 2.
The samples are categorized into three parts: peripheral palsy (rows 1–4); central palsy (rows 5–8); and healthy (rows 9–12) cases. Notice that healthy subjects have very minimal asymmetry between the two sides of the face, yielding a ratio that approaches 1. Table 2 Some results after extracting features

Facial palsy classification and quantitative assessment of overall paralysis

SVM, regularized logistic regression (RLR), naïve Bayes (NB) and classification tree (DT) were also utilized for comparison with our hybrid classifiers. Since the dataset was not large, we adopted the k-fold cross-validation test scheme. The procedure involves 2 phases: rule extraction and hybrid model formation.

Phase 1: rule extraction

Given the dataset D = {(x1,y1),…,(xn,yn)}, we hold out 30 % of D and use it as a test set T = {(x1,y1),…,(xt,yt)}, leaving 70 % as our new dataset D'. We adopt a k-fold cross-validation test scheme over the new dataset D', with k = 9. For example, if N = 45 samples, each fold has 5 samples. In each round, we leave one fold out as our validation set and use the remaining 8 folds as our training set (e.g. in the first round fold 1 is the validation set, in the second round fold 2 is the validation set, and so on). In each round, we train on the 8 folds to learn a model (i.e. extract rules). Since we have 9 folds, we repeat this procedure 9 times. We extract rules by fitting the training set to a DT model.

Phase 2: hybrid model formation

In this phase, a hybrid model is formed by combining the rules extracted in each fold with the ML classifier, followed by testing each model on the validation set using different parameters (e.g. lambda for logistic regression and gamma for SVM). For example, to form the first hybrid model, we combine the rule extracted from the first fold and a logistic regression model (i.e.
rule + LR) and test its performance on the validation set (the left-out fold) for each of the 10 parameter values. Thus, for each fold, we obtain 10 performance measures. We repeat this process for the succeeding folds; performing these steps k times (with k = 9, as we are using 9-fold cross-validation) gives us 90 performance measures. We compute the average performance across the folds. This gives us 10 average performance measures (one per parameter value), each corresponding to one specific hybrid model. We then choose the best hybrid model (rule-based + regularized logistic regression), i.e. the lambda that gives the maximum average performance, or equivalently the one that minimizes errors. We retrain the selected best hybrid model on all of D' and test it on the hidden test set T = {(x1,y1),…,(xt,yt)}, i.e. 30 % of the dataset D, and report the performance of the hybrid model. We evaluate the classifiers by their average performance over 20 repetitions of k-fold cross-validation with k = 9. We repeat the process to evaluate the other hybrid models (e.g. rule-based + SVM, rule-based + NB, etc.) and finally choose the hybrid model that performs best. The hybrid classifiers, SVM, RLR, DT and NB were tested, and our experiments demonstrate that the hybrid classifier rule-based + RLR (hDT_RLR) achieves the best performance for discriminating healthy from unhealthy (i.e. with paralysis) subjects. Similarly, for palsy type classification into central palsy (CP) and peripheral palsy (PP), the hDT_RLR hybrid classifier outperformed the other classifiers used in the experiments. Figure 13 presents a graphical comparison of the average performance of our hybrid classifier, RLR, SVM, DT and NB based on our proposed approach. For healthy and unhealthy discrimination (Fig. 13a), our hDT_RLR hybrid classifier achieves a harmonic mean almost 2.0 % higher than RLR, SVM, DT and NB.
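The two-phase selection procedure described above — k folds × 10 parameter values, averaging per parameter and keeping the argmax — can be sketched generically as below. Here `fit` and `score` are placeholders for any of the rule + classifier combinations (e.g. rule + RLR with lambda), and the round-robin fold partitioning is an assumed detail not specified in the text.

```python
import random
from statistics import mean

def kfold(n, k=9, seed=0):
    """Partition indices 0..n-1 into k folds (round-robin after shuffling)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def select_best_param(X, y, params, fit, score, k=9):
    """For each parameter: train on k-1 folds, score on the held-out fold,
    average across folds; return the parameter with the best average."""
    folds = kfold(len(X), k)
    averages = {}
    for p in params:
        fold_scores = []
        for i, val in enumerate(folds):
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            model = fit([X[j] for j in train], [y[j] for j in train], p)
            fold_scores.append(score(model, [X[j] for j in val], [y[j] for j in val]))
        averages[p] = mean(fold_scores)
    return max(averages, key=averages.get), averages
```

With 10 parameter values and k = 9 this performs 90 fits, matching the 90 performance measures described above; the winning configuration is then refit on all of D' and evaluated once on the held-out test set T.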
Similarly, for facial palsy classification, hDT_RLR is at least 4.2 % higher than the other classification methods, as shown in Fig. 13b. Experiments reveal that the hDT_RLR hybrid classifier yields more stable results. Comparison of the performance of RLR, SVM, DT, NB and the hybrid classifier hDT_RLR: (a) healthy and unhealthy classification; (b) CP and PP type classification. For the quantitative assessment of regional paralysis and the overall FP grade, we consider multinomial logistic regression (MNLR), SVM, decision tree (DT), naïve Bayes (NB) and hybrid classifiers (e.g. rule-based + MNLR) as appropriate classification methods. The hybrid classifier hMNLR achieves the best performance for facial palsy grading. Figure 14 presents a graphical comparison of the average performance of hMNLR, RLR, SVM, DT and NB based on the combined iris and key-point-based approach. The accuracy of grading by hMNLR is at least 4 % higher than the average performance of the other classification methods. Comparison of the performance of RLR, SVM, DT, NB and the hybrid classifier for region grading and overall H-B grade. Tables 3 and 4 present the comparison of classifier performance between the key-point-based method using the improved LAC model and the proposed approach combining iris and key-point-based features. Overall, our proposed method outperforms the key-point-based features in harmonic mean by at least 7.6 %, as well as in sensitivity and specificity, with improvements of 5.6–9.6 % for discriminating healthy from unhealthy subjects. Similarly, experiments show that our proposed approach yields better performance for classifying central and peripheral palsy, particularly in sensitivity and specificity, with improvements of 5.7–6.2 %.
Table 3 Comparison of the performance of the two methods for healthy and unhealthy discrimination Table 4 Comparison of the performance of the two methods for facial palsy classification. More detailed comparisons of the performance of each of the classification methods, including the hybrid classifiers, are presented in Tables 5 and 6. Table 5 Comparison of the performance of different classifiers for facial palsy classification Table 6 Comparison of the performance of different classifiers for healthy and unhealthy discrimination. The hybrid classifiers reveal a significant performance improvement over using the classification methods individually. Although other hybrid classifiers also yield good results, as does hDT_RLR, we need a hybrid classifier that provides stable sensitivity, as sensitivity is the more important measure in our study, though of course without sacrificing specificity; we therefore chose hDT_RLR. A comparison of the performance of the proposed method based on iris-key-point-based feature extraction and the key-point-based approach for facial palsy grading is presented in Table 7. The proposed approach outperforms the key-point-based approach, most particularly in the forehead and eye regions, with improvements of 6.0 % and 5.2 % respectively. Most importantly, an accuracy of almost 94 % for the overall H-B grading is achieved by our proposed iris-key-point-based approach. Table 7 Comparison of the performance of the proposed method based on iris segmentation and the LAC model and the key-point-based detection using LAC. The results demonstrate that performance is best in the forehead and mouth regions. Although the eye region has the lowest accuracy among the regions, our proposed approach nonetheless reveals a very significant improvement over the key-point-based approach.
The approach based on the combined iris and LAC-based key point detection yields better performance than the solely key-point-based approach, most particularly in the forehead and eye regions, as it involves not only the key-point-based features but also the iris behavior. Empirical studies have found that the active contour approach has a very appealing quality: it generates closed contours, which are more useful than Canny and SUSAN for separating the outer boundaries of an object from the background [29, 31]. Therefore, it is presumed that the localizing active contour (LAC) yields results superior to the standard edge-detection tools used in previous methods for detecting the landmark points of the human face. Indeed, our experiments show that LAC-based key point detection works well. However, combining key-point-based detection and iris segmentation for comprehensive facial paralysis assessment has yet more to offer, as it outperformed the method that solely uses key points for feature extraction. Additionally, the eye region is full of wrinkles, especially in facial images of elderly people. The edges and eyelid ridges vary irregularly, which sometimes deceives the system into generating asymmetrical results even for normal subjects. Features based solely on key points generated by standard edge-detection tools are not robust enough to model these subtle characteristics of the eye surroundings. Furthermore, our proposed approach of combining iris segmentation and LAC-based key point detection for feature extraction provides better discrimination of central and peripheral palsy, most especially in the 'raising of eyebrows' and 'screwing of nose' movements. It captures changes in the structure of the eye edges, i.e. the significant difference between the normal side and the palsy side for some facial movements (e.g. eyebrow lifting, nose screwing, and showing of teeth).
Also, features based on the combination of iris and key points generated by LAC can model the typical changes in the eye region. A closer look at the performance in the eye region, as shown in Tables 2, 3, 4 and 7, reveals interesting statistics regarding the specific abilities of the two methods. Our method makes a significant contribution to discriminating central from peripheral palsy patients and healthy from facial palsy subjects. The combination of iris segmentation and the LAC-based key point approach is well suited to this task. The system 'fpQAS' is implemented in MATLAB. The executable file is available for download on our website: http://infos.korea.ac.kr/fpQAS/. In this paper, we present a novel approach to quantitatively classify and assess facial paralysis in facial images. Iris segmentation and LAC-based key point detection are employed to extract the key features. The symmetry of facial images is measured by the ratio of the iris areas and the vertical distances between key points on the two sides of the face. Our hybrid classifier provides an efficient quantitative assessment of facial paralysis. One limitation of the proposed method is its greater sensitivity to facial images with significant natural bilateral asymmetry. However, our iris segmentation and key-point-based method has several merits that are essential for our real application. Specifically, the iris features describe the changes in iris exposure while performing certain facial expressions. They reveal the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region. Furthermore, iris-based features combined with key-point-based features are insensitive to illumination, as our proposed method exploits the key advantages of both the localized active contour and the most cited algorithm for iris detection, Daugman's integro-differential operator.
Kanerva M. Peripheral facial palsy grading, etiology, and Melkersson–Rosenthal syndrome. PhD thesis, University of Helsinki, Finland, Department of Otorhinolaryngology. 2008. May M. Anatomy for the clinician. In: May M, Schaitkin B, editors. The Facial Nerve, May's 2nd edn. New York: Thieme; 2000. p. 19–56. Peitersen E. Bell's palsy: the spontaneous course of 2,500 peripheral facial nerve palsies of different etiologies. Acta Otolaryngol. 2002; 122(7):4–30. Murty G, Diver JP, Kelly PJ. The Nottingham system: objective assessment of facial nerve function in the clinic. Otolaryngol Head Neck Surg. 1994; 110:156–61. Fields M. Facial nerve function index: a clinical measurement of facial nerve activity in patients with facial nerve palsies. Oral Med Oral Pathol Oral Surg. 1990; 69:681–2. Ross BG, Fradet G, Nedzelski JM. Development of a sensitive clinical facial grading system. Otolaryngol Head Neck Surg. 1996; 114:380–6. Burres S, Fisch U. The comparison of facial grading systems. Arch Otolaryngol Head Neck Surg. 1986; 112:753–8. House JW, Brackmann DE. Facial nerve grading system. Otolaryngol Head Neck Surg. 1985; 93:146–7. Neely J, Cherian N, Dickerson C, Nedzelski J. Sunnybrook facial grading system: reliability and criteria for grading. Laryngoscope. 2010; 120:1038–45. Vrabec J, Backous D, Djalilian H, Gidley P, Leonetti J, Marzo S, et al. Facial nerve grading system 2.0. Otolaryngol Head Neck Surg. 2009; 140(4):445–50. Rickenmann J, Jaquenod C, Cerenko D, Fisch U. Comparative value of facial nerve grading systems. Otolaryngol Head Neck Surg. 1997; 117(4):322–5. Johnson PC, Brown H, Kuzon WM, Balliet R, Garrison JL, Campbell J. Simultaneous quantification of facial movements: the maximal response array of facial nerve function. Ann Plast Surg. 1994; 32:171–9. Samsudin W, Sundaraj K. Image processing on facial paralysis for facial rehabilitation system: a review.
In: IEEE International Conference on Control System, Computing and Engineering. Malaysia: IEEE; 2012. p. 259–63. Wachtman GS, Liu Y, Zhao T, Cohn J, Schmidt K, Henkelmann TC, et al. Measurement of asymmetry in persons with facial paralysis. In: Combined Annu. Conf. Robert H. Ivy Soc. Plastic Reconstr. Surg., Ohio Valley; 2002. Liu Y, Schmidt K, Cohn J, Mitra S. Facial asymmetry quantification for expression invariant human identification. Comput Vis Image Understanding J. 2003; 91:138–59. He S, Soraghan J, O'Reilly B, Xing D. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos. IEEE Trans Biomed Eng. 2009; 56:1864–70. Wang S, Li H, Qi F, Zhao Y. Objective facial paralysis grading based on pface and eigenflow. Med Biol Eng Comput. 2004; 42:598–603. Anguraj K, Padma S. Analysis of facial paralysis disease using image processing technique. Int J Comput Appl. 2012; 54(0975–8887):1–4. Dong J, Ma L, Li W, Wang S, Liu L, Lin Y, et al. An approach for quantitative evaluation of the degree of facial paralysis based on salient point detection. In: International Symposium on Intelligent Information Technology Application Workshops. IEEE; 2008. Liu L, Cheng G, Dong J, Qu H. Evaluation of facial paralysis degree based on regions. In: Proceedings of the 2010 Third International Conference on Knowledge Discovery and Data Mining. Washington: IEEE Computer Society; 2010. p. 514–7. Lankton S, Tannenbaum A. Localizing region-based active contours. IEEE Trans Image Process. 2008; 17:2029–39. Wilson P, Fernandez J. Facial feature detection using Haar classifiers. Journal of Computing Sciences in Colleges. 2006; 21:127–33. Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. USA: IEEE Computer Society; 2001. Lowe DG. Object recognition from local scale-invariant features.
In: Proceedings of the International Conference on Computer Vision; 1999. p. 1150–7. Daugman J. How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology. 2004; 14:21–30. Tsai R, Osher S. Level set methods and their applications in image science. Commun Math Sci. 2003; 1:623–56. Yezzi A, Tsai A, Willsky A. A fully global approach to image segmentation via coupled curve evolution equations. J Vis Commun Image Represent. 2002; 13:195–216. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979; 9:62–6. Liu X, Cheung Y, Li M, Liu H. A lip contour extraction method using localized active contour model with automatic parameter selection. In: International Conference on Pattern Recognition. Istanbul: IEEE; 2010. Verma P, Dubey M, Verma P, Basu S. Daughman's algorithm method for iris recognition — a biometric approach. Int J Emerg Technol Adv Eng. 2012; 2:177–85. Weeratunga SK, Kamath C. An investigation of implicit active contours for scientific image segmentation. In: Visual Communications and Image Processing Conference, IS&T/SPIE Symposium on Electronic Imaging. San Jose, CA; 2004. This work was supported by National Research Foundation of Korea (NRF) grants (Nos. 2014R1A2A1A10051238 and 2014M3C9A3063543) and a research scholarship granted by the National Institute for International Education (NIIED), Ministry of Education, South Korea.
Department of Computer Science and Engineering, Korea University, Seoul, South Korea Jocelyn Barbosa, Kyubum Lee, Sunwon Lee, Bilal Lodhi & Jaewoo Kang Department of Information Technology, Mindanao University of Science and Technology, Cagayan de Oro (on-study leave), Philippines Jocelyn Barbosa Department of Neurology, College of Medicine, Korea University Guro Hospital, Seoul, South Korea Woo-Keun Seo Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Guro Hospital, Seoul, South Korea Jae-Gu Cho Kyubum Lee Sunwon Lee Bilal Lodhi Jaewoo Kang Correspondence to Woo-Keun Seo or Jaewoo Kang. Conceived and designed the methodology and experiments: JB, JK, WS, KL, SL. Collected images and performed manual evaluation: WS, JC. Performed the experiments: JB, BL. Analyzed the data: JB, JK, WS. All authors wrote the manuscript. All authors read and approved the final manuscript. Barbosa, J., Lee, K., Lee, S. et al. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier. BMC Med Imaging 16, 23 (2016). https://doi.org/10.1186/s12880-016-0117-0 Facial image analysis Facial paralysis measurement Iris segmentation Key point detection Localized active contour
Cambridge Archaeological Journal

The Earliest Balance Weights in the West: Towards an Independent Metrology for Bronze Age Europe

Published online by Cambridge University Press: 07 September 2018

Nicola Ialongo, Georg-August-Universität Göttingen, Seminar für Ur- und Frühgeschichte, Nikolausberger Weg 15, D-37073 Göttingen, Germany. Email: [email protected]

Weighing devices are the earliest material correlates of the rational quantification of economic value, and they yield great potential in the study of trade in pre-literate societies. However, the knowledge of European Bronze Age metrology is still underdeveloped in comparison to Eastern Mediterranean regions, mostly due to the lack of a proper scientific debate. This paper introduces a theoretical and methodological framework for the study of standard weight-systems in pre-literate societies, and tests it on a large sample of potential balance weights distributed between Southern Italy and Central Europe during the Bronze Age (second–early first millennium bc). A set of experimental expectations is defined on the basis of comparisons with ancient texts, archaeological cases and modern behaviour. Concurrent typological, use-wear, statistical and contextual analyses allow the evidence to be cross-checked against the expectations, and validate the balance-weight hypothesis for the sample under analysis. The paper urges a reappraisal of an independent weight metrology for Bronze Age Europe, based on adequate methodologies and a critical perspective.
Cambridge Archaeological Journal, Volume 29, Issue 1, February 2019, pp. 103-124. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. Copyright © McDonald Institute for Archaeological Research 2018. The spread of weighing devices in pre-literate Bronze Age Europe (Fig. 1) is generally viewed as the technological correlate of a cognitive shift towards the rational quantification of economic value (Pare 2013; Peroni 2006; Rahmstorf 2010; Renfrew 2008). Whereas the origin of weight-systems is intimately correlated to the need of calculating incomes and expenditures, negotiating purchase-prices and assessing profit (Powell 1977; 1996), the very existence of a weight-based exchange presupposes 'some generally accepted index of value together with a certain amount of haggling over price' (Powell 1979, 89), regardless of whether it is based on currency or barter. Figure 1. Distribution of weighing equipment in Bronze Age Europe. A: rectangular and lenticular weights. The numbers indicate the provenance of the unpublished material, previously unknown in archaeological literature. (1) Nuraghe Sant'Imbenia; (2) Nuraghe Palmavera; (3) Sa Tanca 'e sa Idda; (4) Nuraghe Sa Mandra Manna; (5) Nuraghe Santu Antine; (6) Nuraghe Talei; (7) Serra Orrios; (8) Monte Croce-Guardia; (9) Oratino; (10) Coppa Nevigata; (11) Aeolian Islands. B: other published weights and balance beams.
(1) Potterne (Lawson 2000, 40); (2) Cliffs End Farm (Schuster 2014); (3) Fort Harrouard (Mohen & Bailloud 1987, pl. 85.8); (4) Marolles-sur-Seine (2 exemplars: Mordant & Mordant 1970, fig. 31.16; Pare 1999, fig. 22.1); (5) Migennes (2 exemplars: Roscio et al. 2011, figs. 2.35, 5.13); (6) Monéteau (Joly 1965, fig. 21); (7) Agris, Grotte de Perrats (Peake et al. 1999, fig. 1.2); (8) Vilhonneur, Grotte de la Cave Chaude (Peake et al. 1999, fig. 1.3); (9) Bordjoš (Medović 1995, fig. 4). C: bone and antler balance beams; the numbers correspond to the sites on the map. However, weighing equipment and weight systems are still poorly understood in the framework of Bronze Age Europe, outside of Greece. Despite the widespread distribution of balance beams and weights, only a few specialist studies have been published so far on the subject, and a European metrology has yet to be acknowledged as a proper research field. The main obstacle is the lack of focus on methodologies that allow us to quantify confidence in the positive identification of potential balance weights in pre-literate societies. Starting from the most relevant literature in the field, this article outlines an extensive theoretical and methodological framework—based on testable hypotheses, reproducible experiments and clear expectations for experimental results—and tests it on a large sample collected from a vast territory, spanning southern Italy, central Europe and the Atlantic façade.
The research is grounded on the following assumption: the rise of a global network—seamlessly connecting Europe, the Mediterranean and the Near East in a constant flow of ideas, people and commodities (Harding 2013; Vandkilde 2016)—implies the existence of widely shared means of quantifying, negotiating and communicating economic value. Since eastern Mediterranean trade was largely based on weight-based quantification, and part of pre-literate Europe was actively engaged in long-distance trade with the east, balance weights must have been used systematically in Bronze Age Europe as well, and especially in those regions where contacts with eastern civilizations are most frequent. The actual distribution of weighing equipment shows that balance weights are well attested in central Europe and northern Italy—with minor concentrations in Portugal, Sardinia and eastern Europe (Fig. 1B)—but almost absent from southern Italy, where direct contacts with Aegean and eastern traders are best attested. In order to make a testable case for the base assumption, the research aimed at assessing whether balance weights are systematically attested in alleged trade hubs in southern Italy. This article focuses on unpublished materials from the Aeolian Islands (Sicily). A European weight system is defined, largely independent of other Mediterranean units. The study leads to the identification of three types of potential balance weights, discusses their typological and metrological affinities with similar objects distributed in Italy, central Europe and the Mediterranean, addresses the relationship between European and Mediterranean weight systems and analyses their contextual associations. Being concentrated in levels dating to the first half of the second millennium bc, the Aeolian finds represent the earliest balance weights documented so far in Europe, west of Greece.
The broad culture-historical focus of this study roughly corresponds to the modern definition of Europe, with the exception of Greece. The absence of writing and centralized institutions sets the historical development of Bronze Age [hereafter BA] Europe slightly apart from the Aegean, as interpretive syntheses often remark (e.g. Fokkens & Harding 2013; Harding 2000, 4). This is even truer in the metrological field, where the lack of economic texts and inscribed weights urges a rethink of some aspects of the methodological framework. The confident identification of balance weights is the main obstacle to an independent metrology for pre-literate BA Europe. Unfortunately, balance weights are generally rather unremarkable objects, and thus they are seldom taken into account in prehistoric research (e.g. Kulakoğlu 2017; Michailidou 2006; Petruso 1992; Pulak 1996; Rahmstorf 2006b). This may potentially have led to a large amount of evidence being ignored, misinterpreted as working tools of some kind, or even discarded during excavations. Nevertheless, balance weights do possess recurrent shapes, and the construction of a body of knowledge for pre-literate BA Europe must start from a systematic typological appraisal. So far, specialist studies on pre-literate Europe have focused on specific regions, and typological connections between different areas have not been explored. This study is concerned, first, with the identification of widely spread formal types of artefacts that do not present clearly functional features—such as sharp edges, points, sockets, or tangs—but still present a recurrent, albeit simple, shape.
This criterion, for example, allows the discarding of many types of hammers, axes, or chisels, and also of very simple tools, such as polishers and scrapers, that are often obtained from natural pebbles. There is indeed some ground to claim that, in some cases, natural pebbles were used as balance weights (Medović 1995; Rahmstorf 2014a), but their identification is too problematic and will not be addressed here. The typological selection provides a first appraisal of potential balance weights, to be tested through further criteria. Use wear represents a problematic aspect of the identification process (Rahmstorf 2010). Balance weights are tools, and since they were frequently manipulated one can expect, for instance, polishing from frequent use and accidental damage in the form of scratches and chipping. Technological traces deriving from the manufacture of the object itself must also be expected: this aspect is seldom taken into account in use-wear studies on polished-stone tools from the Bronze Age (e.g. Delgado Raack & Risch 2008; Iaia 2014), while it is a focal point of research for earlier periods (e.g. Breglia et al. 2016; Yerkes et al. 2012). This means, in turn, that several objects that are usually interpreted as 'hard' tools might actually be subject to different interpretations. Use wear from secondary use is often documented as well (e.g. Rahmstorf 2006a). Residues of external substances are also expected.
At least one of the different modes of weighing documented in Near Eastern texts of the third and second millennia bc can be expected to cause extensive use wear and residual traces on balance weights: using a two-arm balance, when a quantity of raw material that is being assessed does not match the standard weight lying on the opposite pan, smaller balance weights can be added to the quantity being measured, until the scale reaches the point of equilibrium (Peyronel 2011). Thus, for instance, the act of frequently laying a balance weight on top of or within a heap of metal ingots or scraps can produce scratches and residues on its surface. To sum up, use wear may or may not be present on a single balance weight without substantially affecting its interpretation. The perspective shifts slightly, however, if we look at a whole category of potential balance weights. If use wear is not 'systematically' present on every object pertaining to a potential type of balance weights—i.e. if at least some of them do not show use wear—then one can conclude that 'hard use' is not a defining property of that type, thus opening the way to different interpretations. Contexts represent a further criterion. Associations are a fundamental aspect, since they help define the context of use of potential balance weights. In the Aegean and in the Near East, occurrence in administrative quarters can support identification (e.g. Ascalone & Peyronel 2006; Michailidou 2006; Rahmstorf 2010; Schon 2015). This criterion, however, is clearly useless for pre-literate BA Europe, where administrative institutions simply never existed.
On the other hand, weighing equipment is already attested in the early third millennium bc in Greece and western Anatolia, before any evidence of centralized administrations (Rahmstorf 2016). It is also attested in private contexts in the ancient Near East already in the third millennium bc (e.g. Hafford 2005; Rahmstorf 2014b), and in association with private book-keeping at least since the second millennium bc (Kulakoğlu 2017). Balance weights are also well documented in private houses in Greece during the second millennium bc (Petruso 1992, 35–6; Rahmstorf 2003). Furthermore, the widespread presence of balance scales and weights in continental Europe proves that the existence of central administrations is not even a requisite for the existence of weight systems. This urges a focus on the 'private' sphere and on the material correlates of those economic activities that are most likely to rely on weight-based exchange. Based on both archaeological and textual evidence from the Near East and the Aegean, weight-based quantification is commonly associated, since the third millennium bc, with wool (e.g. Biga 2011; Breniquet 2008, 274–8; Liverani 1998, 52–8) and metals (e.g. Archi 1988; Petruso 1992, 35–6; Powell 1996; Rahmstorf 2014b), while it is common opinion that weight systems came into use, in pre-literate Europe, as a consequence of the spread of metallurgy (e.g.
Lenerz-de Wilde Reference Lenerz-de Wilde1995; Pare Reference Pare1999; Reference Pare, Fokkens and Harding2013; Peroni Reference Peroni and Hänsel1998; Primas Reference Primas1997; Renfrew Reference Renfrew, Papadopoulos and Urton2008). Therefore, the contextual analysis is aimed at assessing whether there are significant patterns of association between potential balance weights, textile production, metallurgy and metal trade. Statistical analysis of the metrological properties of potential balance weights represents the last criterion. Other criteria provide circumstantial support, but the regularity of mass values is the single property that entirely subsumes the function of balance weights, and hence it is the only mandatory aspect for a positive identification. The identification of balance weights is a process of hypothesis testing, and as such, aims primarily at excluding unlikely alternatives. In this respect, the positive outcome of statistical tests on potential balance weights is the evidence that most effectively makes any other possible interpretation less likely. The metrological problem, however, is rather complex in itself, and will be treated separately. Units of measurement between norm and practice Units of measurement are pure theoretical concepts, whose function is to provide a frame of reference to comply with a norm. In the Bronze Age of the Near East and the Aegean, the wealth of cross-checked archaeological and textual evidence provides an ideal ground to explore how official, theoretically exact systems were organized and how they were reciprocally connected (Parise Reference Parise1971; Alberti et al. Reference Alberti, Ascalone, Parise, Peyronel, Alberti, Ascalone and Peyronel2006). 
However, any given physical unit becomes 'exact' only as soon as its equivalent value is formally fixed and written down in official accounts: everyday practice was—and still is (Ialongo & Vanzetti Reference Ialongo, Vanzetti, Biagetti and Lugli2016)—far from exact, and deviation from the norm was the norm itself (Chambon Reference Chambon2011). Approximate practice determined significant statistical dispersion in samples of balance weights, thus making excessive reliance on supposedly exact units often misleading. Nevertheless, several studies have shown that advanced statistical methods offer great potential for the empirical evaluation of weight systems, even without relying on exact units as ultimate principles (e.g. Hafford Reference Hafford2005; Reference Hafford2012; Ialongo et al. Reference Ialongo, Vacca and Peyronel2018a; Pakkanen Reference Pakkanen and Brysbaert2011; Petruso Reference Petruso1992; Rahmstorf Reference Rahmstorf, Morley and Renfrew2010). This study proceeds under the assumption that local weight systems, regardless of theoretical units, tend to normalize within large-scale trade networks (Ialongo et al. Reference Ialongo, Vacca and Peyronel2018a). The existence of standard weight systems implies compliance with a norm, but a self-regulated network based on customary commercial relationships can enforce such a norm effectively, even in the absence of centralized regulatory authorities (Chambon Reference Chambon, Alberti, Ascalone and Peyronel2006; Ialongo et al. Reference Ialongo, Vacca, Vanzetti, Brandherm, Heymans and Hofmann2018b; Rahmstorf Reference Rahmstorf, Morley and Renfrew2010). The success of a large-scale network does not even require compliance with a single weight system: the existence of different units in the Aegean, Anatolia, the Levant and Mesopotamia in the Bronze Age, for example, did not hamper trade between these regions in any noticeable way.
Comparative metrology and 'imported' units: the pitfalls of an ill-posed problem Research on balance weights of pre-literate Bronze Age Europe is traditionally based on the quest for exact units (e.g. Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bernabò Brea, Cardarelli and Cremaschi1997; Lo Schiavo Reference Lo Schiavo, Alberti, Ascalone and Peyronel2006; Pare Reference Pare1999; Vilaça Reference Vilaça, Aubet Semmler and Sureda Torres2013), firmly rooted in the assumption that European systems must be based on (or entirely derived from) Aegean or Near Eastern units. The analytical praxis of comparing different systems, under the assumption that they are structurally connected, is defined as 'comparative metrology' (Chambon Reference Chambon2011, 28–38; Powell Reference Powell, Powell and Sack1979). Hence, every supposed 'unit' resulting from the study of balance weights is equated to the most similar one among those that have been already suggested for eastern systems. Already in the early 1900s (Viedebantt Reference Viedebantt1917; Weissbach Reference Weissbach1916), sharp critiques of the 'comparative approach' were published, which came to the conclusion that 'comparative metrology could be of value only after the specialized metrologies had created a more secure basis for comparison' (Powell Reference Powell, Powell and Sack1979, 76). Moreover, this approach overlooks the massive trade network that connected pre-literate Europe in the Bronze Age (e.g. Earle et al. Reference Earle, Ling, Uhnèr, Stos-Gale and Melheim2015; Harding Reference Harding, Fokkens and Harding2013; Pare Reference Pare, Fokkens and Harding2013; Renfrew Reference Renfrew, Papadopoulos and Urton2008) that may have prompted the formation of independent systems of measurement. 
Different Mediterranean units have been 'identified' for potential balance weights in central and western Europe, while no local system was ever acknowledged: an 'Ugaritic' unit in Portugal (9.3–9.4 g: Vilaça Reference Vilaça, Aubet Semmler and Sureda Torres2013), a 'Microasiatic' unit in Sardinia (11.75 g: Lo Schiavo Reference Lo Schiavo, Alberti, Ascalone and Peyronel2006) and an 'Aegean' unit between northern Italy and central Europe (c. 6.1–6.7 g: Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001; Feth Reference Feth, Nessel, Heske and Brandherm2014; Pare Reference Pare1999). However, none of these studies makes use of statistical techniques that allow testing the significance of their samples. A further problem is overconfident reliance on the identification of foreign units whose definition is often still debated. The very existence of the alleged Aegean unit of c. 6.1–6.7 g (Zaccagnini Reference Zaccagnini1999–Reference Zaccagnini2001), for example, is highly uncertain. Such a light unit is derived from a heavier one of c. 58–65 g; while the heavy unit is strongly supported by both inscribed weights and statistical tests (Petruso Reference Petruso1992), there is no clear support for a light unit of c. 1/10 of its value (Hafford Reference Hafford2012). Finally, excessive focus on exactitude determines a lack of attention towards issues of approximation and statistical dispersion (Hafford Reference Hafford2012; Ialongo et al. Reference Ialongo, Vacca, Vanzetti, Brandherm, Heymans and Hofmann2018b; Lo Schiavo Reference Lo Schiavo, Lo Schiavo, Muhly, Maddin and Giumlia-Mair2009; Petruso Reference Petruso1992, 4–7). When we try to identify a unit, we must always bear in mind that that unit simply represents a theoretical 'mode' of a statistical dispersion that normally falls within a range of ±5 per cent (sometimes even more: Hafford Reference Hafford2012) in terms of relative standard deviation (i.e. 
no less than ±10 per cent, if we consider a 2σ distribution). It can even happen that two similar, but distinct, theoretical units are so close that the respective statistical dispersions overlap to a point where they are almost impossible to discern (Hafford Reference Hafford2012): this is the case, for example, of the 'Syrian' (7.8 g) and the 'Mesopotamian' (8.4 g) shekels, whose respective error distributions significantly overlap at a standard deviation of ±5 per cent (Ialongo et al. Reference Ialongo, Vacca and Peyronel2018a). All in all, this means that, when we think we recognize a close similarity between two supposed units, we might in fact be looking at two distinct systems that just happen to be similar enough to be confused. The existence of a weight-system indeed implies the existence of units of measurement, but the absence of texts and quantity marks urges postponement of the quest for exact units until a more solid metrological framework is available for pre-literate BA Europe. Frequency Distribution Analysis (FDA) and Cosine Quantogram Analysis (CQA) are the most widely used statistical methods. The aim of FDA is to identify significant clusters of weight-values; once they are located (if they are present at all), the analysis proceeds by checking whether the mode of each cluster corresponds to an approximate multiple of the same basic value. CQA is a more advanced method, introduced into contemporary weight metrology by Petruso (Reference Petruso1992). In physics, a quantum is the minimal amount of any physical entity involved in an interaction; in weight metrology, the same term defines the amount of mass that 'fits' the largest possible number of measurements in a sample. CQA was devised by Kendall (Reference Kendall1974) to test whether an observed measurement X is an integer multiple of a 'quantum' q plus a small error component ε. X is divided by q and the remainder (ε) is tested. Positive results occur when ε is close to either 0 or q, i.e.
when X is (close to) an integer multiple of q, where N is the sample size: $$\phi(q) = \sqrt{2/N}\,\sum_{i=1}^{N} \cos\left(\frac{2\pi\varepsilon_i}{q}\right)$$ Plotted in a graph, the results show high positive peaks where a quantum gives a high positive value for ϕ(q). The main advantage of CQA over FDA is that the former provides an estimation of likely quanta, while in the latter the quanta must be calculated separately, in the absence of a strict framework. A spreadsheet for the calculation of CQA is appended to this paper online as downloadable supplementary material. CQA is affected by several potential sources of bias (e.g. small sample size, inaccuracy of measurement, coexistence of different unit systems) and its results should be tested for statistical significance (Kendall Reference Kendall1974; Pakkanen Reference Pakkanen and Brysbaert2011). Monte Carlo simulations were executed (based on Kendall Reference Kendall1974) under the null-hypothesis that the sample of potential balance weights is not 'quantally configured', i.e. that the observed probability distribution is due to chance. The samples were randomized by adding a random fraction of ±15 per cent to each measurement. The simulation was applied 100 times and each generated dataset was analysed through CQA. The aim of the test is to observe whether a random dataset with a similar distribution can produce values for ϕ(q) equal to or higher than those obtained for the real dataset, for the same range of quanta. If randomized samples can consistently score higher values than the real sample, we cannot exclude that the probability distribution of the latter is simply due to chance. Since the sample of this study was collected from a very wide area, from different publications with different levels of weighing accuracy, the alpha level is set to 0.05, i.e.
equal or higher results must not occur in more than 5 per cent of the iterations in order for the null-hypothesis to be rejected. CQA is not expected to show a single 'peak', but a series of peaks that are related to a consistent sequence of multiples and fractions. When this happens, and when at least one of the peaks is statistically significant, then the sample is said to be 'quantally configured', i.e. the quanta indicated by the analysis are good descriptors of the variability of the sample (Kendall Reference Kendall1974). It is important to clarify the real capabilities of CQA: in the absence of texts and inscribed weights, it will never be possible to identify which one of the peaks is the actual unit. For a perfectly quantal sample (i.e. a sample made entirely of multiples of the same exact number), the CQA will produce peaks of the same height for every single logical fraction or multiple of the unit itself. The example in Figure 2 shows the results for a perfectly quantal set of observations, corresponding to the nominal weights written on the labels of packaged goods in modern supermarkets in Italy (Ialongo & Vanzetti Reference Ialongo, Vanzetti, Biagetti and Lugli2016). The Quantogram shows a series of equally high peaks at the values of 1 g, 2 g, 2.5 g, 5 g, 10 g, 12.5 g, 25 g and 50 g: if we did not know that the unit of the Decimal System is '1', we would never be able to figure it out, not even with a perfectly quantal dataset. 'The unit' is merely a theoretical concept, and cannot be translated into practice without knowing the underlying normative system. Figure 2. Quantogram of a perfectly quantal sample: weight values written on the labels of packaged goods in Italian supermarkets. (Ialongo & Vanzetti Reference Ialongo, Vanzetti, Biagetti and Lugli2016.) Similar cases are documented in archaeological contexts. 
For instance, the balance weights of the city of Larsa (southern Mesopotamia, second millennium bc) consistently produce high peaks around 5.6 g, corresponding to two-thirds of the 'Mesopotamian unit' of 8.4 g (Ascalone & Peyronel Reference Ascalone and Peyronel2006, 451–64; Ialongo et al. Reference Ialongo, Vacca and Peyronel2018a). Moreover, the inscribed weights from Ayia Irini (Crete) clearly indicate a unit between c. 58 and 65 g (Petruso Reference Petruso1992, 61); however, while the CQA shows very good 'peaks' for the complete array of logical fractions of the unit, it does not indicate any positive result for the unit itself (see further, Fig. 10). In both cases, a normative unit does indeed exist, but, if we could not reconstruct its actual value through texts and inscribed weights, we could conclude—erroneously—that 'the unit' is the value suggested by the statistical analysis. From a statistical point of view, therefore, a significant result for a series of logical multiples is enough to validate the quantal hypothesis, whereas its historical interpretation must still be evaluated against other sources of evidence. 3D reconstruction of chipped weights The chipped objects that were documented directly during this research were subject to 3D scanning and digitally reconstructed (Fig. 3). The volume before and after the reconstruction was measured and the original mass was calculated based on density. Density (d) is a function of volume (v) and mass (m) (d = m/v), and the reconstruction is based on the assumption that, whatever the material employed, every object has an approximately uniform density. Hence, the reconstructed mass (m1) is obtained from a reconstructed volume (v1), given its density (m1 = d* v1). Obviously, this method is only valid for those objects whose original shape can be easily reconstructed (like the example in Figure 3). Figure 3. 3D reconstruction of chipped weights. 
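The volume-to-mass computation behind the reconstruction can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the function name and all numeric values below are hypothetical, not measurements of actual finds.

```python
# Minimal sketch of the density-based reconstruction described above.
# All numbers are hypothetical, chosen purely to illustrate the arithmetic.

def reconstructed_mass(mass_g, volume_cm3, restored_volume_cm3):
    """Assume uniform density d = m/v; the original mass is m1 = d * v1."""
    density = mass_g / volume_cm3          # g/cm^3 of the surviving fragment
    return density * restored_volume_cm3   # mass of the digitally restored solid

# e.g. a chipped weight of 102.4 g whose 3D-scanned volume is 38.1 cm^3,
# digitally restored to 41.5 cm^3:
m1 = reconstructed_mass(102.4, 38.1, 41.5)   # ~111.5 g
```

The uniform-density assumption is exactly the one stated above: it holds well for homogeneous stone, which is why the method is restricted to objects whose original shape can be reconstructed with confidence.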
Long after an early appraisal of the problem (Forrer Reference Forrer1906), research on weighing equipment in pre-literate Bronze Age Europe has seen substantial advancements only in the last 20 years or so (Fig. 1B). A study of stone weights in Northern Italy (Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bernabò Brea, Cardarelli and Cremaschi1997; Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001; Reference Cardarelli, Pacciarelli, Pallante, De Sena and Dessales2004) was shortly followed by the identification of a class of rectangular weights, widely attested in central Europe (Pare Reference Pare1999). Surveys of Portuguese (Vilaça Reference Vilaça2003, Reference Vilaça, Aubet Semmler and Sureda Torres2013), Sardinian (Ialongo Reference Ialongo2011; Lo Schiavo Reference Lo Schiavo, Alberti, Ascalone and Peyronel2006) and Alpine (Feth Reference Feth, Nessel, Heske and Brandherm2014) contexts led to the identification of several types of potential balance weights. Apart from a few objects from northern Italy, dating to around 1500 bc (Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001), the materials date to no earlier than 1400–1350 bc. Solid evidence for the existence of weight systems is also provided by the widespread attestation of balance scales. At least 11 bone/antler balance beams are attested in the Late Bronze Age (Fig. 1B–C), and several other doubtful exemplars are known (Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001; Rahmstorf Reference Rahmstorf, Nessel, Heske and Brandherm2014a). The European evidence is rather exceptional: in the Near East, for example, only one exemplar of a balance beam is known for the whole Bronze Age (Genz Reference Genz2011; Peyronel Reference Peyronel, Ascalone and Peyronel2011), despite thousands of balance weights being attested at dozens of different sites.
Balance scales are somewhat more common in Greece and Cyprus, balance pans being usually the only part preserved (Pare Reference Pare1999). This suggests that many more balances, mainly made of wood, must have existed, as is documented, for example, in Sumerian texts of the third millennium bc (Peyronel Reference Peyronel, Ascalone and Peyronel2011). The Aeolian Islands are a small volcanic archipelago, located off the northeastern coast of Sicily. Between the 1950s and 1980s, the archipelago was the object of an extraordinary research programme, leading to the extensive excavation of several settlements and cemeteries, spanning the entire arc of the Bronze Age (c. 2300–950 bc, in Italian chronology) (Bernabò Brea & Cavalier Reference Bernabò Brea and Cavalier1968; Reference Bernabò Brea and Cavalier1980; Reference Bernabò Brea and Cavalier1991) (Fig. 4). Figure 4. Aeolian Islands, distribution of sites with potential balance weights. (1) Filicudi–Capo Graziano; (2) Salina–Portella; (3) Lipari–Acropolis. For the entire duration of the BA, the Aeolian Islands are fully integrated in Mediterranean networks. Imported Aegean vessels are attested from at least the Capo Graziano 2 phase (c. 1700–1500 bc) until the Ausonio II phase (c. 1200–950 bc) (Jones et al. Reference Jones, Levi and Bettelli2014, 50–54). Cypriot materials occur in layers dating to c. 1500–1350 bc (Martinelli Reference Martinelli2005, 255–60). Evidence of external contacts also includes metal and amber, distributed throughout the entire sequence, and the exceptional recovery of a tin ingot (c. 1500–1350 bc: Bettelli & Cardarelli Reference Bettelli, Cardarelli and Martinelli2010). Finally, impasto vessels of Aeolian production, dating to the first half of the second millennium bc, were recovered on the island of Vivara (Naples), some 260 km to the north (Cazzella et al. Reference Cazzella, Levi and Williams1997).
All the stone objects from Bernabò Brea's excavations (currently preserved in the Bernabò Brea Museum in Lipari) were sorted through, with the exception of flint and obsidian tools. The objects identified as potential balance weights are polished stone parallelepipeds (Fig. 5), ranging between 6.66 g and 469.41 g. Figure 5. Potential balance weights from the Aeolian Islands. A: rectangular weights; B: lenticular weights; C: sphendonoid weight. While these objects pertain to a formal type that is commonly classified as 'whetstone', they show no clear traces of use wear; furthermore, most of them are made of soft stones such as schist, limestone, steatite and pyroclastic material, unsuitable for hard use. The objects were not subject to microscopic analysis by a specialist. However, detailed 3D models, with a level of detail of the order of c. 1/10 mm, aided observation. None of the objects shows dense patterns of parallel or cross-cutting lines compatible with sharpening, and most of them possess uniform textures that do not show any localized smooth patches, grooves from rubbing, or percussion traces. Twenty objects were identified in total: 16 are plain parallelepipeds, with straight or convex sides (Fig. 5.1–13, 16–18). Three objects present a hole towards the top end (Fig. 5.14, 19–20). The heaviest one (Fig. 5.15) presents a rounded end and a circular hollow, possibly an aborted perforation or some sort of identification mark. Four more objects of this type are described in the publication (Bernabò Brea & Cavalier Reference Bernabò Brea and Cavalier1980), but could not be found in the storerooms. Rectangular weights are attested throughout the entire BA sequence, in three different settlements: Lipari-Acropolis, Salina-Portella and Filicudi-Capo Graziano (Fig. 4). Fifteen exemplars come from layers dated to the 'Capo Graziano' phase (c. 2300–1500 bc), two from the 'Milazzese' phase (c. 1500–1350 bc) and three from the 'Ausonio II' phase (c.
1200–950 bc). The majority of the finds belongs to the Capo Graziano phase, the earliest of the Aeolian BA sequence. The term 'Capo Graziano' (from the eponymous village on the island of Filicudi) identifies ceramic assemblages attested in northeast Sicily, between the Early Bronze Age (c. 2300–1700 bc) and the beginning of the Middle Bronze Age (c. 1700–1500 bc). Imported Aegean pottery is present in many Capo Graziano contexts (Jones et al. Reference Jones, Levi and Bettelli2014, 50–54). Typological considerations suggest that the village on the 'Acropolis' of Lipari mostly pertains to the sub-phase Capo Graziano 2 (Bernabò Brea & Cavalier Reference Bernabò Brea and Cavalier1980, 217–58), dating between c. 1730–1500 cal. bc (Alberti Reference Alberti2013; Martinelli et al. Reference Martinelli, Fiorentino and Prosdocimi2010). Twelve out of 20 rectangular weights come from the Capo Graziano layers on the Acropolis, suggesting a notable concentration of the evidence in sub-phase Capo Graziano 2. While it cannot be ruled out that some of the objects pertain to the earlier sub-phase, the evidence from the Acropolis provides a solid terminus ante quem at c. 1500 cal. bc: this makes the Aeolian weights the earliest known in Europe so far, outside of Greece. The type is also well attested in peninsular Italy and Sardinia (Fig. 6A). All the objects come from Bronze Age settlements, the overall chronology spanning between c. 1500 and 725 bc. The materials from Coppa Nevigata (e.g. Cazzella et al. Reference Cazzella, Moscoloni and Recchia2012) and Monte Croce-Guardia (e.g. Cardarelli et al. in press) include several types of potential balance weights that are currently under study, and only the objects pertaining to the rectangular type are considered in this article. 
The Italian materials are very similar to rectangular weights widespread in central Europe in the LBA—mainly made of bronze, but with a few exemplars in stone—already identified by Pare (Reference Pare1999) (Fig. 6B). In the Late Bronze Age necropolis of Migennes, in France, two balance beams were found in the same grave, together with sets of rectangular weights (Roscio et al. Reference Roscio, Delor and Muller2011). In the eastern Mediterranean, this shape is not very common: a few stone weights from the shipwrecks of Uluburun and Cape Gelydonia can be vaguely compared (Fig. 6C) (Pulak Reference Pulak1996), and a single bronze weight from Uluburun is very similar (Fig. 6.20). The eastern Mediterranean evidence is substantially later than the Aeolian weights, and cannot be used to prove a dependency of western weights on eastern models; besides, it cannot be excluded that some of the rectangular weights in the Anatolian shipwrecks are of western origin. Figure 6. Potential rectangular weights. A: unpublished materials from Peninsular Italy and Sardinia; B: central Europe (from Pare Reference Pare1999); C: (20–21) Uluburun (from Pulak Reference Pulak1996); (22–23) Cape Gelydonia (from Pulak Reference Pulak1996). The sample of rectangular weights includes all the unpublished objects pertaining to this class attested in the Aeolian Islands (n = 16), Sardinia (n = 7) and peninsular Italy (n = 6), 23 objects identified by Pare (Reference Pare1999) in central Europe, six rectangular weights from the burial of Migennes and five objects from the site of Zug-Sumpf, in Switzerland (Bolliger Schreyer et al. Reference Bolliger Schreyer, Maise, Rast-Eicher, Ruckstuhl and Speck2004, Taf. 228). The sample comprises 63 complete or reconstructed items in total, ranging between 0.3 g and 469.41 g. 
The range of mass values is too wide for a single analysis to be accurate: therefore, the sample was split into two smaller, partially overlapping datasets of 1.5–20 g and 15–470 g; the smallest objects (three in total: 0.30 g, 0.39 g, 1.06 g) were not considered, since the very small size can produce an excessive measurement error. Frequency Distribution Analysis (FDA) shows that the sample forms neat clusters around 3–3.5 g, 6.5–7 g, 13 g, 20 g, 40 g, 50 g, 60 g and 80 g (Fig. 7A). Both datasets were analysed through CQA, targeting 1000 quanta between 1 g and 4 g and between 4 g and 24 g, respectively (Fig. 7B). The significance test rejects the null hypothesis: the quanta at 1.1 g, 1.65 g and 19.54 g are statistically significant, the latter being beyond the 1 per cent significance threshold (alpha = 4.56), i.e. there is less than a 1 per cent chance that a random dataset with the same distribution can produce a quantum with ϕ(q) ≥ 4.56 in the same range. The analyses show five further peaks around 3.3 g, 4.08 g, 5.16 g, 6.34 g and 10.24 g. The complete array of values forms a perfectly logical series of multiples and fractions, as is expected from quantally configured datasets (Kendall Reference Kendall1974; Pakkanen Reference Pakkanen and Brysbaert2011). By taking the quantum at 19.54 g as reference (for no other reason than being 'the highest'), we obtain a sequence of fractions corresponding to 1/18, 1/12, 1/6, 1/5, 1/4, 1/3 and 1/2. Figure 7. Statistical analysis of potential rectangular weights. A: Frequency Distribution Analysis. B: Cosine Quantogram Analysis of the total sample of potential rectangular weights; the fractions refer to the highest peak at 19.54 g. C: Comparison between the Italian sample (grey area) and the sample collected in Pare (Reference Pare1999) (black line). The CQA is displayed using a logarithmic scale, since the concentration of the peaks in the lower range would make the graph otherwise unreadable.
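As a concrete illustration of the method, the quantogram scan and the randomization test described above can be sketched in Python. The sample below is a synthetic toy dataset built on a hypothetical quantum of 19.5 g; the function names and all numeric values are illustrative assumptions, not the published archaeological data.

```python
import math
import random

def cosine_quantogram(values, q):
    """Kendall's phi(q) = sqrt(2/N) * sum_i cos(2*pi*eps_i/q), where eps_i
    is the remainder of each measurement divided by the candidate quantum q."""
    n = len(values)
    total = sum(math.cos(2.0 * math.pi * (x % q) / q) for x in values)
    return math.sqrt(2.0 / n) * total

def best_quantum(values, lo, hi, steps=1000):
    """Scan `steps` candidate quanta between lo and hi; return (phi, q) of the peak."""
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max((cosine_quantogram(values, q), q) for q in grid)

def monte_carlo_p(values, lo, hi, observed_phi, n_iter=100, jitter=0.15):
    """Null hypothesis: the sample is not quantally configured. Each iteration
    perturbs every measurement by a random fraction of +/-15 per cent and
    re-runs the scan; the p-value is the share of runs scoring >= observed_phi."""
    hits = 0
    for _ in range(n_iter):
        fake = [x * (1.0 + random.uniform(-jitter, jitter)) for x in values]
        if best_quantum(fake, lo, hi)[0] >= observed_phi:
            hits += 1
    return hits / n_iter

# Toy sample: 60 multiples of a hypothetical 19.5 g quantum, with small noise.
random.seed(1)
sample = [random.randint(1, 12) * 19.5 + random.gauss(0.0, 0.3) for _ in range(60)]
phi_peak, q_peak = best_quantum(sample, 4.0, 24.0)
p_value = monte_carlo_p(sample, 4.0, 24.0, phi_peak)
```

On a clearly quantal toy sample like this, the peak lands near 19.5 g with ϕ(q) well above the significance threshold, while the randomized runs score far lower; real archaeological samples are much noisier, which is precisely why the randomization test matters.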
The small sample of rectangular weights from central Europe was analysed by Pare (Reference Pare1999) through CQA; the results are very similar to those obtained in the present study. The comparison between the quantograms of the Italian and of the central European samples shows that the peaks are located approximately in the same position, coinciding, in turn, with the peaks of the total sample (Fig. 7C). Pare identifies the possible unit of the central European weights in the peak between 6 g and 7 g, proposing to connect it to a hypothetical 'Aegean' unit of slightly more than 6 g. However, the current analyses suggest that such a peak could equally be a by-product of a series based on c. 5 g, 10 g or 20 g (or any multiple of these numbers), of which the value between 6 g and 7 g would represent just a logical fraction. Moreover, the tests clearly show that the Italian sample is significant even if considered separately, while the central European one is not. The comparison between the two samples demonstrates that the two series are perfectly compatible, but also that the relative height of the peaks in the sample from central Europe is not significant. A second type comprises lenticular objects, always made of stone, with an annular groove or a flattened surface along the diameter. Four of these objects were identified in the Aeolian Islands (Fig. 5B): one from the Acropolis of Lipari (Ausonio II phase, c. 1200–950 bc) and three from Salina-Portella (Milazzese phase, c. 1500–1350 bc). None of them shows clear traces of use wear. These objects are mainly made of sandstone, but a few exemplars are made of limestone, marble and porphyry (Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001), which, together with the absence of systematic use wear, suggests that the type was not meant to be regularly used in working activities.
The type has already been identified as a potential class of balance weights in northern Italy, with a chronology of c. 1500–1150 bc (Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bernabò Brea, Cardarelli and Cremaschi1997; Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001; Reference Cardarelli, Pacciarelli, Pallante, De Sena and Dessales2004) (Fig. 8.2–3). At least one exemplar is documented in Sardinia, at the coastal site of Sant'Imbenia (e.g. Rendeli Reference Rendeli2012), from a context of the mid eighth century bc (Fig. 8.1). These objects are also widely attested in continental Europe (known as 'Kanneluren-' or 'Rillensteine': Horst Reference Horst1981) (Fig. 8.4–5), although their interpretation as balance weights was never discussed. The variant with the flattened diameter is also similar to a type of balance weight attested in the eastern Mediterranean (Fig. 8.6–7). Figure 8. A: Potential lenticular weights. (1) Nuraghe Sant'Imbenia; (2–3) northern Italy (from Cardarelli et al. Reference Cardarelli, Pacciarelli, Pallante, Bellintani, Corti and Giordani2001); (4–5) Alpine pile-dwellings (Leuvrey Reference Leuvrey1999); (6–7) Uluburun (from Pulak Reference Pulak1996). B: sphendonoid weights (from Pulak Reference Pulak1996). (8) Uluburun; (9) Cape Gelydonia. The sample of lenticular weights includes 65 items in total, ranging between 275.43 g and 1273 g: two objects from the Aeolian Islands, 38 from Northern Italy, 20 from pile-dwelling settlements in Switzerland (Bolliger Schreyer et al. Reference Bolliger Schreyer, Maise, Rast-Eicher, Ruckstuhl and Speck2004, Taf. 223–225; Leuvrey Reference Leuvrey1999, 79–81), and one, unpublished, from Nuraghe Sant'Imbenia in Sardinia (Fig. 8.1), for an overall chronology between c. 1500 and 750 bc.
Two outliers were removed before the analyses, in order to maintain the sample at a homogeneous scale: one weight from the Aeolian Islands (2,929 g), and one from Northern Italy (41 g). Finally, a chipped object from the Aeolian Islands was not considered, since it was not possible to obtain a 3D scan. FDA indicates that lenticular weights cluster around c. 440 g, 550 g, 660 g, 850 g and 1,250 g (Fig. 9). The CQA shows three statistically significant peaks at 27.5 g, 107.5 g and c. 440 g (Fig. 10). The three peaks are part of a logical sequence of multiples: 27.5 g is almost exactly a quarter of 107.5 g, and exactly one-sixteenth of 440 g. Cardarelli et al. propose a unit of c. 54 g for the lenticular weights, which is perfectly compatible with the CQA results (54 ≈ 27.5×2 ≈ 107.5/2 ≈ 440/8). All these numbers are equally good candidates to serve as a unit of measurement. Figure 9. Frequency Distribution Analysis of potential lenticular weights. Figure 10. Cosine Quantogram Analysis comparison between the European rectangular and lenticular weights (A) and the balance weights from Ayia Irini (B); the grey bands show where the respective peaks overlap. A: the fractions in plain small text are relative to the quantum of 19.54 g; the numbers in italics are relative to the quantum at 27.5 g. B: the fractions are relative to the Aegean unit of c. 61–65 g. The CQA is displayed using a logarithmic scale, since the concentration of the peaks in the lower range would make the graph otherwise unreadable. A single 'sphendonoid' weight with flat base (137.46 g) is attested in the Ausonio I phase on the Acropolis of Lipari (c. 1350–1200 bc). The type is extremely common in the central and eastern Mediterranean, and is attested at both Uluburun and Cape Gelydonia (Fig. 8B). To date, only one further sphendonoid weight is known in pre-literate Europe, in the grave of Migennes (Roscio et al.
2011; identified by Rahmstorf 2014a, 3, 13). In this case, the sample is not large enough for statistical tests, and the identification must rely solely on typology and contextual associations. The site on the acropolis of Lipari is a multi-stratified settlement with four superimposed building phases (Bernabò Brea & Cavalier 1980); potential balance weights are present in all occupation phases except one (Milazzese phase, c. 1500–1350 bc). In the first phase (Capo Graziano phase, c. 2300–1500 bc), two groups of three weights come from two of the best-preserved houses, while another is associated with the casting mould of an axe (Fig. 11A). In the Ausonio I phase (c. 1350–1200 bc), a rectangular weight is associated with the sphendonoid weight (Fig. 11B). In the last occupation phase (Ausonio II, c. 1200–950 bc), a pair of rectangular weights is associated with a lenticular weight in the largest house of the settlement, together with a casting mould and a hoard containing approximately 75 kg of copper ingots and scrap metal (Fig. 11C). Figure 11. Lipari, Acropolis. A–C: Distribution of potential balance weights and of the evidence related to metalworking, metal trade and textile production. The position of the symbols is not accurate; their main purpose is to show which materials were found inside the houses. (A) Capo Graziano phase (c. 2300–1500 bc); (B) Ausonio I phase (c. 1350–1200 bc); (C) Ausonio II phase (c. 1200–950 bc). (D) Quantification of different classes of materials inside the houses. The Greek letters identify the different phases of the settlement, from the earliest (δ) to the latest (α). Textile tools also show meaningful patterns of association. The loom weights found in the settlement are always associated with potential balance weights.
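The Cosine Quantogram Analysis used above follows Kendall's (1974) statistic: for a candidate quantum q, each mass is reduced to its deviation ε from the nearest multiple of q, and φ(q) = √(2/N)·Σ cos(2πεᵢ/q) peaks when most masses are near-exact multiples. A minimal sketch in Python, using randomly generated illustrative data rather than the published dataset (the function name and sample values are assumptions for demonstration):

```python
import numpy as np

def cosine_quantogram(masses, quanta):
    """Kendall's cosine quantogram: phi(q) = sqrt(2/N) * sum(cos(2*pi*eps_i/q)),
    where eps_i is each mass's deviation from the nearest multiple of q.
    A high phi(q) marks q as a good candidate quantum."""
    masses = np.asarray(masses, dtype=float)
    n = len(masses)
    scores = []
    for q in quanta:
        eps = masses - q * np.round(masses / q)  # deviation from nearest multiple of q
        scores.append(np.sqrt(2.0 / n) * np.cos(2 * np.pi * eps / q).sum())
    return np.array(scores)

# Illustrative sample: 60 masses clustered on multiples of ~27.5 g, with small noise
rng = np.random.default_rng(0)
sample = 27.5 * rng.integers(1, 20, 60) + rng.normal(0, 0.5, 60)

quanta = np.linspace(5, 60, 1101)          # candidate quanta, 0.05 g steps
phi = cosine_quantogram(sample, quanta)
best = quanta[phi.argmax()]
print(f"best quantum = {best:.2f} g")      # recovers a quantum near 27.5 g
```

Statistical significance of a peak is then usually assessed by Monte Carlo simulation against randomized samples, as in the published quantogram literature.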
The number of spindle whorls inside houses normally ranges between 1 and 7; there are only three houses—one for each phase—in which the spindle whorls number between 13 and 19: such large numbers of spindle whorls are always associated with loom weights and potential balance weights. Finally, at the site of Portella di Salina, two lenticular weights were found in the same structure (R2), in association with a tin ingot and a casting mould (Bettelli & Cardarelli 2010). To summarize, in the Aeolian Islands potential weights often occur in small sets inside houses, and are significantly associated with evidence of metalworking, metal hoarding and textile production. In central Europe, a set of rectangular weights is associated with two balance beams in the Late Bronze Age burial of Migennes, together with metallurgy-related working tools and small pieces of scrap gold and bronze (Roscio et al. 2011). Balance beams, rectangular weights and scrap gold/bronze represent a recurrent set of associations in LBA burials, possibly related to social figures dealing in metal trade (Pare 1999). Finally, lenticular weights are frequently associated, in central and eastern Europe, with metal-working facilities in settlements (Horst 1981; Vrdoljak & Stašo 1995) and with casting moulds in burials (Schmalfuß 2007). The study of the Aeolian materials highlights the presence in the archipelago of at least two standard types of potential balance weights. Both the rectangular and the lenticular types represent peculiar European shapes, with a distribution spanning from southern Italy to central Europe.
The rectangular type, in particular, is very common in Europe and only scarcely documented in the Mediterranean, which may support the hypothesis that the rectangular weights attested at Uluburun and Cape Gelydonia have a western origin. Finally, the presence of a single sphendonoid weight—a type widespread in the eastern Mediterranean and extremely rare in Europe—might represent a residual trace of direct transactions with Mediterranean traders. Rectangular weights are generally plain objects, but a few of them present a hole towards the top end. Such a feature suggests slightly different functions for the two variants. The presence of a hanging hook is a common feature in balance weights. Its use is described in cuneiform texts (Peyronel 2011): a single weight hanging from one arm of the balance can be used as a counterweight, in order to weigh different quantities repeatedly against a fixed amount of mass. The hanging hook is also very common in a type of pear-shaped weight widespread in northern Italy and the Alpine region in the Middle and Late Bronze Age (Cardarelli et al. 2001). The insertion of a metal ring is sometimes documented in perforated weights (e.g. Feth 2014; Kulakoğlu 2017; Pulak 1996); the rectangular weights, however, never show traces of metal inside the hole, and thus it is possible that they were simply held by a cord. Furthermore, the association with textile tools in the Aeolian Islands might raise the doubt that these objects are in fact loom weights. However, many clay loom weights with the typical truncated-pyramid shape are documented as well, which leads us to exclude this function for the stone objects. The circular indentation on the heaviest rectangular weight (Fig.
5.15) might be either an aborted perforation or some kind of quantity mark. In either case, the attempted perforation would not have affected the mass of the object in any significant way: the object is quite massive (469.41 g), and any weight loss deriving from the perforation would have been much smaller than the commonly accepted error margin of ±5 per cent. The possibility of the existence of quantity marks, on the other hand, cannot be verified, since the occurrence of possible signs on potential balance weights in pre-literate Europe is still too rare. The annular groove of some of the lenticular weights might have been used to fasten a cord, thus suggesting a pendent position. However, the exemplars from the Aeolian Islands and Sardinia, and several exemplars from northern Italy and the Alpine region, have a flat surface, which makes them rather closer to the flat variant of the 'domed' weights from the eastern Mediterranean. Statistical analyses support the balance-weight hypothesis for both rectangular and lenticular weights. The two types appear to produce a logical sequence of multiples of a common system (Fig. 10A). If we choose the value of 19.54 g as a reference, we obtain a sequence of 1/12–1/6–1/5–1/4–1/3–1/2 for the lower part of the series. The higher range presents a series of very well-fitting multiples of the highest peak (27.5 g), which can still be correlated to the quantum of 19.54 g, for a sequence of 1½, 3, 5 and 20–22. The highest quantum (440 g) is too big to be compared directly, since the standard error distribution of 440 g (i.e. ±5 per cent = ±22 g) is larger than 19.54 g, and therefore some uncertainty persists on the classification of its fractional value. Comparison between the quantogram of the European sample and that of the balance weights of Ayia Irini highlights several similarities (n = 51; range = 12–390 g, excluding six outliers between 506 g and 1615 g) (Fig. 10B).
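The arithmetic behind the sequences of multiples discussed above is easy to verify. A minimal check, using only the peak values and the ±5 per cent error margin quoted in the text (the variable names are purely illustrative):

```python
# CQA peaks (g) as reported in the text
peaks = {"low": 27.5, "mid": 107.5, "high": 440.0}

# 27.5 g is almost exactly a quarter of 107.5 g, and exactly 1/16 of 440 g
assert abs(peaks["mid"] / 4 - peaks["low"]) / peaks["low"] < 0.05  # 26.875 vs 27.5, ~2.3% off
assert peaks["high"] / 16 == peaks["low"]                          # 440/16 = 27.5 exactly

# Cardarelli et al.'s proposed unit of c. 54 g is compatible with all three peaks
unit = 54.0
for derived in (peaks["low"] * 2, peaks["mid"] / 2, peaks["high"] / 8):
    assert abs(derived - unit) / unit < 0.05  # 55, 53.75, 55: all within the 5% margin

# The conventional ±5% margin on the 440 g quantum exceeds the 19.54 g reference,
# which is why the fractional classification of 440 g remains uncertain
assert 440 * 0.05 > 19.54  # ±22 g > 19.54 g
print("all checks pass")
```

The same ±5 per cent tolerance is the margin conventionally accepted for ancient balance weights, so each proposed multiple stays well inside the measurable precision of the objects.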
The peaks of the European system match meaningful fractions of the Aegean one at the values of c. 5.16 g, c. 7.22 g, c. 10.24 g and c. 19.54 g, corresponding, respectively, to c. 1/12, c. 1/9, c. 1/6 and c. 1/3 of the Aegean unit of c. 58–65 g (Petruso 1992). The higher and the lower values do not produce notable peaks, but this depends on the sample being composed of mid-range weights. The statistical dispersions of the peaks systematically overlap, showing that the two systems share common multiples and fractions: this means that, regardless of the theoretical unit, such systems could easily be converted into each other with a negligible error (Ialongo et al. 2018a). The 'matching points' between the European and the Aegean systems provide convenient conversion factors, of which Bronze Age traders must have been aware. Hence, previous suggestions about the similarity between the European and the Aegean systems are confirmed (Cardarelli et al. 2001; Pare 1999), even though proving a direct dependency is beyond the capabilities of the method. Whether or not the identification of a theoretical unit is the point, the results show that the Mediterranean and European systems were largely compatible. Moreover, in the framework of a 'globalized' exchange (Earle et al. 2015; Vandkilde 2016), one should not rule out the possibility that even the eastern systems may have been influenced by the European ones. The types of balance weights discussed in this article are systematically associated, in the LBA of central and eastern Europe, with balance beams, casting moulds, metal-working tools, metal-working facilities and gold/bronze scraps, both in burials and in settlements (Horst 1981; Pare 1999; Roscio et al.
2011; Schmalfuß 2007; Vrdoljak & Stašo 1995). In the Aeolian Islands, the potential balance weights regularly occur in small sets inside houses and are systematically associated with evidence of metal trade (tin ingot and metal hoard), metallurgy (casting moulds) and textile production (loom weights and high numbers of spindle whorls) (Fig. 11). Hence, weighing equipment in Bronze Age Europe is systematically associated with those economic activities that are most likely to rely on quantity-based exchange. Both metallurgical and textile production require means to assess the value of incoming raw materials and outgoing finished products. It can be argued that the 'added value' of specialized craftsmanship might not determine a mark-up on the purchase value of a crafted product, since other immaterial factors—such as the symbolic meaning of the object or the social prerogatives of the giver—can contribute to shaping the perceived value of an object being exchanged (Brück & Fontijn 2013). In any case, this can hardly apply to raw commodities, whose economic value must be at least equal to the amount of labour required for their production: we do not know whether the local production of wool in the Aeolian Islands was enough to support local textile craft entirely, but the metallurgical activities certainly had to be supplied through external trade, and there is hardly any way to assess the value of a shipment of raw metal other than by its weight. The spatial distribution indicates that weighing equipment occurs inside one or two houses per phase, suggesting that it was related to trade-dependent activities handled within households—the latter not necessarily intended as mere physical spaces, but also as co-operative kinship-based economic units.
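Returning to the cross-system comparison with Ayia Irini (Fig. 10B), the claimed matching points can be checked directly: each European peak, multiplied by its proposed divisor, should reproduce an Aegean unit inside Petruso's range of c. 58–65 g. A short sketch using only the values quoted above:

```python
# European CQA peaks (g) paired with their proposed fractions of the Aegean unit
# (peak, divisor) pairs taken from the text: 1/12, 1/9, 1/6, 1/3
matches = [(5.16, 12), (7.22, 9), (10.24, 6), (19.54, 3)]

for peak, divisor in matches:
    implied_unit = peak * divisor  # Aegean unit implied by this matching point
    print(f"{peak:5.2f} g x {divisor:2d} = {implied_unit:.2f} g")
    # every implied unit falls within Petruso's c. 58-65 g range
    assert 58 <= implied_unit <= 65
```

The four implied units (61.92 g, 64.98 g, 61.44 g, 58.62 g) all fall inside the attested Aegean range, which is the sense in which the matching points work as conversion factors between the two systems.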
The clustered distribution of balance weights, textile tools, casting moulds and hoards suggests that not every household was equally engaged in trade-dependent production. Interpretive models of the diachronic development of Aeolian society in the broader framework of southern Italy describe an increasing stratification, with specialized craftsmen eventually becoming attached to emerging elites (Peroni 1996). From this perspective, the large house α II in the last occupation layer on the acropolis of Lipari might provide an example of the incipient centralization of some trade-dependent economic activities (Fig. 11C). The presence of the under-floor hoard, with 75 kg of scraps and ingots, hints at the capacity of a single household to gather and dispose of substantial quantities of raw metal that had to be acquired through external trade. The evidence is substantially in line with the documentation from private contexts in the Aegean and the Near East (see above, 'The identification of balance weights'). All considered, it seems plausible that one of the basic purposes of weight-based trade was to exchange raw materials to be transformed into finished products; at the same time, weight-based exchange was also likely employed to transfer transiting commodities to external traders, and vice versa. The evidence suggests that trade-dependent economic activities were handled within a few selected economic units. While most households in a typical Bronze Age village would tend to focus on staple production, a few of them may have invested in trade-dependent production, providing services for the community and seeking marginal profit at the same time. The hypothesis of a weight-based trade managed within households encourages reflection about its agents.
In the case, for instance, of metal trade—one of the largest sectors of the Bronze Age economy, largely dependent on long-distance exchange—the economic cycle of a single mass of copper would be articulated into at least three basic phases: extraction, transportation and manufacture (e.g. Earle et al. 2015). In a basic model, in which a different agent carries out each phase, we would be dealing, theoretically, with a miner, a merchant and an artisan respectively. This, however, does not account for all the possible combinations. In a simpler instance, for example, the same agent can be responsible for more than one phase. On the other hand, transportation and manufacture can take place repeatedly and indefinitely, each time carried out by a different agent—not to mention the possible existence of supervisors, appointed to oversee the fairness of transactions. In other words, the life-cycle of a single mass of copper implies an indefinite number of instances of weight-based exchange, involving different agents with different skills, purposes and social backgrounds: a highly varied range of socio-economic figures effectively connected in a seamless flow by weighing technology as a means to quantify exchange values. Future research will help clarify whether the agents behind the Aeolian evidence were crafters, shopkeepers, seafaring merchants, supervisors, or a mix of different figures. Nonetheless, the evidence suggests that such figures may be less elusive in pre-literate Europe than one generally thinks. The analysed sample of potential balance weights from Bronze Age Europe meets the expectations set for typological, metrological and contextual characteristics.
To summarize: 1) balance weights have standardized shapes, widespread across Europe, do not normally show systematic use-wear, and are often made of materials unsuitable for working tools; 2) the statistical tests are significant, and highlight a consistent system of multiples and fractions that is compatible with sets of balance weights widespread across Europe; furthermore, the European weight system is largely compatible with other Mediterranean standards, regardless of whether or not they share the same unit; 3) balance weights are systematically associated, in Europe, with balance beams and with evidence of metal trade, metallurgy and textile production; 4) the evidence from the Aeolian Islands suggests that balance weights were employed for trade-dependent production by a few selected households. The evidence illustrated in this paper is but a small portion of what could potentially be available to research, but first it is necessary to draw more attention to the problem of an independent metrology for pre-literate Bronze Age Europe. Nonetheless, a few general traits can be outlined, to be explored in future research. Weighing equipment begins to spread in Europe in the same period in which copper, possibly along with other goods, assumes the role of a commodity proper (Pare 2013; Renfrew 2008). Recently, provenance studies of raw materials, in particular metals, have raised the question of a continent-wide network of commodity exchange (e.g. Ling et al. 2014; Lutz & Pernicka 2013). A striking contrast existed between the distribution of sources and products: the former were rare, concentrated and unevenly distributed, the latter nearly ubiquitous.
This might indicate that regional economies developed a specialization in the production of locally abundant raw materials, for which a high demand existed elsewhere, while relying on external trade to acquire commodities that were locally lacking, or simply too costly to produce (Earle et al. 2015). The disequilibrium in the relative cost of producing and importing different commodities at a continental scale was probably accompanied by the emergence of regionally differentiated, socially acknowledged, yet fluctuating perceptions of costs and gains: i.e. different value systems that had to be converted in order to make cross-regional exchange possible. The 'commodification' of goods was probably accompanied by the development of a cross-cultural frame of reference for the quantification of their value (Pare 2013). In the framework of a continent-wide circulation of commodities and people (Harding 2013; Vandkilde 2016), uniform weight systems would have greatly facilitated cross-cultural trade. The Aeolian evidence suggests that this process started at least as early as the first half of the second millennium bc in pre-literate Bronze Age Europe. However, the metrological study of pre-literate Europe is still very young, and the earliest known attestation is not necessarily the earliest ever; as research on balance weights progresses, it is not unlikely that new contexts will be identified, pushing this chronological limit further back. Before we can seriously speak of 'the earliest' weight systems, therefore, it is crucial to identify new contexts and shapes, map them out and discuss their reciprocal differences and similarities.
This work was supported by the European Research Council under the European Union's Horizon 2020 Framework Programme and was carried out within the scope of the ERC-2014-CoG 'WEIGHTANDVALUE: Weight metrology and its economic and social impact on Bronze Age Europe, West and South Asia', based at the Georg-August-Universität, Göttingen, Germany [Grant no. 648055], Principal Investigator: Lorenz Rahmstorf. Access to the materials was granted by: Museo Archeologico Regionale Eoliano 'Luigi Bernabò Brea', director M.A. Mastelloni, appointed official M.C. Martinelli; Soprintendenza Archeologia, Belle Arti e Paesaggio per le Province di Sassari e Nuoro, Soprintendente F. Di Gennaro, appointed officials N. Canu and A. Sanciu; Polo Museale della Sardegna, director G. Damiani, appointed officials M. Giorgio and M. Puddu; Museo Archeologico di Dorgali, director G. Pisanu and curator M.G. Corrias; the excavation team of Monte Croce-Guardia, director A. Cardarelli; the excavation team of Coppa Nevigata, directors A. Cazzella and G. Recchia; the excavation team of Oratino, director V. Copat; the excavation team of Nuraghe Sant'Imbenia, director M. Rendeli; the excavation team of Nuraghe Palmavera, director A. Moravetti. My special gratitude goes to Maria Clara Martinelli, Alessandro Usai, Anna Depalmas and Luca Doro, for their invaluable help in accessing the materials. Finally, I am grateful to Lorenz Rahmstorf, Giulia Recchia, Agnese Vacca, Eleonore Pape, Sarah Clegg and Marco Bettelli for the stimulating discussions and helpful advice, and to the anonymous reviewers, who offered extremely detailed and knowledgeable comments. A preliminary version of this paper was presented at the session 'The value of all things. Value expression and value assessment in the ancient world (Europe, Near East and the Mediterranean)', organized by A. Gorgues, L. Melheim and T. Poigt at the 2017 EAA Conference in Maastricht. 
To view supplementary material for this article, please visit https://doi.org/10.1017/S0959774318000392

Alberti, G., 2013. A Bayesian 14C chronology of Early and Middle Bronze Age in Sicily. Towards an independent absolute dating. Journal of Archaeological Science 40, 2502–13.
Alberti, M.E., Ascalone, E., Parise, N. & Peyronel, L., 2006. Weights in context: current approaches to the study of the ancient weight-systems, in Weights in Context. Bronze Age Weighing Systems of the Eastern Mediterranean: Chronology, Typology and Archaeological Contexts, eds. Alberti, M.E., Ascalone, E. & Peyronel, L. (Studi e Materiali 13.) Rome: Istituto Italiano di Numismatica, 1–8.
Archi, A., 1988. Testi amministrativi: registrazioni di metalli e tessuti. (Archivi Reali di Ebla 7.) Rome: Università degli Studi di Roma 'La Sapienza'.
Ascalone, E. & Peyronel, L., 2006. I pesi da bilancia del Bronzo Antico e del Bronzo Medio. (Materiali e Studi Archeologici di Ebla VII.) Rome: Università degli Studi di Roma 'La Sapienza'.
Bernabò Brea, L. & Cavalier, M., 1968. Stazioni preistoriche delle isole di Panarea, Salina e Stromboli. (Meligunìs Lipàra III.) Palermo: Flaccovio.
Bernabò Brea, L. & Cavalier, M., 1980. L'acropoli di Lipari nella preistoria. (Meligunìs Lipàra IV.) Palermo: Flaccovio.
Bernabò Brea, L. & Cavalier, M., 1991. Filicudi. Insediamenti dell'età del bronzo. (Meligunìs Lipàra VI.) Palermo: Accademia di Scienze, Lettere e Arti.
Bettelli, M. & Cardarelli, A., 2010. La spada di bronzo e la grappa di stagno, in Martinelli, M.C., Archeologia delle Isole Eolie. Il villaggio dell'età del Bronzo Medio di Portella di Salina. Ricerche 2006 e 2008. Muggiò: Rebus Edizioni, 165–71.
Biga, M.G., 2011. La lana nei testi degli Archivi Reali di Ebla (Siria, XXIV sec. a.C.): alcune osservazioni, in Studi Italiani di Metrologia ed Economia del Vicino Oriente Antico, eds.
Ascalone, E. & Peyronel, L. (Studia Asiana 7.) Rome: Herder, 77–92.
Bolliger Schreyer, S., Maise, C., Rast-Eicher, A., Ruckstuhl, B. & Speck, J., 2004. Die spätbronzezeitlichen Ufersiedlungen von Zug-Sumpf. Band 3/2. Die Funde der Grabungen 1923–37. Tafeln und Katalog. Zug: Kantonales Museum für Urgeschichte Zug.
Breglia, F., Caricola, I. & Larocca, F., 2016. Macrolithic tools for mining and primary processing of metal ores from the site of Grotta della Monaca (Calabria, Italy). Journal of Lithic Studies 3 (3), 1–20.
Breniquet, C., 2008. Essai sur le tissage en Mésopotamie. Paris: De Boccard.
Brück, J. & Fontijn, D., 2013. The myth of the chief: prestige goods, power, and personhood in the European Bronze Age, in The Oxford Handbook of the European Bronze Age, eds. Fokkens, H. & Harding, A. Oxford: Oxford University Press, 197–215.
Cardarelli, A., Bettelli, M., Di Renzoni, A., et al., in press. Nuove ricerche nell'abitato della tarda età del Bronzo di Monte Croce Guardia (Arcevia – AN): scavi 2015–2016. Rivista di Scienze Preistoriche.
Cardarelli, A., Pacciarelli, M. & Pallante, P., 1997. Pesi da bilancia nell'età del bronzo?, in Le Terramare. La più antica civiltà padana, eds. Bernabò Brea, M., Cardarelli, A. & Cremaschi, M. Milan: Electa, 629–42.
Cardarelli, A., Pacciarelli, M. & Pallante, P., 2004. Pesi e bilance nell'età del bronzo italiana: quadro generale e nuovi dati, in Archaeological Methods and Approaches: Industry and Commerce in Ancient Italy, eds. De Sena, E. & Dessales, H. (BAR International Series S1262.) Oxford: Archaeopress, 80–88.
Cardarelli, A., Pacciarelli, M., Pallante, P. & Bellintani, P., 2001. Pesi e bilance nell'età del bronzo italiana, in Pondera. Pesi e misure nell'Antichità, eds. Corti, C. & Giordani, N. Modena: Libra 93, 33–58.
Cazzella, A., Levi, S.T. & Williams, J.L., 1997.
The petrographic examination of impasto pottery from Vivara and the Aeolian Islands: a case for inter-island pottery exchange in the Bronze Age of southern Italy. Origini 21, 187–205.
Cazzella, A., Moscoloni, M. & Recchia, G., 2012. Coppa Nevigata e l'area umida alla foce del Candelaro durante l'età del Bronzo. Foggia: Edizioni del Parco/Grenzi.
Chambon, G., 2006. Weights in the documentation from Mari: the issue of the norm, in Weights in Context. Bronze Age weighing systems of the eastern Mediterranean: chronology, typology and archaeological contexts, eds. Alberti, M.E., Ascalone, E. & Peyronel, L. (Studi e Materiali 13.) Rome: Istituto Italiano di Numismatica, 185–202.
Chambon, G., 2011. Normes et pratiques: L'homme, la mesure et l'écriture en Mésopotamie. I. Les mesures de capacité et de poids en Syrie Ancienne, d'Ebla à Émar. (Berliner Beiträge zum Vorderen Orient 21.) Berlin: PeWe.
Delgado Raack, S. & Risch, R., 2008. Lithic perspectives on metallurgy: an example from Copper and Bronze Age south-east Iberia, in Functional Studies and the Russian Legacy. Proceedings of the International Congress, Verona, 20–23 April 2005, eds. Longo, L. & Skakun, N. (BAR International Series S1783.) Oxford: Archaeopress, 235–52.
Earle, T., Ling, J., Uhnèr, C., Stos-Gale, Z. & Melheim, L., 2015. The political economy and metal trade in Bronze Age Europe: understanding regional variability in terms of comparative advantages and articulations. European Journal of Archaeology 18 (4), 633–57.
Feth, W., 2014. Ha B-zeitliche Waagewichte? Überlegungen zu Wirtschaft und Handel in den jungbronzezeitlichen Seeufersiedlungen der Schweiz, in Ressourcen und Rohstoffe in der Bronzezeit. Nutzung – Distribution – Kontrolle, eds. Nessel, B., Heske, I. & Brandherm, D. Wünsdorf: Brandenburgisches Landesamt für Denkmalpflege und Archäologisches Landesmuseum, 121–9.
Fokkens, H.
& Harding, A., 2013. Introduction: the Bronze Age of Europe, in The Oxford Handbook of the European Bronze Age, eds. Fokkens, H. & Harding, A. Oxford: Oxford University Press, 1–11.
Forrer, R., 1906. Die ägyptischen, kretischen, phönikischen etc. Gewichte und Maße der europäischen Kupfer-, Bronze- und Eisenzeit. Grundlagen zur Schaffung einer prähistorischen Metrologie. Jahrbuch der Gesellschaft für Lothringens Geschichte 18, 1–77.
Genz, H., 2011. Restoring the balance: an Early Bronze Age scale beam from Tell Fadous-Kfarabida, Lebanon. Antiquity 85, 839–50.
Hafford, W.B., 2005. Mesopotamian mensuration: balance pan weights from Nippur. Journal of the Economic and Social History of the Orient 48 (3), 345–87.
Hafford, W.B., 2012. Weighing in Mesopotamia. The balance pan weights from Ur. Akkadica 133 (1), 21–65.
Harding, A., 2000. European Societies in the Bronze Age. Cambridge: Cambridge University Press.
Harding, A., 2013. Trade and exchange, in The Oxford Handbook of the European Bronze Age, eds. Fokkens, H. & Harding, A. Oxford: Oxford University Press, 370–79.
Horst, F., 1981. Bronzezeitliche Steingegenstände aus dem Elbe-Oder-Raum. Bodendenkmalpflege in Mecklenburg 29, 33–83.
Iaia, C., 2014. Ricerche sugli strumenti da metallurgo nella protostoria dell'Italia settentrionale: gli utensili a percussione. Padusa 50, 65–109.
Ialongo, N., 2011. Il santuario nuragico di Monte S. Antonio di Siligo (SS). Studio analitico dei contesti cultuali della Sardegna protostorica. Unpublished PhD dissertation, Università di Roma 'La Sapienza'. Available at: http://hdl.handle.net/10805/1490 (accessed 5 June 2018).
Ialongo, N., Vacca, A. & Peyronel, L., 2018a. Breaking down the bullion. The compliance of bullion-currencies with official weight-systems in a case-study from the ancient Near East.
Journal of Archaeological Science 91, 20–32.
Ialongo, N., Vacca, A. & Vanzetti, A., 2018b. Indeterminacy and approximation in Mediterranean weight systems in the 3rd and 2nd millennia bc, in Gifts, Goods and Money – Comparing currency and circulation systems in past societies, eds. Brandherm, D., Heymans, E. & Hofmann, D. Oxford: Archaeopress, 9–44.
Ialongo, N. & Vanzetti, A., 2016. The intangible weight of things: approximate nominal weights in modern society, in The Intangible Elements of Culture in Ethnoarchaeological Research, eds. Biagetti, S. & Lugli, F. Basel: Springer International, 283–91.
Joly, J., 1965. Circonscription de Dijon. Gallia préhistoire 8, 57–81.
Jones, R., Levi, S.T. & Bettelli, M., 2014. Italo-Mycenaean Pottery: The archaeological and archaeometric dimensions. Rome: CNR – Istituto di Studi sul Mediterraneo Antico.
Kendall, D.G., 1974. Hunting quanta. Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 276, 231–66.
Kulakoğlu, F., 2017. Balance stone weights and scale-pans from Kültepe-Kanesh: on one of the basic elements of the Old Assyrian trading system, in Overturning Certainties in Near Eastern Archaeology. Festschrift in honor of K. Aslıhan Yener, eds. Çiğdem, M., Horowitz, M.T. & Gilbert, A.S. Leiden/Boston: Brill, 341–402.
Lawson, A.J., 2000. Potterne 1982–1985. Animal husbandry in later prehistoric Wiltshire. (Wessex Archaeology Report 17.) Salisbury: Wessex Archaeology.
Lenerz-de Wilde, M., 1995. Prämonetäre Zahlungsmittel in der Kupfer- und Bronzezeit Mitteleuropas. Fundberichte aus Baden-Württemberg 20 (1), 229–327.
Leuvrey, J.M., 1999. Hauterive-Champréveyres 12. L'industrie lithique du Bronze final. Étude typo-technologique. (Archéologie neuchâteloise 24.)
Neuchâtel: Musée Cantonal d'Archéologie.
Ling, J., Stos-Gale, Z., Grandin, L., Billström, K., Hjärthner-Holdar, E. & Persson, P.O., 2014. Moving metals II: provenancing Scandinavian Bronze Age artefacts by lead isotope and elemental analyses. Journal of Archaeological Science 41, 106–32.
Liverani, M., 1998. Uruk la prima città. Rome/Bari: Laterza.
Lo Schiavo, F., 2006. Western weights in context, in Weights in Context. Bronze Age weighing systems of the eastern Mediterranean: chronology, typology and archaeological contexts, eds. Alberti, M.E., Ascalone, E. & Peyronel, L. (Studi e Materiali 13.) Rome: Istituto Italiano di Numismatica, 359–79.
Lo Schiavo, M., 2009. A threefold ill-posed problem, in Oxhide Ingots in the Central Mediterranean, eds. Lo Schiavo, F., Muhly, J.D., Maddin, R. & Giumlia-Mair, A. (Biblioteca di Antichità Cipriote 8.) Rome: A.G. Leventis Foundation, 449–71.
Lutz, J. & Pernicka, E., 2013. Prehistoric copper from the Eastern Alps. Open Journal of Archaeometry 1. doi: https://doi.org/10.4081/arc.2013.e25.
Martinelli, M.C., 2005. Il Villaggio dell'Età del Bronzo Medio di Portella a Salina nelle Isole Eolie. Florence: Istituto Italiano di Preistoria e Protostoria.
Martinelli, M.C., Fiorentino, G., Prosdocimi, B., et al., 2010. Nuove ricerche nell'insediamento sull'istmo di Filo Braccio a Filicudi. Nota preliminare sugli scavi 2009. Origini 32, 285–314.
Medović, P., 1995. Die Waage aus der frühhallstattzeitlichen Siedlung Bordjoš (Borjas) bei Novi Bečej (Banat), in Handel, Tausch und Verkehr im bronze- und früheisenzeitlichen Südosteuropa, ed. Hänsel, B. (Südosteuropa-Schriften 17/Prähistorische Archäologie in Südosteuropa 11.) Munich/Berlin: Südosteuropa-Gesellschaft/Seminar für Ur- und Frühgeschichte der Freien Universität zu Berlin, 209–18.
Michailidou, A., 2006. Stone balance weights?
The evidence from Akrotiri on Thera, in Weights in Context. Bronze Age weighing systems of the eastern Mediterranean: chronology, typology and archaeological contexts, eds. Alberti, E., Ascalone, E. & Peyronel, L.. (Studi e Materiali 13.) Rome: Istituto Italiano di Numismatica, 233–63.Google Scholar Mohen, J.P. & Bailloud, G., 1987. La vie quotidienne. Le fouilles du Fort-Harrouard. (L' âge du bronze en France 4.) Paris: Picard.Google Scholar Mordant, C. & Mordant, D., 1970. Le site protohistorique des Gours-aux-Lions. (Mémoires de la Société Préhistorique Française 8.) Paris: Société Préhistorique Française.Google Scholar Pakkanen, J., 2011. Aegean Bronze Age weights, chaînes opératoires and the detecting of patterns through statistical analyses, in Tracing Prehistoric Social Networks through Technology: A diachronic perspective on the Aegean, ed. Brysbaert, A.. London: Routledge, 143–66.Google Scholar Pare, C.F.E., 1999. Weights and weighing in Bronze Age central Europe, in Eliten in der Bronzezeit: Ergebnisse zweier Kolloquien in Mainz und Athen. (Monographien des Römisch-Germanischen Zentralmuseums 43.) Mainz/Bonn: Verlag des Römisch-Germanischen Zentralmuseums, 421–514.Google Scholar Pare, C.F.E., 2013. Weighing, commodification and trade, in The Oxford Handbook of the European Bronze Age, eds. Fokkens, H. & Harding, A.. Oxford: Oxford University Press, 508–27.Google Scholar Parise, N., 1971. Per uno studio del sistema ponderale ugaritico. Dialoghi di Archeologia 1, 3–36.Google Scholar Peake, R., Séguier, J.M. & Gomez de Soto, J., 1999. Trois exemples de fléaux de balances en os de l'Age du Bronze. Bulletin de la Société préhistorique française 96 (4), 643–44.CrossRefGoogle Scholar Peroni, R., 1996. L'Italia alle soglie della storia. Bari: Laterza.Google Scholar Peroni, R., 1998. Bronzezeitliche Gewichtssysteme im Metallhandel zwischen Mittelmeer und Ostsee, in Mensch und Umwelt in der Bronzezeit Europas, ed. Hänsel, B.. 
Kiel: Oetker-Voges, 217–24.Google Scholar Peroni, R., 2006. La circolazione dei beni e le sue motivazioni extraeconomiche ed economiche, in Materie prime e scambi nella Preistoria italiana. (Atti della XXXIX Riunione Scientifica Istituto Italiano di Preistoria e Protostoria.) Florence: Istituto Italiano di Preistoria e Protostoria, 169–87.Google Scholar Petruso, K.M., 1992. Ayia Irini. The balance weights: an analysis of weight measurement in prehistoric Crete and the Cycladic Islands. (Keos 8.) Mainz: Philipp von Zabern.Google Scholar Peyronel, L., 2011. Mašqaltum Kittum. Questioni di equilibrio: bilance e sistemi di pesatura nell'Oriente Antico, in Studi Italiani di Metrologia ed Economia del Vicino Oriente Antico, eds. Ascalone, E. & Peyronel, L.. (Studia Asiana 7.) Rome: Herder, 105–61.Google Scholar Powell, M.A., 1977. Sumerian merchants and the problem of profit. Iraq 39, 23–9.CrossRefGoogle Scholar Powell, M.A., 1979. Ancient Mesopotamian weight metrology: methods, problems and perspectives, in Studies in Honour of Tom B. Jones, eds. Powell, M.A. & Sack, R.H.. (Alter Orient und Altes Testament 203.) Neunkirchen: Butzon & Bercker, 71–109.Google Scholar Powell, M.A., 1996. Money in Mesopotamia. Journal of the Economic and Social History of the Orient 39 (3), 224–42.CrossRefGoogle Scholar Primas, M., 1997. Bronze Age economy and ideology: central Europe in focus. Journal of European Archaeology 5 (1), 115–30.CrossRefGoogle Scholar Pulak, C.M., 1996. Analysis of the Weight Assemblages from the Late Bronze Age Shipwrecks at Uluburun and Cape Gelydonia, Turkey. Unpublished PhD dissertation, Texas A&M University. Available at: https://anthropology.tamu.edu/wp-content/uploads/sites/11/2016/07/Pulak-PhD1996.pdf (accessed 6 February 2018).Google Scholar Rahmstorf, L., 2003. The identification of Early Helladic weights and their wider implications, in Metron. Measuring the Aegean Bronze Age, eds. Foster, K.P. & Laffineur, R.. (Aegeum 24.) 
Liège/Austin: Université de Liège/University of Texas, 293–9.Google Scholar Rahmstorf, L., 2006a. Zur Ausbreitung vorderasiatischer Innovationen in die frühbronzezeitliche Ägäis. Praehistorische Zeitschrift 81, 49–96.CrossRefGoogle Scholar Rahmstorf, L., 2006b. In search of the earliest balance weights, scales and weighing systems from the East Mediterranean, the Near and Middle East, in Weights in Context. Bronze Age Weighing Systems of Eastern Mediterranean. Chronology, Typology, Material and Archaeological Contexts, eds. Alberti, M.E., Ascalone, E. & Peyronel, L.. (Studi e Materiali 13.) Rome: Istituto Italiano di Numismatica, 9–46.Google Scholar Rahmstorf, L., 2010. The concept of weighing during the Bronze Age in the Aegean, the Near East and Europe, in The Archaeology of Measurement. Comprehending heaven, earth and time in ancient societies, eds. Morley, I. & Renfrew, C.. Cambridge: Cambridge University Press, 88–105.CrossRefGoogle Scholar Rahmstorf, L., 2014a. 'Pebble weights' aus Mitteleuropa und Waagebalken aus der jüngeren Bronzezeit (ca. 14.–12. Jh. V. Chr.), in Ressourcen und Rohstoffe in der Bronzezeit. Nutzung – Distribution – Kontrolle, eds. Nessel, B., Heske, I. & Brandherm, D.. Wünsdorf: Brandenburgisches Landesamt für Denkmalpflege und Archäologisches Landesmuseum, 109–20.Google Scholar Rahmstorf, L., 2014b. Early balance weights in Mesopotamia and western Syria: origin and context, in Proceedings of the 8th International Congress on the Archaeology of the Ancient Near East (Warsaw, April 30th–May 4th 2012), eds. Bieliński, P., Gawlikowski, M., Koliński, R., Ławecka, D., Sołtysiak, A. & Wygnańska, Z.. Wiesbaden: Harrassowitz, 427–41.Google Scholar Rahmstorf, L., 2016. Emerging economic complexity in the Aegean and western Anatolia during earlier third millennium BC, in Of Odysseys and Oddities. Scales and modes of interaction between prehistoric Aegean societies and their neighbours, ed. Molloy, B.P.C.. 
Oxford/Philadelphia: Oxbow, 225–76.Google Scholar Rendeli, M., 2012. Il progetto Sant'Imbenia. ArcheoArte. Rivista elettronica di Archeologia e Arte 1, supplemento 24, 323–38. doi: 10.4429/j.arart.2011.suppl.24Google Scholar Renfrew, C., 2008. Systems of value among material things: the nexus of fungibility and measure, in The Construction of Value in the Ancient World, eds. Papadopoulos, J.K. & Urton, G.. (Cotsen Advanced Seminar Series 5.) Los Angeles (CA): Cotsen Institute of Archaeology Press, 249–60.Google Scholar Roscio, M., Delor, J.P. & Muller, F., 2011. Late Bronze Age graves with weighing equipment from Eastern France. Archäologische Korrespondenzblatt 41 (2), 173–86.Google Scholar Schmalfuß, G., 2007. Das Gräberfeld Battaune, Kr. Delitzsch in Sachsen. Ein jüngstbronzezeitliches Gräberfeld der Lausitzer Kultur – die Ergebnisse der Grabungen von 1974/75. Leipziger online-Beiträge zur Ur- und Frühgeschichtlichen Archäologie 29. https://www.gko.uni-leipzig.de/historisches-seminar/seminar/ur-und-fruehgeschichte/publikationen/leipziger-online-beitraege.htmlGoogle Scholar Schon, R., 2015. Weight sets: identification and analysis. Cambridge Archaeological Journal 25 (2), 477–94.CrossRefGoogle Scholar Schuster, J., 2014. Bone balance beam, in Cliffs End Farm, Isle of Thanet, Kent, eds. McKinley, J.I., Leivers, M., Schuster, J., Marshall, P., Barclay, A.J. & Stoodley, N.. (Wessex Archaeology Report 31.) Salisbury: Wessex Archaeology, 190–91.Google Scholar Vandkilde, H., 2016. Bronzization: the Bronze Age as a pre-modern globalization. Praehistorische Zeitschrift 91 (1), 103–23.CrossRefGoogle Scholar Viedebantt, O., 1917. Forschungen zur Metrologie des Altertums. (Abhandlungen der Philologisch-Historischen Klasse der Sächsischen Akademie der Wissenschaften 34.) Leipzig: Teubner.Google Scholar Vilaça, R., 2003. Acerca da existência de ponderais em contextos do Bronze Final/Ferro Inicial no território português. 
O Arqueólogo Português 21, 245–88.Google Scholar Vilaça, R., 2013. Late Bronze Age: Mediterranean impacts in the western end of the Iberian Peninsula (actions and reactions), in Interacción Social y Comercio en la Antesala del Colonialismo, eds. Aubet Semmler, M.E. & Sureda Torres, P.. (Cuadernos de Arqueología Mediterránea 21.) Barcelona: Universidad Pompeu Fabra, 13–41.Google Scholar Vrdoljak, S. & Stašo, F., 1995. Bronze-casting and organization of production at Kalnik-Igrišče (Croatia). Antiquity 70, 49–91.Google Scholar Weissbach, F.H., 1916. Neue Beiträge zur keilinschriftlichen Gewichtskunde. Zeitschrift der Deutschen Morgenländischen Gesellschaft 69, 577–82.Google Scholar Yerkes, R.W., Khalaily, H. & Barkai, R., 2012. Form and function of Early Neolithic bifacial stone tools reflect changes in land use practices during the neolithization process in the Levant. PloS ONE 7 (8), 1–11.CrossRefGoogle Scholar Zaccagnini, C., 1999–2001. The mina of Karkemiš and other minas. State Archives of Assyria Bulletin 13, 39–56.Google Scholar Ialongo supplementary material Ialongo supplementary material 1 File 3 MB File 26 KB Total number of HTML views: 312 Total number of PDF views: 504 * * Views captured on Cambridge Core between 07th September 2018 - 17th January 2021. This data will be updated every 24 hours. 4 Cited by Hostname: page-component-77fc7d77f9-qmqs2 Total loading time: 0.751 Render date: 2021-01-17T10:30:13.855Z Query parameters: { "hasAccess": "1", "openAccess": "1", "isLogged": "0", "lang": "en" } Feature Flags last update: Sun Jan 17 2021 10:03:09 GMT+0000 (Coordinated Universal Time) Feature Flags: { "metrics": true, "metricsAbstractViews": false, "peerReview": true, "crossMark": true, "comments": true, "relatedCommentaries": true, "subject": true, "clr": true, "languageSwitch": true, "figures": false, "newCiteModal": false, "shouldUseShareProductTool": true, "shouldUseHypothesis": true, "isUnsiloEnabled": true } Rahmstorf, Lorenz 2019. 
Scales, weights and weight-regulated artefacts in Middle and Late Bronze Age Britain. Antiquity, Vol. 93, Issue. 371, p. 1197. Uhlig, Tobias Krüger, Joachim Lidke, Gundula Jantzen, Detlef Lorenz, Sebastian Ialongo, Nicola and Terberger, Thomas 2019. Lost in combat? A scrap metal find from the Bronze Age battlefield site at Tollense. Antiquity, Vol. 93, Issue. 371, p. 1211. Cavazzuti, Claudio 2020. Xosé-Lois Armada, Mercedes Murillo-Barroso and Mike Charlton, eds. Metals, Minds and Mobility: Integrating Scientific Data with Archaeological Theory (Oxford & Philadelphia: Oxbow Books, 2018, 191pp., 40 b/w illustr., 22 colour plates, 5 tables, hbk, ISBN 978-1-7857-0905-0). European Journal of Archaeology, Vol. 23, Issue. 1, p. 137. Hermann, Raphael Steinhoff, Judith Schlotzhauer, Philipp and Vana, Philipp 2020. Breaking News! Making and testing Bronze Age balance scales. Journal of Archaeological Science: Reports, Vol. 32, Issue. , p. 102444. Nicola Ialongo (a1)
CommonCrawl
Kelvin

This page uses content from the Engineering wiki on Wikia. The original article was at Kelvin. The list of authors can be seen in the page history. As with the Units of Measurement wiki, the text of the Engineering wiki is available under the Creative Commons License; see Wikia:Licensing.

Kelvin temperature conversion formulas:

- kelvins to degrees Celsius: °C = K − 273.15
- degrees Celsius to kelvins: K = °C + 273.15
- kelvins to degrees Fahrenheit: °F = K × 1.8 − 459.67
- degrees Fahrenheit to kelvins: K = (°F + 459.67) / 1.8

Note that for temperature intervals rather than temperature readings, 1 K = 1 °C and 1 K = 1.8 °F.

The kelvin (symbol: K) is the SI unit of temperature, and is one of the seven SI base units. It is defined as the fraction 1/273.16 of the thermodynamic (absolute) temperature of the triple point of water. A temperature given in kelvins, without further qualification, is measured with respect to absolute zero, where molecular motion stops (except for the residual quantum mechanical zero-point energy). It is also common to give a temperature relative to the Celsius temperature scale, with a reference temperature of 0 °C = 273.15 K, approximately the melting point of water under ordinary conditions.

The kelvin is named after the British physicist and engineer William Thomson, 1st Baron Kelvin; his barony was in turn named after the River Kelvin, which runs through the grounds of the University of Glasgow.

Typographical conventions

The word kelvin as an SI unit is correctly written with a lowercase k (unless at the beginning of a sentence), and is never preceded by the words degree or degrees, or the symbol °, unlike degrees Fahrenheit or degrees Celsius. This is because the latter are adjectives, whereas kelvin is a noun. It takes the normal plural form by adding an s in English: kelvins.
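As an illustration, the conversion formulas above translate directly into code. This is a sketch written for this page, not part of any standard library; the function names are my own:

```python
# Temperature conversions following the formulas tabulated above.
# Valid for temperature readings (not intervals).

def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_kelvin(c):
    return c + 273.15

def kelvin_to_fahrenheit(k):
    return k * 1.8 - 459.67

def fahrenheit_to_kelvin(f):
    return (f + 459.67) / 1.8

# Example: the triple point of water, 273.16 K, is 0.01 degrees Celsius,
# and absolute zero, 0 K, is -459.67 degrees Fahrenheit.
print(kelvin_to_celsius(273.16))    # approximately 0.01
print(kelvin_to_fahrenheit(0.0))    # -459.67
```

Note that the round trip through Fahrenheit is exact in the formulas but only approximate in floating point.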
When the kelvin was introduced in 1954 (10th General Conference on Weights and Measures (CGPM), Resolution 3, CR 79), it was the "degree Kelvin", written °K; the "degree" was dropped in 1967 (13th CGPM, Resolution 3, CR 104). Note that the symbol for the kelvin unit is always a capital K and never italicized. There is a space between the number and the K, as with all other SI units. Unicode includes a "kelvin sign" at U+212A. However, the "kelvin sign" is canonically decomposed into U+004B, and is thereby regarded as a (preexisting) encoding mistake, so it is better to use U+004B (K) directly.

Conversion factors

Kelvins and Celsius

The Celsius temperature scale is now defined in terms of the kelvin, with 0 °C corresponding to 273.15 kelvins.

kelvins to degrees Celsius: $ \mathrm{C} = \mathrm{K} - 273.15 $

Temperature and energy

Strictly speaking, the temperature of a system is well defined only if its particles (atoms, molecules, electrons) are at equilibrium and obey a Boltzmann distribution (or its quantum mechanical counterpart). In a thermodynamic system, the energy of the particles of a perfect gas is proportional to the absolute temperature, where the constant of proportionality is the Boltzmann constant. As a result, it is possible to determine the average kinetic energy $ \overline{\mathrm{E_{kin}}} $ of the gas particles at the temperature T, or to calculate the temperature of the gas from the average kinetic energy of the particles:

$ \overline{\mathrm{E_{kin}}} = \frac{3}{2} \cdot k_B \cdot \mathrm{T} $

The temperature of equilibrium electromagnetic radiation, a system of photons, is determined by the energy intensity, as given by Planck's blackbody distribution law.
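The kinetic-energy relation above is easy to illustrate numerically. A minimal sketch (the Boltzmann constant value is the standard SI one; the function names are my own):

```python
# Average kinetic energy of ideal-gas particles from temperature, and back,
# using E_kin = (3/2) * k_B * T as stated above.

K_B = 1.380649e-23  # Boltzmann constant in J/K

def mean_kinetic_energy(temperature_kelvin):
    return 1.5 * K_B * temperature_kelvin

def temperature_from_mean_energy(energy_joules):
    return energy_joules / (1.5 * K_B)

e = mean_kinetic_energy(300.0)          # about 6.2e-21 J near room temperature
print(temperature_from_mean_energy(e))  # recovers 300.0 K
```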
External links

- ITS-90 International Temperature Scale
- BIPM brochure on the kelvin
- Kelvin to Celsius conversion

Retrieved from "https://units.fandom.com/wiki/Kelvin?oldid=6031"

Category: Units of temperature
On the classical limit of the Schrödinger equation

Claude Bardos 1, François Golse 2, Peter Markowich 3 and Thierry Paul 2

1. Université Paris-Diderot, Laboratoire J.-L. Lions, BP187, 4 place Jussieu, 75252 Paris Cedex 05, France
2. Ecole polytechnique, CMLS, 91128 Palaiseau Cedex, France
3. King Abdullah University of Science and Technology, MCSE Division, Thuwal 23955-6900, Saudi Arabia

Discrete & Continuous Dynamical Systems - A, December 2015, 35(12): 5689-5709. doi: 10.3934/dcds.2015.35.5689

Received April 2014; Published May 2015

Abstract: This paper provides an elementary proof of the classical limit of the Schrödinger equation with WKB type initial data and over arbitrarily long finite time intervals. We use only the stationary phase method and the Laptev-Sigal simple and elegant construction of a parametrix for Schrödinger type equations [A. Laptev, I. Sigal, Review of Math. Phys. 12 (2000), 749-766]. We also explain in detail how the phase shifts across caustics obtained when using the Laptev-Sigal parametrix are related to the Maslov index.

Keywords: WKB expansion, caustic, Lagrangian manifold, classical limit, Schrödinger equation, Fourier integral operators, Maslov index.

Mathematics Subject Classification: Primary: 35Q41, 81Q20; Secondary: 35S30, 53D1.

Citation: Claude Bardos, François Golse, Peter Markowich, Thierry Paul. On the classical limit of the Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2015, 35 (12) : 5689-5709. doi: 10.3934/dcds.2015.35.5689

References:

[1] L. V. Ahlfors, Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, $2^{nd}$ edition, 1966.
[2] V. I. Arnold, Characteristic class entering in quantization condition, Func. Anal. Appl., 1 (1967), 1. doi: 10.1007/BF01075861.
[3] V. I. Arnold, Geometrical Methods of the Theory of Ordinary Differential Equations, Springer-Verlag, 1988. doi: 10.1007/978-1-4612-1037-5.
[4] V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag, 1989. doi: 10.1007/978-1-4757-2063-1.
[5] C. Bardos, F. Golse, P. Markowich and T. Paul, Hamiltonian evolution of monokinetic measures with rough momentum profile, Archive for Rational Mechanics and Analysis, 217 (2015), 71. doi: 10.1007/s00205-014-0829-7.
[6] P. Gérard, P. Markowich, N. Mauser and F. Poupaud, Homogenization limit and Wigner transforms, Comm. on Pure and App. Math., 50 (1997), 323. doi: 10.1002/(SICI)1097-0312(199704)50:4<323::AID-CPA4>3.0.CO;2-C.
[7] V. Guillemin and S. Sternberg, Geometric Asymptotics, Amer. Math. Soc., 1977.
[8] L. Hörmander, The Analysis of Linear Partial Differential Operators I. Distribution Theory and Fourier Analysis, $2^{nd}$ edition, 1990. doi: 10.1007/978-3-642-96750-4.
[9] L. Hörmander, The Analysis of Linear Partial Differential Operators II. Differential Operators with Constant Coefficients, Springer-Verlag, 1983. doi: 10.1007/978-3-642-96750-4.
[10] L. Hörmander, The Analysis of Linear Partial Differential Operators III. Pseudo-differential Operators, $2^{nd}$ edition, 1994. doi: 10.1007/978-3-540-49938-1.
[11] L. Hörmander, The Analysis of Linear Partial Differential Operators IV. Fourier Integral Operators, $2^{nd}$ edition, 1994. doi: 10.1007/978-3-642-00136-9.
[12] A. Laptev and I. Sigal, Global Fourier integral operators and semiclassical asymptotics, Review of Math. Phys., 12 (2000), 749. doi: 10.1142/S0129055X00000289.
[13] J. Leray, Lagrangian Analysis and Quantum Mechanics, The MIT Press, 1981.
[14] P.-L. Lions and T. Paul, Sur les mesures de Wigner, Rev. Mat. Iberoamericana, 9 (1993), 553. doi: 10.4171/RMI/143.
[15] V. P. Maslov, Théorie des Perturbations et Méthodes Asymptotiques, Dunod, 1972.
[16] V. P. Maslov and M. V. Fedoryuk, Semiclassical Approximation in Quantum Mechanics, Reidel Publishing Company, 1981.
[17] J. Milnor, Morse Theory, Princeton Univ. Press, 1963.
[18] D. Serre, Matrices, $2^{nd}$ edition, 2010. doi: 10.1007/978-1-4419-7683-3.
[19] J.-M. Souriau, Construction explicite de l'indice de Maslov, in Group Theoretical Methods in Physics (eds. A. Janner, ...), 1976, 117.
What's the greatest range of orders of magnitude?

There's a famous claim along the lines of "40 decimal places of pi are sufficient to calculate the circumference of the Observable Universe to the width of a hydrogen atom".

I don't know the accuracy and detail of the claim, but it prompted me to be curious... I assume that the claim (if it were true and accurately remembered) is equivalent to stating: "There are 40 orders of (decimal) magnitude difference between the diameter of the universe and the diameter of a hydrogen atom".

But that's not the biggest possible difference between interestingly measurable things, because the diameter of a hydrogen atom isn't the smallest length... we could go smaller (protons, electrons, quarks, the Planck length). I don't know astrophysics well enough to know whether there's anything interesting to describe bigger than the Observable Universe. But it seems that, by considering length, you can arrive at "the greatest possible difference in orders of magnitude". There are other things that can be measured, though. Time, for example.

So the question: what metric has the greatest range of orders of magnitude that are interesting to talk about? And how big is that range?

Tags: order-of-magnitude. Asked by Brondahl.

Comments:
- You might not know astrophysics, but have you done some research yourself to answer your own question? – Farcher Feb 6 '17 at 11:37
- I would say the ratio of the diameter of the visible universe to that of the Planck length, about $10^{71}$. – user126422 Feb 6 '17 at 15:58
- @AlbertAspect: How about the volume of the visible universe divided by the volume of a cube with each side equal to the Planck length? Would that be about $10^{71 \times 3}$? – James Feb 6 '17 at 16:53
- @James I believe he is asking for length only. In any case, what is "interesting to talk about" is subjective. – user126422 Feb 6 '17 at 16:56
- @Bop_Bee that's the magnitude of error in our otherwise best theories compared to reality. Yes, it's possibly the most orders of magnitude error in any theory ever not instantly consigned to the mental trash-can of bad ideas. – nigel222 Feb 16 '17 at 15:38

Answer (Rococo):

Your question is pretty vague, but I will restrict it to mean: what is the physical property with the largest range of measured values? This is still probably subjective, but it's a little more manageable, and fun to think about anyway.

Here's one possibility: the range of measured half-lives of radioactive isotopes (see the wiki list). The shortest measured half-life (that of hydrogen-7) is of order $10^{-23}$ seconds, and the longest (that of tellurium-128) is of order $10^{31}$ seconds, so they span an amazing 54 orders of magnitude in all. This is kind of ridiculous. It is more than the ratio between the size of a proton and the size of the observable universe, which are separated by a mere 41 orders of magnitude (maybe this is what your quote is supposed to say?), and it is about the difference between the Planck length and a light-year (!).

It's fun to think about what the experimental challenges must be in making measurements at both ends of that spectrum. Both ends (particularly the long-time end) are bounded by experimental ability, so this is not too far from being a list of the range of times over which we can measure anything. Naturally, that means it is subject to change. For example, we've been looking for proton decay for a long time, but all we can say right now is that the lifetime must be more than of order $10^{39}$ seconds. If we ever find it, this range will shoot at least another hundred million times larger.

Comment: The decay time is exponentially related to the size of the potential barrier which has to be "tunnelled" in order for decay to happen.
You can of course squash any range by taking logarithms, but in this case the logarithm has very significant physical meaning. – nigel222 Feb 16 '17 at 15:32

Comment: @nigel222 Yes, I drew out this point explicitly in another recent question: physics.stackexchange.com/questions/312406/… . Similar remarks apply to conductivity, which in many insulators will scale something like $e^{-L}$, where L is the macroscopic size of the system, and can be thought of (in a tight-binding model) as electrons tunnelling from site to site. It's a safe bet that most physical quantities that compete with these will also have a similar exponential underlying dependence on some parameter. – Rococo Feb 16 '17 at 16:23

Comment: And at the extreme limit there is the lifespan of a black hole, which will eventually evaporate by emitting Hawking radiation. Well, it will if the theory is right. It's un-observable, for at least two reasons I can think of. – nigel222 Feb 16 '17 at 16:29

Answer (Steve Byrnes):

Resistivity has a quite impressive range: for example, the resistivity of Teflon is about $10^{30}$ times higher than the resistivity of copper. So I think "resistivity of different materials" might be the winner, or at least a contender, for the largest ratio between quantities that can and often do come up naturally in everyday life.

Comment: You're underselling yourself. You've omitted superconductors, which have resistance not measurably higher than zero, in that an electric current continues "forever", or at least until the liquid helium is delivered too late! – nigel222 Feb 16 '17 at 15:45

Comment: For any quantity that can be either zero or nonzero, it spans infinitely many orders of magnitude. That's kinda cheating. So I'll say that "nonzero resistivities" is my non-cheating answer. – Steve Byrnes Feb 16 '17 at 17:04

Answer (nigel222):

Measured bulk baryonic densities vary by around 45 orders of magnitude: from around $10^{18}$ kg/m$^3$ in neutron stars to $4\times 10^{-28}$ kg/m$^3$ for the universe as a whole.

The observed energy of a single particle is interesting, because energy (and particles) are fundamental. At one extreme, the IceCube observatory has claimed detection of neutrinos with energies of 0.001 eV. I'm not sure if an energy difference counts, but the Mössbauer effect means that the change of energy arising from Doppler-shifting gamma photons from a radioactive source moving at a few centimeters per second is detectable: that's an energy difference of under $10^{-5}$ eV.

At the other extreme there are "OMG" cosmic rays with energies in excess of $10^{20}$ eV. We can't be absolutely certain that these are single protons. A mechanism for generating such particles is hard to envisage (and it has to be located in our immediate galactic proximity!). It's possible that the source is spitting out atomic nuclei rather than protons, in which case we should maybe deduct a couple of orders of magnitude for safety. Anyway, that's at least 23 orders of magnitude, maybe a few more.

We can of course detect electromagnetic radiation with frequencies of a few Hz and maybe lower, and one must assume that this corresponds to photons of femto-eV energies. However, we couldn't detect a single such photon, only the effect of large correlated numbers thereof.

Answer:

Temperature achieved in man-made experiments ranges from half a nanokelvin to five terakelvin (quark-gluon plasmas), or across 22 orders of magnitude. The temperature at the start of the universe was a lot higher, but because the universe was opaque in its early days, any attempt to "measure" that temperature must be a theoretical derivation from other observations. Possibly it was of the order of the Planck temperature, $1.4\times 10^{32}$ K, above which it is unclear whether "temperature" has any meaning.
But there are various ways of defining temperature, and under some definitions it is possible to create systems in which "temperature" reaches infinity, goes negative, and starts approaching zero from the other direction! Negative thermodynamic temperatures are measures of population inversions, as found in every lasing medium. Yes, this is perhaps cheating, from the perspective of this question.
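As a sanity check, the orders-of-magnitude arithmetic in these answers is just a base-10 logarithm of a ratio. A sketch using the endpoint values quoted above (these are the figures the answers cite, not independent measurements):

```python
import math

def orders_of_magnitude(smallest, largest):
    """Decimal orders of magnitude separating two positive quantities."""
    return math.log10(largest / smallest)

# Half-lives: hydrogen-7 (~1e-23 s) up to tellurium-128 (~1e31 s)
print(round(orders_of_magnitude(1e-23, 1e31)))    # 54

# Bulk baryonic density: the universe (~4e-28 kg/m^3) to neutron stars (~1e18 kg/m^3)
print(round(orders_of_magnitude(4e-28, 1e18)))    # 45

# Man-made temperatures: ~0.5 nK up to ~5 TK
print(round(orders_of_magnitude(0.5e-9, 5e12)))   # 22
```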
Validation of self-reported figural drawing scales against anthropometric measurements in adults

Public Health Nutrition, Volume 19, Issue 11, August 2016, pp. 1944–1951

Julia Dratva (1,2), Randi Bertelsen (3), Christer Janson (4), Ane Johannessen (3,5), Bryndis Benediktsdóttir (6), Lennart Bråbäck (7), Shyamali C Dharmage (8), Bertil Forsberg (7), Thorarinn Gislason (6), Debbie Jarvis (9), Rain Jogi (10,11), Eva Lindberg (4), Dan Norback (4), Ernst Omenaas (5), Trude D Skorge (3), Torben Sigsgaard (12), Kjell Toren (13), Marie Waatevik (5), Gundula Wieslander (4), Vivi Schlünssen (12), Cecilie Svanes (3,14) and Francisco Gomez Real (5,15)

1 Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, Socinstrasse 57, PO Box 4002, Basel, Switzerland
2 University of Basel, Basel, Switzerland
3 Department of Occupational Medicine, Haukeland University Hospital, Bergen, Norway
4 Department of Medical Sciences, Uppsala University, Uppsala, Sweden
5 Department of Clinical Sciences, University of Bergen, Bergen, Norway
6 Department of Allergy, Respiratory Medicine and Sleep, Landspitali University Hospital, Reykjavik, Iceland
7 Occupational and Environmental Medicine, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden
8 Allergy and Lung Health Unit, Centre for Epidemiology and Biostatistics, Melbourne School of Population and Global Health, University of Melbourne, Melbourne, Australia
9 Faculty of Medicine, National Heart & Lung Institute, Imperial College, London, UK
10 Lung Clinic, Foundation Tartu University Clinics, Tartu, Estonia
11 Department of Pulmonary Medicine, Tartu University, Tartu, Estonia
12 Department of Public Health, Aarhus University, Aarhus, Denmark
13 Occupational and Environmental Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
14 Centre for International Health, University of Bergen, Bergen, Norway
15 Department of Gynecology and Obstetrics, Haukeland University Hospital, Bergen, Norway

Copyright: © The Authors 2016
DOI: https://doi.org/10.1017/S136898001600015X
Published online by Cambridge University Press: 16 February 2016

Fig. 1 Figural scales for (a) men and (b) women introduced in the Respiratory Health in Northern Europe (RHINE III) survey

Table 1 Discriminatory capabilities of figural scales for identifying obesity, overweight and increased WC, according to sex, in Scandinavian adults aged 38–66 years (sub-sample of the RHINE III): results of ROC curve analyses

Fig. 2 Box-and-whisker plots showing the distribution of oBMI and WC by figural scale, according to sex, in Scandinavian adults aged 38–66 years (sub-sample of the RHINE III): (a) oBMI in women (n 674); (b) oBMI in men (n 769); (c) WC in women (n 527); (d) WC in men (n 500). The bottom and top edges of the box represent the first and third quartiles (interquartile range); the line within the box represents the median; the ends of the bottom and top whiskers represent the upper and lower adjacent values; and the dots represent outliers (oBMI, objectively measured BMI; WC, waist circumference; RHINE III, Respiratory Health in Northern Europe survey)

Fig. 3 ROC curves (——●——, data points; ■, cut-off point of optimal sensitivity and specificity; — — —, reference line of no discrimination) for identifying obese subjects with figural scales, according to sex, in Scandinavian adults aged 38–66 years (sub-sample of the RHINE III): (a) women, sensitivity=0·71, specificity=0·88, AUC=0·8786; (b) men, sensitivity=0·76, specificity=0·85, AUC=0·8630 (ROC, receiver-operating characteristic; RHINE III, Respiratory Health in Northern Europe survey; AUC, area under the curve)
The aim of the present study was to validate figural drawing scales depicting extremely lean to extremely obese subjects to obtain proxies for BMI and waist circumference in postal surveys. Reported figural scales and anthropometric data from a large population-based postal survey were validated with measured anthropometric data from the same individuals by means of receiver-operating characteristic curves and a BMI prediction model. Adult participants in a Scandinavian cohort study first recruited in 1990 and followed up twice since. Individuals aged 38–66 years with complete data for BMI (n 1580) and waist circumference (n 1017). Median BMI and waist circumference increased exponentially with increasing figural scales. Receiver-operating characteristic curve analyses showed a high predictive ability to identify individuals with BMI > 25·0 kg/m2 in both sexes.
The optimal figural scales for correctly identifying overweight and obese individuals were 4 and 5, respectively, in women, and 5 and 6 in men. The prediction model explained 74 % of the variance among women and 62 % among men. Predicted BMI differed only marginally from objectively measured BMI. Figural drawing scales explained a large part of the anthropometric variance in this population and showed a high predictive ability for identifying overweight/obese subjects. These figural scales can be used with confidence as proxies of BMI and waist circumference in settings where objective measures are not feasible. C Svanes and FG Real share last authorship. The obesity epidemic has drawn attention to the role of metabolic factors in the aetiology of non-communicable diseases. Considerable evidence points to the adverse impact of obesity on cardiometabolic and respiratory diseases( 1 , 2 ). Obesity's adverse effect may be related to metabolic and inflammatory pathways( 3 , 4 ) and/or the actual weight of body fat( 5 , 6 ). In clinical and epidemiological studies, BMI or waist-to-hip ratio is most commonly used to define anthropometric status. These proxies of body fat are related to clinical health outcomes( 1 , 7 ). More sophisticated measures of body composition, such as bioimpedance measurements, are also used to assess body fat and fat-free mass( 8 ). However, in large population-based studies or studies in remote settings, neither option is feasible. Figural stimuli, representing a range of figural drawing scales (figural scales) from extremely lean to extremely obese, are an easy-to-administer self-reported measure of body image. First introduced and validated by Stunkard et al. in 1983( 9 ), figural scales have been used in many studies in place of measured or self-reported height and weight( 9 – 12 ) or to assess body satisfaction by comparing an individual's perception of his/her body with his/her ideal body image( 13 , 14 ).
The latest Respiratory Health in Northern Europe (RHINE) survey, performed in 2010–2012, introduced modernised figural stimuli for men and women, with nine categories, similar to Stunkard's figural scales. The purpose of introducing figural stimuli in the survey was twofold. First, the figural scales complemented self-reported current height and weight by adding information on body fat distribution. Second, if the figural scales proved to be a valid instrument in the present, they could be used to assess anthropometric status at specific time points in the past (time of menopause, 55 years, 40 years, 30 years) and thereby provide a history of anthropometrics, often missing in epidemiological research. Figural scales could also be a valid alternative to or an additional instrument for assessing anthropometric data in cultural settings in which people do not know their height and weight because they are not commonly measured or because anthropometric measurements cannot be acquired. In the current study we validated the reported current figural scales with measured current BMI in a sub-sample of RHINE participants with data on measured height, weight and waist circumference (WC). Our aim was to investigate the predictive power of figural scales to identify individuals at metabolic risk. The RHINE study population (www.rhine.nu) consists of the population-based study sample recruited for the first stage of the European Community Respiratory Health Survey (ECRHS, 1990–1994; www.ecrhs.org)( 15 ). Men and women aged 20–44 years were randomly selected from population registers within specific boundaries and were sent a questionnaire by post (n 21 802; response rate 83·7 %). RHINE study centres are located in Reykjavik, Iceland; Bergen, Norway; Umeå, Uppsala and Gothenburg, Sweden; Aarhus, Denmark; and Tartu, Estonia. The study was approved by the local ethics commissions of each study centre. All participants provided written informed consent. 
Since the initial survey, participants have been followed up twice by postal questionnaire( 16 ). The analyses presented here were performed on a sub-sample of RHINE III participants who also participated in the third ECRHS clinical study (2011–2012) and had complete data on objectively measured BMI and current figural scales (n 1580; see online supplementary material 1, Supplemental Fig. 1). Through the postal questionnaires, RHINE III collected data on current and past health, lifestyle and socio-economic status, including self-reported height and weight. BMI was defined as weight/height2. Education was defined based on the highest educational degree achieved (obligatory, secondary or tertiary levels). The figural stimuli introduced during RHINE III were designed specifically for the survey by Alejandro Villén-Real (Fig. 1) and based on Stunkard's body image scales( 9 ). Participants were asked to tick the figural scale that best described their current figure. Body weight, height and waist were measured following the ECRHS standard protocol and using calibrated scales and tape bands at the RHINE study centres. Obese (BMI>30·0 kg/m2) and overweight (BMI=25·0–30·0 kg/m2) participants, as well as those with a WC of >88 cm in women and >102 cm in men, were considered to be 'at risk'( 17 ). Study participants' characteristics were stratified by sex. We compared the prevalence of characteristics among participants with and without objective anthropometric data, as well as between participants who had answered the question on body image and those who had not. In a multivariable logistic regression, we investigated the odds of non-response to the figural scales (see online supplementary material 1, Supplemental Table 1).
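The 'at risk' definitions above (obese BMI > 30·0 kg/m2, overweight BMI 25·0–30·0 kg/m2, WC > 88 cm in women or > 102 cm in men) can be expressed as a small helper. This is a minimal sketch; the function name and coding are illustrative, not taken from the study:

```python
def classify_risk(bmi, waist_cm, sex):
    """Apply the study's cutoffs: obese BMI > 30.0, overweight BMI 25.0-30.0,
    and elevated waist circumference > 88 cm (women) or > 102 cm (men)."""
    wc_cutoff = 88.0 if sex == "female" else 102.0
    return {
        "obese": bmi > 30.0,
        "overweight": 25.0 <= bmi <= 30.0,
        "high_wc": waist_cm > wc_cutoff,
    }

# A woman at the sample median for figural scale 5 (oBMI 26.6, WC 90 cm)
print(classify_risk(26.6, 90.0, "female"))
# → {'obese': False, 'overweight': True, 'high_wc': True}
```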
The median and interquartile range of objectively measured BMI (oBMI), waist circumference (WC) and self-reported BMI (sBMI) were calculated for each body shape. Differences between oBMI and sBMI were assessed. Spearman correlations between the anthropometric measures and figural scales were calculated. Receiver-operating characteristic (ROC) curves( 18 ) were computed to investigate the ability of the figural stimuli to identify subjects 'at risk' as defined above (obese (BMI>30·0 kg/m2) v. non-obese; overweight (BMI>25·0 kg/m2) v. non-overweight; females and males with WC >88 cm and >102 cm, respectively, v. females and males with WC below these values). ROC curves plot the true positive rate (correct detection) against the false positive rate (false alarm) for each figural scale. The area under the curve (AUC) is a measure of test accuracy( 19 ). ROC curves also visually facilitate identification of the best possible figural scale (optimal sensitivity and specificity criterion), which allows discrimination between obese and non-obese subjects with maximum sensitivity (correct detection) and the least loss of specificity (correct rejection). ROC curve analyses were performed separately for different sex and age groups (split at the mean age: <52 years v. ≥52 years). The empirical optimal sensitivity and specificity criterion and the Youden index were calculated for figural scales. After dividing the study population into two random data sets, a BMI prediction model was developed with one of the samples and validated in the other. In a step-wise backward modelling procedure, the number of covariates was reduced, based on a significance level of 0·2, to a final model with the highest explanatory power (adjusted r2; Akaike criterion( 20 )). Models were built for men and women separately. In the model, oBMI was log-transformed (log BMI) to account for oBMI's left-skewed distribution.
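The ROC operating points and Youden index described here can be sketched in a few lines of Python. The helper names and toy data below are illustrative only, not the study's code:

```python
def roc_points(y_true, scores):
    """ROC operating points for an ordinal score (figural scale 1-9):
    for each threshold t, classify 'positive' when scale >= t and record
    (t, false positive rate, true positive rate)."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(y_true, scores) if s >= t and y == 0)
        points.append((t, fp / neg, tp / pos))
    return points

def youden_optimum(points):
    """Pick the threshold maximising Youden's J = sensitivity - (1 - specificity)."""
    return max(points, key=lambda p: p[2] - p[1])

# Toy data: obesity indicator paired with reported figural scale
obese = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
scale = [2, 3, 4, 5, 5, 4, 6, 6, 7, 8]
t, fpr, tpr = youden_optimum(roc_points(obese, scale))
print(t, round(tpr, 2), round(1 - fpr, 2))  # optimal scale cutoff, sensitivity, specificity
# → 6 0.67 1.0
```

In the study itself, the score is the nine-level figural scale and the outcome an indicator for measured obesity, overweight or elevated WC; the AUC can be obtained from the same points by trapezoidal integration over (FPR, TPR).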
The predictive model for log BMI in women included: figural scales corresponding to current figure, age, educational status, current use of hormonal replacement therapy (HRT), prevalence of menopausal symptoms ever (MP symptoms ever) and chronic disease status (i.e. asthma and diabetes), as well as study centre as a random variable. The male predictive model included: current figural scales, age, educational status, smoking status, CVD status and study centre. The coefficients of the predictive model were then applied to the second random sample to predict log BMI. The difference between predicted log BMI and measured log BMI was calculated and the values were converted back into the original units (using exp(x)). Of the RHINE III survey population (n 12 660), 93 % filled in current figural scales. Of these respondents, those participating in the ECRHS clinical assessment (and therefore with measured oBMI and WC) differed significantly from those without objective anthropometric measures. Differences were found with respect to age (52·6 years v. 51·5 years) and smoking status (15 % v. 17 %). Participants with objective measures also had a higher prevalence of respiratory disease (27 % v. 12 %), since about a third of the random ECRHS sample had been recruited based on symptoms suggestive of asthma( 15 ). However, metabolically important measures such as physical exercise or diabetes were not significantly different between the two groups (see online supplementary material 1, Supplemental Table 1).
The multivariable logistic regression yielded slightly higher odds of non-response among older participants (OR=1·03 per year of age, 95 % CI 1·02, 1·04; see online supplementary material 1, Supplemental Table 2), lower odds among participants with high educational status (OR=0·81, 95 % CI 0·73, 0·91) and somewhat lower odds among women (OR=0·74, 95 % CI 0·64, 0·87; Supplemental Table 2). The analytic sample included slightly more women (58 %) than men (42 %). The mean age of the overall sample was 52·4 years (sd 7·1), ranging from 38 to 66 years. With respect to anthropometric measures, the mean oBMI was 26·5 kg/m2 (women, 25·7 (sd 6·2) kg/m2; men, 26·9 (sd 4·7) kg/m2) and the mean WC was 94·3 cm (women, 87·8 (sd 17·6) cm; men, 98·6 (sd 15·1) cm). With increasing figural scale, from scale 1 (extremely lean) to scale 9 (extremely obese), the median, 25th and 75th percentiles of oBMI and WC increased (Fig. 2). The self-reported BMI percentiles showed a comparable increase (see online supplementary material 1, Supplemental Table 3). The overall Spearman correlation coefficient between oBMI and sBMI was very high (0·94), with a difference between sexes (women, 0·944; men, 0·928; both significant at P<0·001, sex difference P=0·02). The correlation coefficient between oBMI and figural scale rating was higher for women (0·77) than for men (0·70; both P<0·001, sex difference P<0·007). The differences between sBMI and oBMI were marginal, with a tendency towards under-reporting at higher figural scales and over-reporting at lower figural scales (Supplemental Table 3). The modal and median scale in both women and men was 5, corresponding to a median oBMI of 26·6 kg/m2 in women and 26·3 kg/m2 in men. Based on measured WC, about half of the women (48·6 % had WC>88 cm; range 88·05–152·25 cm) and about two-fifths of the men (38·4 % had WC>102 cm; range 102·05–145·5 cm) were defined as 'at risk'.
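Spearman's rank correlation, used above to compare oBMI, sBMI and the figural scale ratings, is simply the Pearson correlation of rank-transformed data. A minimal sketch with tie handling (illustrative, not the study's code):

```python
def _ranks(xs):
    """Average ranks (1-based), assigning tied values the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, converted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# A strictly increasing pairing gives perfect rank agreement
print(spearman([1, 2, 3, 4, 5], [2.1, 2.9, 4.4, 8.0, 9.5]))  # → 1.0
```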
Less than 1 % of participants had an oBMI of <18·5 kg/m2, so no 'underweight' category was defined. Instead, these participants were included in the 'normal weight' category (see online supplementary material 1, Supplemental Table 4). The ROC curve analyses yielded high AUC values for identifying obesity (women, AUC=0·879; men, AUC=0·863), as well as overweight (women, AUC=0·859; men, AUC=0·842; Fig. 3). When adjusted for age, the discriminatory power of the figural scales remained consistently high (women, AUC=0·815; men, AUC=0·784). The optimal sensitivity and specificity criterion (optimal criterion) for identifying metabolic risk, correctly classifying the greatest number of 'at risk' individuals and minimising false positives, was assessed separately for obesity, overweight and WC (Table 1). Differences in predictive power and optimal criterion by age were observed for both men and women. For older participants (≥52 years) the optimal criterion remained the same, while for younger participants of both sexes (<52 years) the optimal criterion to identify obesity was one scale lower; for example, for women a figural scale of 4 instead of 5. For men, ROC curve analyses also yielded a lower optimal criterion for overweight, a figural scale of 4 instead of 5. On the other hand, no differences by age were observed for WC. Figural scales alone already explained a large part of oBMI variance (adjusted r2=0·58 for women, 0·48 for men). Adding additional model covariates improved the adjusted r2 substantially. The validation model showed good predictive power for BMI, with an adjusted r2 of 0·67 for women and 0·52 for men. When potential influential data points, based on Cook's distance estimation, were excluded from the data set, the adjusted r2 increased to 0·74 in women and 0·62 in men. However, the effect estimates did not change considerably. The multivariable linear regression models we obtained for predicting BMI based on figural scales were as follows.
For women:

log BMI = 2·73661 + figural scale × 0·0987671 + age × 0·0004111 + educational status × (−0·0169593) + ever asthma × 0·0246553 + diabetes × 0·0705768 + MP symptoms ever × (−0·0302591) + HRT current use × 0·0303381 + centre × (−0·0001263).

For men:

log BMI = 3·152997 + figural scale × 0·0564805 + age × (−0·0028301) + educational status × (−0·0071292) + CVD × 0·0542313 + smoking status × (−0·0243481) + centre × (−0·000238).

The mean difference between predicted BMI and oBMI was −0·01 (sd 2·64) kg/m2 for women and −0·033 (sd 2·87) kg/m2 for men; the negative values imply that the predicted measure was slightly lower than the measured value. Our validation study shows that the figural scales used in RHINE III are a reliable means of identifying populations 'at risk', for both men and women. They can identify subjects 'at risk' with high accuracy, as shown by the ROC curve analyses with an AUC well above 0·8, which is of high value to studies on chronic disease development. Both sexes reported their current figural scales in accordance with objectively measured current BMI and WC.
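As a worked example of how the women's model converts a figural scale rating into a BMI estimate: the coefficients below are taken from the published equation, but the covariate coding (education levels, centre code, binary disease indicators) is a hypothetical assumption, since the paper's coding scheme is not reproduced here. The final exp() step matches the back-transformation to original units described in the Methods.

```python
import math

# Coefficients of the women's model as printed above (log BMI scale)
INTERCEPT = 2.73661
COEFS = {
    "figural_scale": 0.0987671,
    "age": 0.0004111,
    "educational_status": -0.0169593,
    "ever_asthma": 0.0246553,
    "diabetes": 0.0705768,
    "mp_symptoms_ever": -0.0302591,
    "hrt_current_use": 0.0303381,
    "centre": -0.0001263,
}

def predict_bmi_women(covariates):
    """Sum intercept + coefficient * covariate on the log scale,
    then exponentiate back to BMI units (kg/m^2)."""
    log_bmi = INTERCEPT + sum(COEFS[k] * v for k, v in covariates.items())
    return math.exp(log_bmi)

# Hypothetical coding: figural scale 5, age 52, education coded 2, no
# morbidity, centre coded 1 -- the study's actual coding is not given here.
example = {"figural_scale": 5, "age": 52, "educational_status": 2,
           "ever_asthma": 0, "diabetes": 0, "mp_symptoms_ever": 0,
           "hrt_current_use": 0, "centre": 1}
print(round(predict_bmi_women(example), 1))  # → 25.0
```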
The figural scales performed well in the validation analyses, with highly consistent results across the various validation approaches. The analyses also validated BMI based on self-reported weight and height. In general, the figural scales were well reported by RHINE participants, with a small degree of missing information in the full study (6·6 %). We observed some differences between responders and non-responders: older, less educated and male participants were less inclined to fill in the figural scales. Objectively measured BMI corresponded very well with the expected increase at each figural scale. Correlation was high between the two methods and comparable to the correlations found by Bulik et al. (females, r=0·81; males, r=0·73)( 18 ). Some studies do not report equally high correlations, but the finding of a higher correlation for women than for men is consistently observed across studies( 21 – 23 ). Correlations were also high for self-reported and objectively measured BMI, with the expected degree of over- and under-reporting depending on the figural scale. We observed an overlap of BMI ranges across the figural scales, as expected. The ordinal and fixed scale forces people to decide on one figure or the other, even though they might feel they are in between two figural scales. This may result in a greater range of BMI( 24 ). The larger variability of objectively measured BMI observed at the extreme ends of the figural scales relates to the smaller number of participants found at these extremes, compared with other studies. In addition, pathological misconceptions of body weight and body image would be reflected in the extreme ends of the figural scales and may also explain some of the variation in these extreme categories. The ROC curve analyses provided optimal sensitivity and specificity criteria for metabolic risk in this Scandinavian population, both in men and women.
For younger men and women, the optimal criterion for obesity (and for overweight among men) was one figural scale lower than for older subjects. This is possibly due to greater muscle mass in younger persons than in the elderly. While we cannot fully explain this observation, WC showed no age group difference, supporting our hypothesis. Younger people might also be more conscious about body norms in their age group and report lower figural scales. The optimal criteria for metabolic risk based on measured WC were the same as for overweight. The similarity between the overweight and obesity criteria implies that while figural scales have a high power to identify subjects 'at risk', differentiating between those overweight and obese is more difficult. ROC curve analyses and calculated optimal criteria are similar to findings in other validation studies( 13 , 18 , 24 ). The importance of calculating optimal sensitivity and specificity criteria using objective data within a study population is underlined by Madrigal et al., who compared researchers' and participants' perception of the figures and found considerable discrepancy, leading ultimately to considerable misclassification( 22 ). Differential interpretation of the figural stimuli has also been shown by ethnicity, with different ethnic groups assigning different BMI to the same figural stimuli( 21 , 25 , 26 ). Figural scales alone explained about 50 % of BMI variance, comparable to the r2 published by Kaufer-Horwitz et al.( 24 ) and Bulik et al.( 18 ). The explanatory power increased when additional subjective data were added to the predictive model and after excluding significant outliers. The mean difference between predicted and objectively measured BMI was less than 1 BMI unit. We observed larger, non-significant differences in the objective measures at the extreme figural scales (extremely lean, figural scale 1; extremely obese, figural scale 9).
Researchers wishing to use figural scales to estimate BMI are advised to collect these additional data, where possible, to achieve the highest degree of accuracy. Some study limitations need to be considered when applying the figural scales. First, misconceptions of body size and weight might be differential, and reporting bias, for example by sex, education and overweight( 27 ) or due to psychiatric disorders( 28 ), cannot be totally excluded. Therefore, at the individual level, there is a risk of misclassification when using figural scales. At the population level, however, the instrument discriminates well between individuals who are and are not 'at risk'. Second, participants were recruited randomly and should represent the general population living at the respective study sites, all set in Nordic countries. For that reason, our results are generalizable to Northern Europe or similar European countries only. We had no additional data on ethnicity, which could have introduced non-differential misclassification and loss of power( 25 ). Third, the sample has a high prevalence of asthma. There is, however, no reason to believe that asthmatics would have a differential perception of their body image. While obesity is known as a risk factor for asthma, we did not observe a high prevalence of obesity in our study population. Altogether, we do not assume differential misclassification due to the asthmatics in our study. Finally, few participants were found at the extreme figural scales and thus there was insufficient power to calculate corresponding BMI or cut-off points for underweight subjects. In summary, subjects 'at risk', as defined by BMI and WC, can be identified with high accuracy using this figural scale. The reliability of the RHINE III figural stimuli for a Scandinavian population is comparable to figural stimuli applied in other, partially ethnically diverse population-based studies.
Given the good performance of the figural stimuli, we will further investigate their use in public health and clinical studies. The figural scales are a valid alternative or even an additional instrument to assess anthropometric data in public health research. Financial support: C.S., V.S., B.B. and T.G. are members of the COST Action BM1201. J.D. received a research scholarship from the COST Action BM1201 (COST-STSM-ECOST-STSM-BM1201-160913-036015). The RHINE III was supported financially by the Norwegian Research Council (grant number 214123); the Bergen Medical Research Foundation; the Western Norwegian Regional Health Authorities (grant numbers 911 892 and 911 631); the Norwegian Labour Inspection; the Norwegian Asthma and Allergy Association; the Faculty of Health of Aarhus University (project number 240008); The Wood Dust Foundation (project number 444508795); the Danish Lung Association; the Swedish Heart and Lung Foundation; the Vårdal Foundation for Health Care Science and Allergy Research; the Swedish Council for Working Life and Social Research; the Bror Hjerpstedt Foundation; the Swedish Asthma and Allergy Association; the Icelandic Research Council; and the Estonian Science Foundation (grant number 4350). ECRHS acknowledgements are provided in the online supplementary material 2. Conflict of interest: The authors declare that they have no conflict of interest. Authorship: J.D., main author, developed the analytic plan presented in this paper; she conducted the analyses, interpreted the data in conjunction with all authors, and was responsible for writing the first and all consecutive drafts of the submitted manuscript. R.B. was involved in the data collection in Norway; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. C.J.
is the principal investigator (PI) of the RHINE study and is in the ECRHS study directorate; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. A.J. was involved in the RHINE protocol development, the data collection in Norway, the preparation of the RHINE cohort data set and provided statistical support; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. B.B. was involved in the data collection in Iceland; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. L.B. was involved in the data collection in Umeå, Sweden; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. B.F. is the PI of the study centre in Umeå, Sweden; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. T.G. is the PI of the study centre in Iceland and is in the RHINE and ECRHS study directorate; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. D.J. is the study coordinator of ECRHS; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. R.J. is the PI of the study centre in Tartu, Estonia and is a member of the RHINE study directorate; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. E.L. was involved in the data collection in Umeå, Sweden; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. D.N. 
was involved in the data collection in Uppsala, Sweden; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. E.O. is the PI of the study centre in Bergen, Norway and is a member of the RHINE study directorate; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. V.S. is the PI in the study centre in Aarhus, Denmark and is a member of the RHINE study directorate; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. T.D.S. was involved in the data collection in Bergen, Norway; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. T.S. was involved in the data collection in Aarhus, Denmark; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. K.T. is the PI in Gothenburg, Sweden and is in the RHINE and ECRHS study directorate; he participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. M.W. was involved in the data collection in Bergen, Norway; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. G.W. was involved in the data collection in Uppsala, Sweden; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. S.C.D. is the PI of ECRHS, Melbourne, Australia; she participated in the interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. C.S.
participated in the study protocol development and data collection in Bergen, Norway and is in the ECRHS study directorate; she participated in the development of the analytic plan, interpretation of the results in conjunction with all authors and provided critical commentary in the draft revisions. F.G.R. developed the body silhouette protocol for RHINE and ECRHS, and participated in the study protocol development and data collection in Bergen, Norway; he participated in the development of the analytic plan, interpretation of results in conjunction with all authors and provided critical commentary in the draft revisions. C.S. and F.G.R. share last authorship. Ethics of human subject participation: The study was approved by the local ethical commissions of each study centre. All participants provided written informed consent. To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S136898001600015X
Segmentation of roots in soil with U-Net Abraham George Smith (ORCID: orcid.org/0000-0001-9782-2825), Jens Petersen, Raghavendra Selvan & Camilla Ruø Rasmussen Plant root research can provide a way to attain stress-tolerant crops that produce greater yield in a diverse array of conditions. Phenotyping roots in soil is often challenging due to the roots being difficult to access and the use of time-consuming manual methods. Rhizotrons allow visual inspection of root growth through transparent surfaces. Agronomists currently manually label photographs of roots obtained from rhizotrons using a line-intersect method to obtain root length density and rooting depth measurements which are essential for their experiments. We investigate the effectiveness of an automated image segmentation method based on the U-Net Convolutional Neural Network (CNN) architecture to enable such measurements. We design a data-set of 50 annotated chicory (Cichorium intybus L.) root images which we use to train, validate and test the system and compare against a baseline built using the Frangi vesselness filter. We obtain metrics using manual annotations and line-intersect counts. Our results on the held out data show our proposed automated segmentation system to be a viable solution for detecting and quantifying roots. We evaluate our system using 867 images for which we have obtained line-intersect counts, attaining a Spearman rank correlation of 0.9748 and an \(r^2\) of 0.9217. We also achieve an \(F_1\) of 0.7 when comparing the automated segmentation to the manual annotations, with the automated system producing segmentations of higher quality than the manual annotations for large portions of the image. We have demonstrated the feasibility of a U-Net based CNN system for segmenting images of roots in soil and for replacing the manual line-intersect method.
The success of our approach is also a demonstration of the feasibility of deep learning in practice for small research groups needing to create their own custom labelled dataset from scratch. High-throughput phenotyping of roots in soil has been a long-wished-for goal for various research purposes [1,2,3,4]. The challenge of exposing the architecture of roots hidden in soil has promoted studies of roots in artificial growth media [5]. However, root growth is highly influenced by physical constraints [6] and such studies have been shown to be unrepresentative of roots in soil [7, 8]. Traditionally, studies of roots in soil have relied on destructive and laborious methods such as trenches in the field and soil coring followed by root washing [9]. Recently 3D methods such as X-ray computed tomography [10] and magnetic resonance imaging [11] have been introduced, but these methods require expensive equipment and only allow small samples. Since the 1990s, rhizotrons [12,13,14] and minirhizotrons [15, 16], which allow non-invasive monitoring of spatial and temporal variations in root growth in soil, have gained popularity. Minirhizotrons facilitate the repeated observation and photographing of roots through the transparent surfaces of below-ground observation tubes [17]. A major bottleneck when using rhizotron methods is the extraction of relevant information from the captured images. Images have traditionally been annotated manually using the line-intersect method, where the number of roots crossing a line in a grid is counted and correlated to total root length [18, 19] or normalised to the total length of grid line [20]. The line-intersect method was originally developed for washed roots but is now also used in rhizotron studies where a grid is either directly superimposed on the soil-rhizotron interface [21, 22] or indirectly on recorded images [23, 24]. The technique is arduous and has been reported to take 20 min per metre of grid line in minirhizotron studies [25].
Line-intersect counts are not a direct measurement of root length and do not provide any information on architectural root traits such as branching, diameter, tip count, growth speed or growth angle of laterals. To overcome these issues, several attempts have been made to automate the detection and measurement of roots, but all of them require manual supervision, such as mouse clicks to detect objects [26, 27]. The widely used "RootFly" software provides both manual annotation and automatic root detection functionality [28]. Although the automatic detection worked well on the initial three datasets, the authors found it did not transfer well to new soil types (personal communication with Stan Birchfield, September 27, 2018). Following the same manual annotation procedure as in RootFly, [29] calculated that it takes 1–1.5 h per 100 cm² to annotate images of roots from minirhizotrons, adding up to thousands of hours for many minirhizotron experiments. Although existing software is capable of attaining much of the desired information, the annotation time required is prohibitive and severely limits the use of such tools. Image segmentation is the splitting of an image into different meaningful parts. A fully automatic root segmentation system would not just save agronomists time but could also provide more localized information on which roots have grown and by how much, as well as root width and architecture. The low contrast between roots and soil has been a challenge in previous attempts to automate root detection. Often only young unpigmented roots [30] or roots in black peat soil [31] can be detected. To enable detection of roots of all ages in heterogeneous field soils, attempts have been made to increase the contrast between soil and roots using custom spectroscopy. UV light can cause some living roots to fluoresce and thereby stand out more clearly [3] and light in the near-infrared spectrum can increase the contrast between roots and soil [32].
Other custom spectroscopy approaches have shown the potential to distinguish between living and dead roots [33, 34] and roots from different species [35, 36]. A disadvantage of such approaches is that they require more complex hardware which is often customized to a specific experimental setup. A method which works with ordinary RGB photographs would be attractive as it would not require modifications to existing camera and lighting setups, making it more broadly applicable to the wider root research community. Thus in this work we focus on solving the problem of segmenting roots from soil using a software driven approach. Prior work on segmenting roots from soil in photographs has used feature extraction combined with traditional machine learning methods [37, 38]. A feature extractor is a function which transforms raw data into a suitable internal representation from which a learning subsystem can detect or classify patterns [39]. The process of manually designing a feature extractor is known as feature engineering. Effective feature engineering for plant phenotyping requires a practitioner with a broad skill-set, as they must have sufficient knowledge of image analysis, machine learning and plant physiology [40]. Not only is it difficult to find the optimal description of the data, but the features found may limit the performance of the system to specific datasets [41]. With feature engineering approaches, domain knowledge is expressed in the feature extraction code, so further programming is required to re-purpose the system to new datasets. Deep learning is a machine learning approach in which a machine fed with raw data automatically discovers a hierarchy of representations that can be useful for detection or classification tasks [39].
Convolutional Neural Networks (CNNs) are a class of deep learning architectures where the feature extraction mechanism is encoded in the weights (parameters) of the network, which can be updated without the need for manual programming by changing or adding to the training data. Via the training process a CNN is able to learn from examples, to approximate the labels or annotations for a given input. This makes the effectiveness of CNNs highly dependent on the quality and quantity of the annotations provided. Deep learning facilitates a decoupling of plant physiology domain knowledge and machine learning technical expertise. A deep learning practitioner can focus on the selection and optimisation of a general purpose neural network architecture whilst root experts encode their domain knowledge into annotated data-sets created using image editing software. CNNs have now established their dominance on almost all recognition and detection tasks [42,43,44,45]. They have also been used to segment roots from soil in X-ray tomography [46] and to identify the tips of wheat roots grown in germination paper growth pouches [41]. CNNs have an ability to transfer well from one task to another, requiring less training data for new tasks [47]. This gives us confidence that knowledge attained from training on images of roots in soil in one specific setup can be transferred to a new setup with a different soil, plant species or lighting setup. The aim of this study is to develop an effective root segmentation system using a CNN. For semantic segmentation tasks CNN architectures composed of encoders and decoders are often used. 
These so-called encoder-decoder architectures first transform the input using an encoder into a representation with reduced spatial dimensions, which may be useful for classification tasks but will lack local detail; a decoder then up-samples the representation given by the encoder to a resolution similar to the original input, potentially outputting a label for each pixel. Another encoder-decoder based CNN system for root image analysis is RootNav 2.0 [48], which is targeted more towards experimental setups with the entire root system visible, where it enables extraction of detailed root system architecture measurements. We use the U-Net CNN encoder-decoder architecture [49], which has proven to be especially useful in contexts where attaining large amounts of manually annotated data is challenging, as is the case in biomedical or biology experiments. As a baseline machine learning approach we used the Frangi vessel enhancement filter [50], which was originally developed to enhance vessel structures in images of human vasculature. Frangi filtering represents a more traditional and simpler off-the-shelf approach which typically has lower minimum hardware requirements when compared to U-Net. We hypothesize that (1) U-Net will be able to effectively discriminate between roots and soil in RGB photographs, demonstrated by a strong positive correlation between root length density obtained from U-Net segmentations and root intensity obtained from the manual line-intersect method; and (2) U-Net will outperform a traditional machine learning approach, with larger amounts of agreement between the U-Net segmentation output and the test set annotations. We used images of chicory (Cichorium intybus L.) taken during summer 2016 from a large 4 m deep rhizotron facility at University of Copenhagen, Taastrup, Denmark (Fig. 1). The images had been used in a previous study [51] where the analysis was performed using the manual line-intersect method.
As we make no modifications to the hardware or photographic procedure, we are able to evaluate our method as a drop-in replacement for the manual line-intersect method. Chicory (Cichorium intybus L.) growing in the rhizotron facility The facility from which the images were captured consists of 12 rhizotrons. Each rhizotron is a soil-filled rectangular box with twenty 1.2 m wide, vertically stacked transparent acrylic panels on two of its sides, which are covered by 10 mm foamed PVC plates. These plates can be removed to allow inspection of root growth at the soil-rhizotron interface. There were a total of 3300 images which had been taken on 9 different dates during 2016. The photos were taken from depths between 0.3 and 4 m. Four photos were taken of each panel in order to cover its full width, with each individual image covering the full height and 1/4 of the width (for further details of the experiment and the facility see [51]). The image files were labelled according to the specific rhizotron, direction and panel they were taken from, with the shallowest panel assigned the number 1 and the deepest assigned the number 20. Line-intersect counts were available for 892 images. They had been obtained using a version of the line-intersect method [18] which had been modified to use grid lines [19, 52] overlaid over an image to compute root intensity. Root intensity is the number of root intersections per metre of grid line in each panel [20]. In total four different grids were used. Coarser grids were used to save time when counting the upper panels with high root intensity and finer grids were used to ensure low variation in counts from the lower panels with low root intensity. The 4 grids used had squares of sizes 10, 20, 40 and 80 mm. The grid size for each depth was selected by the counter, aiming to have at least 50 intersections for all images obtained from that depth.
For the deeper panels with fewer roots, it was not possible to obtain 50 intersections per panel, so the finest grid (10 mm) was always used. To enable comparison we only used photos that had been included in the analysis by the manual line-intersect method. Here photos containing large amounts of equipment were not deemed suitable for analysis. From the 3300 originals, images from panels 3, 6, 9, 12, 15 and 18 were excluded as they contained large amounts of equipment such as cables and ingrowth cores. Images from panel 1 were excluded as it was not fully covered with soil. Table 1 shows the number of images from each date, the number of images remaining after excluding panels unsuitable for analysis and whether line-intersect counts were available. Table 1 Number of images from each date Deeper panels were sometimes not photographed because the photographer worked from the top to the bottom and stopped when it was clear that no deeper roots could be observed. We took the depth distribution of all images obtained from the rhizotrons in 2016 into account when selecting images for annotation in order to create a representative sample (Fig. 2). After calculating how many images to select from each depth, the images were selected at random. The number of images selected for annotation from each panel depth The first 15 images were an exception to this. They had been selected by the annotator whilst aiming to include all depths. We kept these images but ensured they were not used in the final evaluation of model performance as we were uncertain as to what biases had led to their selection. We chose a total of 50 images for annotation. This number was based on the availability of our annotator and the time requirements for annotation.
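Root intensity as defined above, the number of root intersections per metre of grid line, can be computed with a small helper. This is an illustrative sketch only: the function name and the assumption that the grid is a full set of horizontal and vertical lines spanning the panel are ours, not from the study.

```python
def root_intensity(crossings, grid_size_m, panel_width_m, panel_height_m):
    """Root intersections per metre of grid line (a hypothetical helper).

    Assumes a regular grid of horizontal and vertical lines with spacing
    grid_size_m covering a panel of the given width and height, so the
    total line length is (number of horizontal lines * width) plus
    (number of vertical lines * height).
    """
    n_horizontal = round(panel_height_m / grid_size_m) + 1
    n_vertical = round(panel_width_m / grid_size_m) + 1
    total_line_m = n_horizontal * panel_width_m + n_vertical * panel_height_m
    return crossings / total_line_m
```

For example, 50 crossings counted on a 10 mm grid over a 1.2 m by 0.3 m panel would give an intensity of roughly 5.7 intersections per metre of grid line under these assumptions.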
To facilitate comparison with the available root intensity measurements by analysing the same region of the image as [51], the images were cropped from their original dimensions of \(4608\times 2592\) pixels to \(3991\times 1842\) pixels which corresponds to an area of approximately 300 \(\times\) 170 mm of the surface of the rhizotron. This was done by removing the right side of the image where an overlap between images is often present and the top and bottom which included the metal frame around the acrylic glass. A detailed per-pixel annotation (Fig. 3) was then created as a separate layer in Photoshop by a trained agronomist with extensive experience using the line-intersect method. Annotation took approximately 30 min per image with the agronomist labelling all pixels which they perceived to be root. The number of annotated root pixels ranged from 0 to 203533 (2.8%) per image. Data split During the typical training process of a neural network, the labelled or annotated data is split into a training, validation and test dataset. The training set is used to optimize a neural network using a process called Stochastic Gradient Descent (SGD) where the weights (parameters) are adjusted in such a way that segmentation performance improves. The validation set is used for giving an indication of system performance during the training procedure and tuning the so-called hyper-parameters, not optimised by SGD such as the learning rate. See the section U-Net Implementation for more details. The test set performance is only calculated once after the neural network training process is complete to ensure an unbiased indication of performance. Firstly, we selected 10 images randomly for the test set. As the test set only contained 10 images, this meant the full range of panel heights could not be included. One image was selected from all panel heights except for 13, 17, 18 and 20. 
The test set was not viewed or used in the computation of any statistics during the model development process, which means it can be considered as unseen data when evaluating performance. Secondly, from the remaining 40 images we removed two images: one because it did not contain any roots and another because a sticker was present on the top of the acrylic. Thirdly, the remaining 38 images were split into training and validation datasets. We used the root pixel count from the annotations to guide the split of the images into a training and validation data-set. The images were ordered by the number of root pixels in each image and then 9 evenly spaced images were selected for the validation set, with the rest being assigned to the training set. This was to ensure a range of root intensities was present in both training and validation sets. To evaluate the performance of the model during development and testing we used \(F_1\). We selected \(F_1\) as a metric because we were interested in a system which would be just as likely to overestimate as it would underestimate the roots in a given photo. That meant precision and recall were valued equally. In this context precision is the ratio of correctly predicted root pixels to the number of pixels predicted to be root, and recall is the ratio of correctly predicted root pixels to the number of actual root pixels in the image. Both recall and precision must be high for \(F_1\) to be high. $$\begin{aligned} F_{1} = 2 \cdot {\frac{\text {precision} \cdot \text {recall}}{\text {precision} + \text {recall}}} \end{aligned}$$ The \(F_1\) of the segmentation output was calculated using the training and validation sets during system development. The completed system was then evaluated using the test set in order to provide a measure of performance on unseen data. We also report accuracy, defined as the ratio of correctly predicted to total pixels in an image.
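The per-image metrics above can be computed directly from a pair of binary masks. A minimal sketch, assuming boolean NumPy arrays where True marks root pixels; note that \(F_1\) is undefined when neither mask contains any root pixels:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Precision, recall, F1 and accuracy for binary segmentation masks.

    pred and target are boolean arrays of the same shape, True for root
    pixels. Precision and recall follow the definitions in the text; F1
    is their harmonic mean.
    """
    tp = np.logical_and(pred, target).sum()   # correctly predicted root pixels
    fp = np.logical_and(pred, ~target).sum()  # soil predicted as root
    fn = np.logical_and(~pred, target).sum()  # root predicted as soil
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (pred == target).mean()        # correct pixels / total pixels
    return precision, recall, f1, accuracy
```

Averaging these per-image values over only the images containing roots matches how the later per-image statistics are reported.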
In order to facilitate comparison and correlation with line-intersect counts, we used an approach similar to [53] to convert a root segmentation to a length estimate. The scikit-image skeletonize function was used to first thin the segmentation and then the remaining pixels were counted. This approach was used for both the baseline and the U-Net segmentations. For the test set we also measured correlation between the root length of the output segmentation and the manual root intensity given by the line-intersect method. We also measured correlation between the root length of our manual per-pixel annotations and the U-Net output segmentations for our held out test set. To further quantify the effectiveness of the system as a replacement for the line-intersect method, we obtained the coefficient of determination (\(r^2\)) for the root length given by our segmentations and root intensity given by the line-intersect method for 867 images. Although line-intersect counts were available for 892 images, 25 images were excluded from our correlation analysis as they had been used in the training dataset. Frangi vesselness implementation For our baseline method we built a system using the Frangi vesselness enhancement filter [50]. We selected the Frangi filter based on the observation that the roots look similar in structure to blood vessels, for which the Frangi filter was originally designed. We implemented the system using the Python programming language (version 3.6.4), using the scikit-image [54] (version 0.14.0) implementation of the Frangi filter. Vesselness refers to a measure of tubularity that is predicted by the Frangi filter for a given pixel in the image. To obtain a segmentation using the Frangi filter we thresholded the output so only regions of the image above a certain vesselness level would be classified as roots. To remove noise we further processed the segmentation output using connected component analysis, removing regions with fewer than a threshold number of connected pixels.
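The baseline pipeline and the skeleton-based length estimate can both be sketched with scikit-image. The two threshold values below are illustrative placeholders (in the study they were optimised rather than fixed by hand), and the choice of `black_ridges=False` assumes roots appear brighter than the soil:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import frangi
from skimage.morphology import remove_small_objects, skeletonize

def frangi_baseline(rgb_image, vesselness_threshold=0.05, min_region=64):
    """Baseline segmentation sketch: Frangi vesselness, thresholded, with
    small connected regions removed as noise.

    vesselness_threshold and min_region are hypothetical values; the study
    tuned these parameters automatically.
    """
    vesselness = frangi(rgb2gray(rgb_image), black_ridges=False)
    segmentation = vesselness > vesselness_threshold
    return remove_small_objects(segmentation, min_size=min_region)

def root_length_pixels(segmentation):
    """Length estimate: thin the mask to a one-pixel-wide skeleton and count
    the remaining pixels, as described in the text."""
    return int(skeletonize(segmentation).sum())
```

The same `root_length_pixels` estimate applies unchanged to the U-Net output masks.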
To find optimal parameters for both the thresholds and the parameters of the Frangi filter we used the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [55]. In our case the objective function to be minimized was \(1 - mean(F_1)\) where \(mean(F_1)\) is the mean of the \(F_1\) scores of the segmentations produced from the thresholded Frangi filter output. U-Net implementation We implemented a U-Net CNN in Python (version 3.6.4) using PyTorch [56], an open source machine learning library which provides GPU-accelerated tensor operations. PyTorch has convenient utilities for defining and optimizing neural networks. We used an NVIDIA TITAN Xp 12 GB GPU. Except for the input layer, which was modified to receive RGB instead of a single channel, our network had the same number of layers and dimensions as the original U-Net [49]. We applied Group norm [57] after all ReLU activations as opposed to Batch norm [58], as batch sizes as small as ours can cause issues due to inaccurate batch statistics degrading the quality of the resulting models [59]. The original U-Net proposed in [49] used Dropout, which we avoided as in some cases the combination of dropout and batch normalisation can cause worse results [60]. He initialisation [61] was used for all layers. Sub region of one of the photos in the training data. a Roots and soil as seen through the transparent acrylic glass on the surface of one of the rhizotrons and b is the corresponding annotation showing root pixels in white and all other pixels in black. Annotations like these were used for training the U-Net CNN Instance selection The network takes tiles with size \(572 \times 572\) as input and outputs a segmentation for the centre \(388 \times 388\) region of each tile (Fig. 4). We used mirroring to pad the full image before extracting tiles.
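The tile geometry above implies \((572 - 388) / 2 = 92\) pixels of reflected context on every side of each segmented region. A sketch of the padding and tile extraction, assuming NumPy arrays of shape (height, width, 3); the original implementation may differ in details:

```python
import numpy as np

IN_TILE = 572    # network input height/width
OUT_TILE = 388   # height/width of the segmented centre region
PAD = (IN_TILE - OUT_TILE) // 2  # 92 pixels of context on every side

def mirror_pad(image):
    """Reflection-pad an RGB image so tiles at the image border still have
    synthetic context for segmentation."""
    return np.pad(image, ((PAD, PAD), (PAD, PAD), (0, 0)), mode='reflect')

def extract_tile(padded, top, left):
    """Return the 572x572 input tile whose 388x388 centre region starts at
    (top, left) in the original, unpadded image coordinates."""
    return padded[top:top + IN_TILE, left:left + IN_TILE]
```

Padding once per image and slicing tiles from the padded array keeps tile extraction cheap even when many tiles are drawn per epoch.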
Mirroring in this context means the image was reflected at the edges to make it bigger and provide some synthetic context to allow segmentation at the edges of the image. In neural network training, an epoch refers to a full pass over the training data. Typically several epochs are required to reach good performance. At the start of each epoch we extracted 90 tiles at random locations from each of the training images. These tiles were then filtered down to only those containing roots, and a maximum of 40 was then kept from whatever remained. This meant images with many roots would still be limited to 40 tiles. The removal of parts of the image which do not contain roots has similarity to the work of [62], who made the class imbalance problem less severe by cropping regions containing empty space. When training U-Net with mini batch SGD, each item in a batch is an image tile and multiple tiles are input into the network simultaneously. Using tiles as opposed to full images gave us more flexibility during experimentation, as we could adjust the batch size depending on the available GPU memory. When training the network we used a batch size of 4 to ensure we did not exceed the limits of the GPU memory. Validation metrics were still calculated using all tiles with and without roots in the validation set. U-Net receptive field input size (blue) and output size (green). The receptive field is the region of the input data which is provided to the neural network. The output size is the region of the original image which the output segmentation is for. The output is smaller than the input to ensure sufficient context for the classification of each pixel in the output Preprocessing and augmentation Each individual image tile was normalised to \([-0.5, +0.5]\), as centering inputs improves the convergence of networks trained with gradient descent [63].
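The per-epoch tile sampling and the input normalisation described above can be sketched as follows; the function names are ours, and the annotation is assumed to be a boolean root mask the same size as the image:

```python
import random
import numpy as np

def sample_training_tiles(annotation, tile_size=388, n_candidates=90,
                          max_tiles=40):
    """Per-epoch sampling of tile locations for one training image.

    Draws candidate centre-region locations at random, keeps only locations
    whose annotation patch contains at least one root pixel, and caps the
    number kept per image at max_tiles.
    """
    h, w = annotation.shape
    candidates = [(random.randrange(h - tile_size + 1),
                   random.randrange(w - tile_size + 1))
                  for _ in range(n_candidates)]
    with_roots = [(top, left) for top, left in candidates
                  if annotation[top:top + tile_size,
                                left:left + tile_size].any()]
    return with_roots[:max_tiles]

def normalise(tile):
    """Map 8-bit pixel values to [-0.5, +0.5] before input to the network."""
    return tile.astype(np.float32) / 255.0 - 0.5
```

Re-drawing the locations at the start of each epoch means the network rarely sees exactly the same tile twice, which complements the augmentations described next.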
Data augmentation is a way to artificially expand a dataset and has been found to improve the accuracy of CNNs for image classification [64]. We used color jitter as implemented in PyTorch, with the parameters 0.3, 0.3, 0.2 and 0.001 for brightness, contrast, saturation and hue respectively. We implemented elastic grid deformation (Fig. 5) as described by [65] with a probability of 0.9. Elastic grid deformations are parameterized by the standard deviation of a Gaussian distribution \(\sigma\), which is an elasticity coefficient, and \(\alpha\), which controls the intensity of the deformation. As opposed to [65], who suggest a constant value for \(\sigma\) and \(\alpha\), we used an intermediary parameter \(\gamma\) sampled uniformly from [0.0, 1.0). \(\gamma\) was then used as an interpolation coefficient for both \(\sigma\) from [15, 60] and \(\alpha\) from [200, 2500]. We found by visual inspection that the appropriate \(\alpha\) was larger for a larger \(\sigma\). If a too large \(\alpha\) was used for a given \(\sigma\) then the image would look distorted in unrealistic ways. The joint interpolation of both \(\sigma\) and \(\alpha\) ensured that the maximum intensity level for a given elasticity coefficient would not lead to over-distorted and unrealistic looking deformations. We further scaled \(\alpha\) by a random amount from [0.4, 1) so that less extreme deformations would also be applied. We consider the sampling of tiles from random locations within the larger images to provide similar benefits to the commonly used random cropping data augmentation procedure. The augmentations were run on 8 CPU threads during the training process. a Elastic grid applied to an image tile and b corresponding annotation. A white grid is shown to better illustrate the elastic grid effect. A red rectangle illustrates the region which will be segmented.
Augmentations such as elastic grid are designed to increase the likelihood that the network will work on similar data that is not included in the training set Loss functions quantify our level of unhappiness with the network predictions on the training set [66]. During training the network outputs a predicted segmentation for each input image. The loss function provides a way to measure the difference between the segmentation output by the network and the manual annotations. The result of the loss function is then used to update the network weights in order to improve its performance on the training set. We used the Dice loss as implemented in V-Net [67]. Only 0.54% of the pixels in the training data were roots, which represents a class imbalance. Training on imbalanced datasets is challenging because classifiers are typically designed to optimise overall accuracy, which can cause minority classes to be ignored [68]. Experiments on CNNs in particular have shown the effect of class imbalance to be detrimental to performance [69] and can cause issues with convergence. The Dice loss is an effective way to handle class imbalanced datasets as errors for the minority class will be given more significance. For predictions p, ground truth annotation g, and number of pixels in an image N, Dice loss was computed as: $$\begin{aligned} DL=1 - \frac{2 |p \cap g|}{|p| + |g|} =1 - \frac{2\sum _{i}^{N}p_{i}g_{i}}{\sum _{i}^{N}p_{i}+\sum _{i}^{N}g_{i}} \end{aligned}$$ The Dice coefficient corresponds to \(F_1\) when there are only two classes and ranges from 0 to 1. It is higher for better segmentations. Thus it is subtracted from 1 to convert it to a loss function to be minimized. We combined the Dice loss with cross-entropy multiplied by 0.3, which was found using trial and error. This combination of loss functions was used because it provided better results than either loss function in isolation during our preliminary experiments.
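The combined loss can be sketched in PyTorch. This is an illustrative simplification: it assumes a single-channel logit map with a sigmoid and binary cross-entropy, whereas the original U-Net formulation uses a two-channel softmax output; the Dice term follows the pixel-sum formula above:

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss: 1 - 2*sum(p*g) / (sum(p) + sum(g)).

    probs are predicted root probabilities in [0, 1]; target is the binary
    annotation. eps guards against division by zero on empty images.
    """
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

def combined_loss(logits, target, ce_weight=0.3):
    """Dice loss plus cross-entropy scaled by 0.3, as described in the text.

    A sketch assuming a single-channel logit map and a binary target of the
    same shape (the hypothetical names and the sigmoid variant are ours).
    """
    probs = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return dice_loss(probs, target) + ce_weight * ce
```

Because the Dice term normalises by the total predicted and annotated root mass, errors on the rare root class dominate the loss even though roots make up well under 1% of pixels.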
We used stochastic gradient descent (SGD) with Nesterov momentum based on the formula from [70]. We used a value of 0.99 for momentum, as this was used in the original U-Net implementation. We used an initial learning rate of 0.01, found by trial and error whilst monitoring the validation and training \(F_1\). The learning rate alters the magnitude of the updates to the network weights during each iteration of the training procedure. We used weight decay with a value of \(1 \times 10^{-5}\). A learning rate schedule was used in which the learning rate was multiplied by 0.3 every 30 epochs. Adaptive optimization methods such as Adam [71] were avoided due to results showing they can cause worse generalisation behaviour [72, 73]. The \(F_1\) computed on both the augmented training and validation sets after each epoch is shown in Fig. 6.

\(F_1\) on training and validation data sets. \(F_1\) is a measure of the system accuracy. The training \(F_1\) continues to improve whilst the validation \(F_1\) appears to plateau at around epoch 40. This is because the network is starting to fit to noise and other anomalies in the training data which are not present in the validation images.

We succeeded in getting both the U-Net and the Frangi filter system to segment roots in the images in the train and validation datasets (Table 2) as well as the held-out test set (Table 3). As \(F_1\), recall and precision are not defined for images without roots, we report the results on all images combined (Table 3). We report the mean and standard deviation of the per-image results from the images which contain roots (Table 4). When computing these per-image statistics we can see that U-Net performed better than the Frangi system on all metrics.
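The optimiser hyperparameters above (initial learning rate 0.01, multiplied by 0.3 every 30 epochs) imply a simple step schedule. A minimal sketch, not the authors' training code; the PyTorch calls in the comment are an assumption about how such a setup is typically wired up:

```python
# In PyTorch this setup would typically correspond to:
#   torch.optim.SGD(params, lr=0.01, momentum=0.99, nesterov=True,
#                   weight_decay=1e-5)
# together with torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.3).

def learning_rate(epoch, base_lr=0.01, decay=0.3, step=30):
    """Step schedule: the learning rate is multiplied by `decay`
    every `step` epochs, starting from `base_lr`."""
    return base_lr * decay ** (epoch // step)
```

With these values the rate stays at 0.01 for epochs 0-29, then drops to 0.003, so the best epoch reported later (73) was trained at the third step of the schedule.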
Table 2 Best U-Net model results on the train set and the validation set used for early stopping

Table 3 Metrics on all images combined for the held-out test set for the Frangi and U-Net segmentation systems

Table 4 Mean and standard deviation of results on images containing roots

Train and validation set metrics

The final model parameters were selected based on the performance on the validation set. The best validation results were attained after epoch 73, after approximately 9 h and 34 min of training. The performance on the training set was higher than on the validation set (Table 2). As parameters have been adjusted based on the data in the training and validation datasets, these results are unlikely to be reliable indications of the model performance on new data, so we report the performance on an unseen test set in the next section.

Test set results

The overall percentage of root pixels in the test data was 0.49%, which is lower than in either the training or validation dataset. Even on the image with the highest errors the CNN is able to predict many of the roots correctly (Fig. 7). Many of the errors appear to be on the root boundaries. Some of the fainter roots are also missed by the CNN. For the image with the highest (best) \(F_1\) the U-Net segmentation appears very similar to the original annotation (Fig. 8). The segmentation also contains roots which were missed by the annotator (Fig. 8d), which we were able to confirm by asking the annotator to review the results. U-Net was also often able to segment the root-soil boundary more cleanly than the annotator (Fig. 9). False negatives can be seen at the top of the image where the CNN has failed to detect a small section of root (Fig. 8d).

Original photo, annotation, segmentation output from U-Net and errors. To illustrate the errors the false positives are shown in red and the false negatives are shown in green.
This image is a subregion of a larger image for which U-Net got the worst (lowest) \(F_1\).

Original photo, annotation, segmentation output from U-Net and errors. To illustrate the errors the false positives are shown in red and the false negatives are shown in green. This image is a subregion of a larger image for which U-Net got the best (highest) \(F_1\). The segmentation also contains roots which were missed by the annotator. We were able to confirm this by having the annotator review these particular errors.

From left to right: image, annotation overlaid over the image in red, U-Net segmentation overlaid over the image in blue, and errors with false positives shown in red and false negatives shown in green. Many of the errors are along an ambiguous boundary region between the root and soil. Much of the error region is caused by annotation errors, rather than CNN segmentation errors.

The performance of U-Net as measured by \(F_1\) was better than that of the Frangi system when computing metrics on all images combined (Table 3). It also had a closer balance between precision and recall. The U-Net segmentations have a higher \(F_1\) for all images with roots in the test data (Fig. 10). Some segmentations from the Frangi system have an \(F_1\) below 0.4, whilst all the U-Net segmentations give an \(F_1\) above 0.6, with the highest being just less than 0.8. The average predicted value for U-Net was over twice that of the Frangi system, meaning U-Net predicted more than twice as many pixels to be root as Frangi did.

The \(F_1\) for the 8 images containing roots for both the Frangi and U-Net systems

The slight overestimation of total root pixels explains why recall is higher than precision for U-Net. The accuracy is above 99% for both systems. This is because accuracy is measured as the ratio of pixels predicted correctly, and the vast majority of pixels are soil, which both systems predicted correctly.
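The per-image precision, recall and \(F_1\) discussed above follow the standard pixel-wise definitions. A minimal sketch (not the evaluation code used in the paper), operating on flat binary masks:

```python
def segmentation_metrics(pred, target):
    """Pixel-wise precision, recall and F1 for binary masks given as
    flat sequences of {0, 1} values (pred = model output, target = annotation)."""
    tp = sum(1 for p, g in zip(pred, target) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, target) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, target) if p == 0 and g == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

Note that true negatives (correctly predicted soil) appear in none of these formulas, which is why \(F_1\) stays informative at a 99%+ pixel accuracy while plain accuracy does not.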
For the two images which did not contain roots, each misclassified pixel is counted as a false positive. The Frangi system gave 1997 and 1432 false positives on these images, whilst the U-Net system gave 508 and 345. The Spearman rank correlation between the corresponding U-Net and line-intersect root intensities for the test data is 0.9848 (\(p=2.288 \times 10^{-7}\)). The U-Net segmentation can be seen to give a similar root intensity to the manual annotations (Fig. 11).

Normalised root length from the U-Net segmentations, manual annotations and the line-intersect counts for the 10 test images. The measurements are normalised using the maximum value. All three methods have the same maximum value (Image 6)

We compare the root intensity with the segmented root length for 867 images taken in 2016 (Fig. 12). The two measurements have a Spearman rank correlation of 0.9748 \((p < 10^{-8})\) and an \(R^2\) of 0.9217. Although the two measurements correlate strongly, there are some notable deviations, including images for which U-Net predicted roots not observed by the manual annotator. From this scatter plot we can see that the data is heteroscedastic, forming a cone shape around the regression line, with the variance increasing as root intensity increases in both measurements.

RI vs segmented root length for 867 images taken in 2016. The two measurements have a Spearman rank correlation of 0.9748 and an \(R^2\) of 0.9217

We have presented a method to segment roots from soil using a CNN. The segmentation quality as shown in Figs. 7c and 8c, and the agreement between the root length approximations given by our automated method and the manual line-intersect method for the corresponding images as shown in Figs. 11 and 12, are a strong indication that the system works well for the intended task of quantifying roots.
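The Spearman rank correlations reported above can be computed with the classic rank-difference formula. A minimal sketch, valid only for tie-free data (the paper's exact tooling is not stated; a statistics library such as SciPy would handle ties properly):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n*(n^2-1)).

    Assumes x and y have equal length and contain no tied values;
    with ties, rank averaging and the Pearson form are needed instead.
    """
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Because it compares ranks rather than raw values, this statistic is insensitive to the heteroscedasticity visible in the scatter plot, which makes it a reasonable choice for the comparison above.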
The high correlation coefficient between the measurements from the automated and manual methods supports our hypothesis that a trained U-Net is able to effectively discriminate between roots and soil in RGB photographs. The consistently superior performance of the U-Net system on the unseen test set over the Frangi system, as measured by \(F_1\) score, supports our second hypothesis that a trained U-Net will outperform a Frangi filter based approach. The good generalisation behaviour, and how closely the validation set error approximated the test set error, indicate that we would likely not need as many annotations for validation on future root datasets. As shown in Fig. 12 there are some images for which U-Net predicted roots whilst the line-intersect count was 0. When investigating these cases we found some false positives caused by scratches in the acrylic glass. Such errors could be problematic as they make it hard to attain accurate estimates of maximum rooting depth: the scratches could cause rooting depth to be overestimated. One way to fix this would be to manually design a dataset with more scratched panels in it, in order to train U-Net not to classify them as roots. Another possible approach would be to automatically find difficult regions of images using an active learning approach such as [74], which would allow the network to query which areas of images should be annotated based on its uncertainty. An oft-stated limitation of CNNs is that they require large-scale datasets [75] with thousands of densely labelled images [76]. In this study we were able to train from scratch, validate and test a CNN with only 50 images, which were annotated in a few days by a single agronomist with no annotation or machine learning experience. Our system was also designed to work with an existing photography setup using an ordinary off-the-shelf RGB camera.
This makes our method more broadly accessible than methods which require a more complex multi-spectral camera system. We used a loss function which combined Dice and cross-entropy. In preliminary experiments we found this combined loss function to be more effective than either Dice or cross-entropy used in isolation. Both [77] and [78] found empirically that a combination of Dice and cross-entropy was effective at improving accuracy. Although [77] claims the combination of the loss functions is a way to yield better performance in terms of both pixel accuracy and segmentation metrics, we feel more research is needed to understand the exact benefits of such combined loss functions.

Converting from segmentation to root length was not the focus of the current study. The method we used consisted of skeletonization followed by pixel counting. One limitation of this method is that it may lead to different length estimates depending on the orientation of the roots [79]; see [79] for an in-depth investigation and proposed solutions.

Finding ways to improve annotation quality would also be a promising direction for further work. Figure 9 shows how even a high-quality segmentation will still have a large number of errors due to issues with annotation quality, which means the \(F_1\) given for a segmentation may not be representative of the system's true performance. [80] found significant disagreement between human raters in segmenting tumour regions, with Dice (equivalent to our \(F_1\)) scores between 74 and 85%. We suspect a similar level of error is present in our root annotations and that improving annotation quality would improve the metrics. Improved annotation quality would be particularly useful for the test and validation datasets, as it would allow more reliable evaluation and selection of the best-performing model.
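On the root-length estimate: the orientation sensitivity of skeleton pixel counting noted above arises because a diagonal pixel step covers \(\sqrt{2}\) times the distance of an orthogonal one. A hypothetical illustration (not the paper's conversion code), measuring an ordered skeleton path:

```python
import math

def path_length(pixels, scale=1.0):
    """Length of an ordered skeleton path of (row, col) pixel coordinates.

    Plain pixel counting gives every step length 1; summing Euclidean
    step lengths instead weights diagonal steps by sqrt(2), reducing the
    orientation bias discussed in the text. `scale` is a hypothetical
    pixels-to-physical-units calibration factor.
    """
    length = 0.0
    for (r0, c0), (r1, c1) in zip(pixels, pixels[1:]):
        length += math.hypot(r1 - r0, c1 - c0)
    return length * scale
```

A root running at 45 degrees through n pixels is about 41% longer than naive pixel counting would report, which is the kind of bias [79] investigates.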
One way to improve the quality of annotations would be to combine annotations from different experts using a majority vote algorithm such as the one used by [80], although caution should be taken when implementing such methods, as in some cases they can accentuate more obvious features, causing an overestimation of performance [81]. It may also be worth investigating ways to reduce the weight of errors very close to the border of an annotation: as seen in Fig. 9, these are often issues with annotation quality, or merely ambiguous boundary regions where a labelling of either root or soil should not be detrimental to the \(F_1\). One way to solve the problem of misleading errors caused by ambiguous boundary regions is the approach taken by [41], which involved having a boundary region around each area of interest in which a classification either way does not affect the overall performance metrics. We excluded an image not containing roots and an image containing a sticker from our training and validation data. During training we also excluded parts of the images where no roots were found, in order to handle the severe class imbalance present in the dataset. A limitation of this approach is that it may be useful for the network to learn to deal with stickers, and in some cases images without roots could contain hard negative examples which the network must learn to handle in order to achieve acceptable performance. For future research we aim to explore how well the segmentation system performance transfers to photographs of other crop species and from different experimental setups. In our work so far we have explored ways to deal with a limited dataset by using data augmentation. Transfer learning is another technique which has been found to improve the performance of CNNs when compared to training from scratch for small datasets [47].
We can simultaneously investigate both transfer learning and the feasibility of our system for different kinds of plants by fine-tuning our existing network on root images from new plant species. [82] found pre-training U-Net to both substantially reduce training time and prevent overfitting. Interestingly, they pre-trained U-Net on two different datasets containing different types of images and found similar performance improvements in both cases. Such results indicate that pre-training U-Net using images which are substantially different from our root images may also provide performance advantages. In contrast, [83] found training from scratch to give equivalent results to a transfer learning approach, which suggests that in some cases reduced training time, rather than final model performance, will be the benefit of transfer learning. As shown in Fig. 7, the CNN would leave gaps where a root was covered by large amounts of soil. An approach such as [84] could be used to recover such gaps, which may improve the biological relevance of our root length estimates and potentially facilitate the extraction of more detailed root architecture information. As opposed to U-Net, the Frangi filter is included in popular image processing packages such as MATLAB and scikit-image. Although the Frangi filter was initially simple to implement, we found the scikit-image implementation too slow to facilitate optimisation on our dataset, and substantial modifications were required to make optimisation feasible. Another disadvantage of the CNN we implemented is that, as opposed to the Frangi filter, it requires a GPU for training. It is, however, possible to use a CPU for inference. [85] demonstrated that in some cases U-Net can be compressed to 0.1% of its original parameter count with a very small drop in accuracy. Such an approach could be useful for making our proposed system more accessible to hardware-constrained researchers.
The data used for the current study is available from [86]. The source code for the systems presented is available from [87] and the trained U-Net model is available from [88]. Trachsel S, Kaeppler SM, Brown KM, Lynch JP. Shovelomics: high throughput phenotyping of maize (Zea mays L.) root architecture in the field. Plant Soil. 2011;341(1–2):75–87. Wasson AP, Richards RA, Chatrath R, Misra SC, Prasad SVS, Rebetzke GJ, Kirkegaard JA, Christopher J, Watt M. Traits and selection strategies to improve root systems and water uptake in water-limited wheat crops. J Exp Bot. 2012;63(9):3485–98. https://doi.org/10.1093/jxb/ers111. Wasson A, Bischof L, Zwart A, Watt M. A portable fluorescence spectroscopy imaging system for automated root phenotyping in soil cores in the field. J Exp Bot. 2016;67(4):1033–43. https://doi.org/10.1093/jxb/erv570. Maeght J-L, Rewald B, Pierret A. How to study deep roots-and why it matters. Front Plant Sci. 2013;4(299):1–14. https://doi.org/10.3389/fpls.2013.00299. Lynch JP, Brown KM. New roots for agriculture: exploiting the root phenome. Philos Trans R Soc B Biol Sci. 2012;367(1595):1598–604. https://doi.org/10.1098/rstb.2011.0243. Bengough AG, Mullins CE. Penetrometer resistance, root penetration resistance and root elongation rate in two sandy loam soils. Plant Soil. 1991;131(1):59–66. https://doi.org/10.1007/bf00010420. Wojciechowski T, Gooding MJ, Ramsay L, Gregory PJ. The effects of dwarfing genes on seedling root growth of wheat. J Exp Bot. 2009;60(9):2565–73. https://doi.org/10.1093/jxb/erp107. Watt M, Moosavi S, Cunningham SC, Kirkegaard JA, Rebetzke GJ, Richards RA. A rapid, controlled-environment seedling root screen for wheat correlates well with rooting depths at vegetative, but not reproductive, stages at two field sites. Ann Bot. 2013;112(2):447–55. https://doi.org/10.1093/aob/mct122. van Noordwijk M, Floris J. Loss of dry weight during washing and storage of root samples. Plant Soil. 1979;53(1–2):239–43. 
https://doi.org/10.1007/bf02181896. Mooney SJ, Pridmore TP, Helliwell J, Bennett MJ. Developing X-ray computed tomography to non-invasively image 3-d root systems architecture in soil. Plant Soil. 2011;352(1–2):1–22. https://doi.org/10.1007/s11104-011-1039-9. Stingaciu L, Schulz H, Pohlmeier A, Behnke S, Zilken H, Javaux M, Vereecken H. In situ root system architecture extraction from magnetic resonance imaging for water uptake modeling. Vadose Zone J. 2013;. https://doi.org/10.2136/vzj2012.0019. Taylor H, Huck M, Klepper B, Lund Z. Measurement of soil-grown roots in a rhizotron 1. Agron J. 1970;62(6):807–9. Huck MG, Taylor HM. The rhizotron as a tool for root research. Adv Agron. 1982;35:1–35. https://doi.org/10.1016/S0065-2113(08)60320-X. Van de Geijn S, Vos J, Groenwold J, Goudriaan J, Leffelaar P. The wageningen rhizolab—a facility to study soil-root-shoot-atmosphere interactions in crops. Plant Soil. 1994;161(2):275–87. Taylor HM, Upchurch DR, McMichael BL. Applications and limitations of rhizotrons and minirhizotrons for root studies. Plant Soil. 1990;129(1):29–35. https://doi.org/10.1007/bf00011688. Johnson MG, Tingey DT, Phillips DL, Storm MJ. Advancing fine root research with minirhizotrons. Environmental and Experimental Botany. 2001;45(3):263–89. https://doi.org/10.1016/S0098-8472(01)00077-6 Rewald B, Ephrath JE. Minirhizotron techniques. In: Eshel A, Beeckman T, editors. Plant roots: the hidden half. New York: CRC Press; 2013. p. 1–15. https://doi.org/10.1201/b14550. Newman E. A method of estimating the total length of root in a sample. J Appl Ecol. 1966;3(1):139–45. https://doi.org/10.2307/2401670. Tennant D. A test of a modified line intersect method of estimating root length. J Ecol. 1975;63(3):995–1001. https://doi.org/10.2307/2258617. Thorup-Kristensen K. Are differences in root growth of nitrogen catch crops important for their ability to reduce soil nitrate-n content, and how can this be measured? Plant Soil. 2001;230(2):185–95. 
Upchurch DR, Ritchie JT. Root observations using a video recording system in mini-rhizotrons1. Agron J. 1983;75(6):1009. https://doi.org/10.2134/agronj1983.00021962007500060033x. Ulas A, auf'm Erley GS, Kamh M, Wiesler F, Horst WJ. Root-growth characteristics contributing to genotypic variation in nitrogen efficiency of oilseed rape. J Plant Nutr Soil Sci. 2012;175(3):489–98. https://doi.org/10.1002/jpln.201100301. Heeraman DA, Juma NG. A comparison of minirhizotron, core and monolith methods for quantifying barley (Hordeum vulgare L.) and fababean (Vicia faba L.) root distribution. Plant Soil. 1993;148(1):29–41. https://doi.org/10.1007/bf02185382. Thorup-Kristensen K, Rasmussen CR. Identifying new deep-rooted plant species suitable as undersown nitrogen catch crops. J Soil Water Conserv. 2015;70(6):399–409. https://doi.org/10.2489/jswc.70.6.399. Böhm W, Maduakor H, Taylor HM. Comparison of five methods for characterizing soybean rooting density and development1. Agron J. 1977;69(3):415. https://doi.org/10.2134/agronj1977.00021962006900030021x. Lobet G, Draye X, Périlleux C. An online database for plant image analysis software tools. Plant Methods. 2013;9(1):38. Cai J, Zeng Z, Connor JN, Huang CY, Melino V, Kumar P, Miklavcic SJ. Rootgraph: a graphic optimization tool for automated image analysis of plant roots. J Exp Bot. 2015;66(21):6551–62. Zeng G, Birchfield ST, Wells CE. Automatic discrimination of fine roots in minirhizotron images. New Phytol. 2007;. https://doi.org/10.1111/j.1469-8137.2007.02271.x. Ingram KT, Leers GA. Software for measuring root characters from digital images. Agron J. 2001;93(4):918. https://doi.org/10.2134/agronj2001.934918x. Vamerali T, Ganis A, Bona S, Mosca G. An approach to minirhizotron root image analysis. Plant Soil. 1999;217(1–2):183–93. Nagel KA, Putz A, Gilmer F, Heinz K, Fischbach A, Pfeifer J, Faget M, Blossfeld S, Ernst M, Dimaki C, Kastenholz B, Kleinert A-K, Galinski A, Scharr H, Fiorani F, Schurr U. 
GROWSCREEN-rhizo is a novel phenotyping robot enabling simultaneous measurements of root and shoot growth for plants grown in soil-filled rhizotrons. Funct Plant Biol. 2012;39(11):891. https://doi.org/10.1071/fp12023. Nakaji T, Noguchi K, Oguma H. Classification of rhizosphere components using visible—near infrared spectral images. Plant Soil. 2007;310(1–2):245–61. https://doi.org/10.1007/s11104-007-9478-z. Wang Z, Burch WH, Mou P, Jones RH, Mitchell RJ. Accuracy of visible and ultraviolet light for estimating live root proportions with minirhizotrons. Ecology. 1995;76(7):2330–4. Smit AL, Zuin A. Root growth dynamics of brussels sprouts (Brassica olearacea var.gemmifera) and leeks ((Allium porrum L.) as reflected by root length, root colour and UV fluorescence. Plant Soil. 1996;185(2):271–80. https://doi.org/10.1007/bf02257533. Goodwin RH, Kavanagh F. Fluorescing substances in roots. Bull Torrey Bot Club. 1948;75(1):1. https://doi.org/10.2307/2482135. Rewald B, Meinen C, Trockenbrodt M, Ephrath JE, Rachmilevitch S. Root taxa identification in plant mixtures—current techniques and future challenges. Plant Soil. 2012;359(1–2):165–82. https://doi.org/10.1007/s11104-012-1164-0. Zeng G, Birchfield ST, Wells CE. Detecting and measuring fine roots in minirhizotron images using matched filtering and local entropy thresholding. Mach Vis Appl. 2006;17(4):265–78. Zeng G, Birchfield ST, Wells CE. Rapid automated detection of roots in minirhizotron images. Mach Vis Appl. 2010;21(3):309–17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436. Tsaftaris SA, Minervini M, Scharr H. Machine learning for plant phenotyping needs image processing. Trends Plant Sci. 2016;21(12):989–91. Pound MP, Atkinson JA, Townsend AJ, Wilson MH, Griffiths M, Jackson AS, Bulat A, Tzimiropoulos G, Wells DM, Murchie EH, et al. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. GigaScience. 2017. https://doi.org/10.1093/gigascience/gix083. 
Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012. p. 1097–105. Farabet C, Couprie C, Najman L, LeCun Y. Learning hierarchical features for scene labeling. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1915–29. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015, p. 1–9. Tompson J, Goroshin R, Jain A, LeCun Y, Bregler C. Efficient object localization using convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015, p. 648–56. Douarre C, Schielein R, Frindel C, Gerth S, Rousseau D. Transfer learning from synthetic data applied to soil-root segmentation in X-ray tomography images. J Imaging. 2018;4(5):65. Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299–312. Yasrab R, Atkinson JA, Wells DM, French AP, Pridmore TP, Pound MP. Rootnav 2.0: deep learning for automatic navigation of complex plant root architectures. GigaScience. 2019;8(11):123. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer; 2015, p. 234–41. Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. In: International conference on medical image computing and computer-assisted intervention. Springer; 1998, p. 130–7. Rasmussen CR, Thorup-Kristensen K, Dresbøll DB. Uptake of subsoil water below 2 m fails to alleviate drought response in deep-rooted Chicory (Cichorium intybus L.). Plant Soil. 2020;446:275–90. https://doi.org/10.1007/s11104-019-04349-7.
Marsh B. Measurement of length in random arrangements of lines. J Appl Ecol. 1971;8:265–7. Andrén O, Elmquist H, Hansson A-C. Recording, processing and analysis of grass root images from a rhizotron. Plant Soil. 1996;185(2):259–64. https://doi.org/10.1007/bf02257531. Van der Walt S, Schönberger JL, Nunez-Iglesias J, Boulogne F, Warner JD, Yager N, Gouillart E, Yu T. scikit-image: image processing in python. PeerJ. 2014;2:453. Hansen N. The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772; 2016. Paszke A, Gross S, Chintala S, Chanan G, Yang E, DeVito Z, Lin Z, Desmaison A, Antiga L, Lerer A. Automatic differentiation in pytorch. In: NIPS-W; 2017. Wu Y, He K. Group normalization. In: Proceedings of the European conference on computer vision (ECCV). 2018. p. 3–19. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167; 2015. Ioffe S. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In: Advances in neural information processing systems, 2017. p. 1945–1953. Li X, Chen S, Hu X, Yang J. Understanding the disharmony between dropout and batch normalization by variance shift. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2019. p. 2682–90. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE international conference on computer vision; 2015, p. 1026–34. Kayalibay B, Jensen G, van der Smagt P. Cnn-based segmentation of medical imaging data. arXiv preprint arXiv:1701.03056; 2017. Le Cun Y, Kanter I, Solla SA. Eigenvalues of covariance matrices: application to neural-network learning. Phys Rev Lett. 1991;66(18):2396. Perez L, Wang J. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621; 2017. Simard PY, Steinkraus D, Platt JC. 
Best practices for convolutional neural networks applied to visual document analysis. In: Proceedings of the seventh international conference on document analysis and recognition. IEEE; 2003, p. 958. Karpathy A. Convolutional neural networks for visual recognition. Course notes hosted on GitHub. http://cs231n.github.io. Accessed 4 Feb 2020. Milletari F, Navab N, Ahmadi S-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D Vision (3DV). IEEE; 2016, p. 565–71. Visa S, Ralescu A. Issues in mining imbalanced data sets: a review paper. In: Proceedings of the sixteenth midwest artificial intelligence and cognitive science conference; 2005, p. 67–73. Buda M, Maki A, Mazurowski MA. A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 2018;106:249–59. Sutskever I, Martens J, Dahl G, Hinton G. On the importance of initialization and momentum in deep learning. In: International conference on machine learning; 2013, p. 1139–47. Kingma D, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980; 2014. Wilson AC, Roelofs R, Stern M, Srebro N, Recht B. The marginal value of adaptive gradient methods in machine learning. In: Advances in neural information processing systems; 2017, p. 4151–61. Zhang J, Mitliagkas I. Yellowfin and the art of momentum tuning. arXiv preprint arXiv:1706.03471; 2017. Yang L, Zhang Y, Chen J, Zhang S, Chen DZ. Suggestive annotation: a deep active learning framework for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer; 2017, p. 399–407. Ma B, Liu S, Zhi Y, Song Q. Flow based self-supervised pixel embedding for image segmentation. arXiv preprint arXiv:1901.00520; 2019. Roy AG, Siddiqui S, Pölsterl S, Navab N, Wachinger C. 'Squeeze & excite' guided few-shot segmentation of volumetric images. Med Image Anal. 2020;59:101587. Khened M, Kollerathu VA, Krishnamurthi G.
Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers. Med Image Anal. 2019;51:21–45. Roy AG, Conjeti S, Karri SPK, Sheet D, Katouzian A, Wachinger C, Navab N. Relaynet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed Opt Express. 2017;8(8):3627–42. Kimura K, Kikuchi S, Yamasaki S-I. Accurate root length measurement by image analysis. Plant Soil. 1999;216(1–2):117–27. Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R, et al. The multimodal brain tumor image segmentation benchmark (brats). IEEE Trans Med Imaging. 2015;34(10):1993. Lampert TA, Stumpf A, Gançarski P. An empirical study into annotator agreement, ground truth estimation, and algorithm evaluation. IEEE Trans Image Process. 2016;25(6):2557–72. Iglovikov V, Shvets A. Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation. arXiv preprint arXiv:1801.05746; 2018. He K, Girshick R, Dollár P. Rethinking imagenet pre-training. In: Proceedings of the IEEE international conference on computer vision. 2019. p. 4918–27. Chen H, Valerio Giuffrida M, Doerner P, Tsaftaris SA. Adversarial large-scale root gap inpainting. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops; 2019. Mangalam K, Salzmann M. On compressing u-net using knowledge distillation. arXiv preprint arXiv:1812.00249; 2018. Smith AG, Petersen J, Selvan R, Rasmussen CR. Data for paper 'Segmentation of Roots in Soil with U-Net'. https://doi.org/10.5281/zenodo.3527713. Smith AG. Abe404/segmentation_of_roots_in_soil_with_unet: 1.3. https://doi.org/10.5281/zenodo.3627186. Smith AG, Petersen J, Selvan R, Rasmussen CR. Trained U-Net Model for paper 'Segmentation of Roots in Soil with U-Net'. Zenodo; 2019. https://doi.org/10.5281/zenodo.3484015.
We thank Kristian Thorup-Kristensen and Dorte Bodin Dresbøll for their continued support and supervision and for the creation and maintenance of the rhizotron facility used in this study. We thank the Villum Foundation (DeepFrontier project, grant number VKR023338) for financial support for this study. The funding body did not play any role in the design of the study, data collection, analysis, interpretation of data, or writing of the manuscript.

Department of Plant and Environmental Sciences, University of Copenhagen, Højbakkegaard Allé 13, 2630 Taastrup, Denmark: Abraham George Smith & Camilla Ruø Rasmussen

Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen Ø, Denmark: Abraham George Smith, Jens Petersen & Raghavendra Selvan

AS implemented the U-Net and baseline systems and wrote the manuscript with assistance from all authors. CR did the annotations and collaborated on the design of the study and the introduction. RS and JP provided valuable machine learning expertise. All authors read and approved the final manuscript. Correspondence to Abraham George Smith.

Smith, A.G., Petersen, J., Selvan, R. et al. Segmentation of roots in soil with U-Net. Plant Methods 16, 13 (2020). https://doi.org/10.1186/s13007-020-0563-0

Keywords: Rhizotron; Root intersection method
CommonCrawl
nLab General Discussions: Gödel's second incompleteness theorem

Author: Richard Williamson (Jan 5th 2015; edited Jan 5th 2015)

Hello all,

I am very interested in, and have been thinking for a while now about, trying to give a categorical proof, and indeed even formulate a nice categorical statement, of Gödel's second incompleteness theorem. There was a recent post here in which this cropped up: as mentioned in that post, Joyal announced a proof in the 1970s, but details have never been given. It seems to me that many people would be interested in a proof, including several who read and write posts to the nForum. Therefore, I would like to propose that we try to come up with a proof in this thread.

Just for a little more motivation, perhaps I can also mention that I was somewhat surprised, when attempting to find a good exposition of a proof of the second incompleteness theorem, that in fact this seems to be next to impossible! Gödel himself did not give details, and every exposition that I have seen skips some: the standard approach makes use of the fact that certain conditions variously ascribed to Hilbert, Bernays, or Gödel (usually the first two) hold, but showing that these conditions indeed hold appears to be rather horrific. A good, conceptual proof would, I feel, be really worth having.

I hope that it may be possible to avoid these conditions, and find an alternative way of proceeding, in a categorical setting. The rough idea is to 'carry out the proof of the first incompleteness theorem inside one's formal system', but putting this into practice is another thing entirely: I hope that a categorical approach may lead to an elegant way to proceed.

I am happy to take responsibility for writing up any proof that is arrived at, assigning authorship to all contributors, or to an appropriate group title, or in any other way that is considered most appropriate.
Others would be welcome of course to write it up too in their own style (I would use a public git repository to post the source code).

I have some ideas about how to approach a proof. Rather than write an enormous post attempting to cover everything in one go, I will instead try to get the ball rolling with one question, which I feel is actually the key to the story: how can Gödel numbering be understood in a good way from a categorical point of view?

A nice categorical proof of the first incompleteness theorem was outlined by Lawvere in the 1960s, and indeed has been written up in one form on the nLab. However, such proofs are not complete, exactly because they skip over the construction of a Gödel numbering. Whilst the ordinary construction of a Gödel numbering would no doubt suffice to complete such proofs, I find this unsatisfactory: I would not regard such a proof as truly categorical. I feel that there should be a better conceptual formalism in which to understand Gödel numbering from a categorical point of view. I do not have an answer to my question yet, just some 'very early stage ideas'. These may very well have flaws, but I hope that through discussing these ideas, or alternative ones that people can suggest, we will be able to refine them.

Let us first fix a category T to which we would like Gödel's second incompleteness theorem to apply. This category should 'have a sufficiently nice internal logic, and contain enough arithmetic'. Joyal's arithmetic universes seem intended to be such categories, but 40 years on, it seems to me that one should just follow one's nose. A Heyting pretopos with a parameterised natural numbers object seems like a good place to begin. For reasons that I hope to explain at a later time, slightly stronger properties of the logic of T may be needed, but we can ignore this for now. I feel that it may be conceptually nicer, however, to work with the hyperdoctrine associated to T rather than T itself.
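To fix notation for what follows, here is a standard sketch (not specific to this thread's proposal) of what 'the hyperdoctrine associated to T' can be taken to mean:

```latex
% Sketch: the subobject hyperdoctrine of a Heyting pretopos T.
% P sends an object to its poset of subobjects and a map to pullback:
\[
  P \colon T^{\mathrm{op}} \to \mathbf{Heyt},
  \qquad
  P(X) = \operatorname{Sub}(X),
  \qquad
  P(f) = f^{*}.
\]
% Quantifiers are interpreted by adjoints to substitution,
\[
  \exists_f \dashv f^{*} \dashv \forall_f
  \qquad \text{for each } f \colon X \to Y \text{ in } T,
\]
% subject to the Beck--Chevalley condition.  A 'proof relevant'
% variant would let P land in categories rather than posets.
```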
An idea that I have regarding a possible way to understand Gödel numbering is: as a 'morphism of hyperdoctrines' from T to a 'hyperdoctrine internal to T' which is itself a rich enough theory of arithmetic. There are, however, many things to make precise here. Even the notion of a morphism of hyperdoctrines does not seem to have been studied in depth in the literature, but it shouldn't be too difficult to formulate an appropriate one for the purposes at hand.

More difficult would seem to be the 'internal' aspect. One can try to ignore the hyperdoctrinal point of view (or to pass to and fro between it and just working with T), and try to work with a category internal to T which has similar properties. But it still seems to me that there is work to do: can logical theories be 'internalised' in this way? How to construct the relevant category/hyperdoctrine internal to T? And the question that I feel that I understand least: how to make sense of a 'morphism of hyperdoctrines from the hyperdoctrine associated to T to the internal category/hyperdoctrine'?

There are other important aspects. I have written 'hyperdoctrine', but there are various choices to be made as to the kind of hyperdoctrine to be used. An important part of Gödel numbering is of course that proofs must be arithmetised, and therefore it would seem to me that the hyperdoctrine should be 'proof relevant', namely land in (appropriate) categories rather than (appropriate) posets. Actually carrying out the Gödel numbering itself would seem to me straightforward along the lines of Gödel's original approach, given an appropriate categorical context in which to understand it.

Thoughts/pointing out of flaws/alternative ideas are very welcome!

Author: SridharRamesh

Alas, this had been the subject of the thesis I was working on until I chose to leave grad school (for personal reasons), but I never found the discipline to complete writing up my ideas. I still hoped to eventually force myself to write it and still, I suppose, entertain the fantasy of thus completing my Ph.D., but perhaps I may never develop the relevant willpower.
I may as well sketch out some of the basic ideas (I am quickly copying and pasting some snippets from emails here before commuting to work, out of the fear that these starting ideas are such low-hanging fruit that once anyone else begins thinking about the problem, they would independently strike upon the same thoughts; I will return later to edit this post to make the markup prettier and the exposition clearer; this may be unreadable to start with):

I trust you understand the concepts of an "internal category", "internal category with finite limits", etc.; that is, the concepts of "category", "category with (chosen) finite limits", and so on are given by essentially algebraic theories, and thus can be given models as suitable finite-limit diagrams inside any category with finite limits. Given a model M of [whatever essentially algebraic theory] internal to finite-limit category T, and a finite-limit preserving functor f : T -> T', we get also a model f[M] internal to T', taking the image in the obvious way.

In particular, we can take f : T -> Set to be the global sections functor Glob = Hom_T(1, -), and thus obtain, from any finite-limit category C internal to T, a genuine finite-limit category Glob[C].

What's more, there's a finite-limit preserving functor from Glob[C] to T given as follows: take an object X in Glob[C] (amounting to a morphism 1 -> Ob(C) in T) to the object Hom_C(1, X) in T (the subobject of Mor(C) constructed by appropriate finite limit), with corresponding action on the morphisms of Glob[C] using the composition structure of C. This basically amounts to an internal version of the global sections functor [more generally, we have such a functor from Hom_T(X, -)[C] to the slice category T/X for each object X, comprising a natural transformation between the corresponding functors from T to the category of finite-limit categories and on-the-nose finite-limit functors]. Let's call this functor G_C for now, till I come up with a better name.
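In symbols, the two functors just described are (a sketch; the finite limit is only indicated loosely):

```latex
% External global sections on T:
\[
  \operatorname{Glob} \;=\; \operatorname{Hom}_T(\mathbf{1},-)
  \;\colon\; T \to \mathbf{Set}.
\]
% Internal global sections: for C internal to T and X an object of
% Glob[C], i.e. a map  x : 1 -> Ob(C)  in T, the object G_C(X) is the
% subobject of Mor(C) of morphisms whose domain is the terminal object
% of C and whose codomain is X: the pullback of
% (dom, cod) : Mor(C) -> Ob(C) x Ob(C)  along the point  (t, x),
% where  t : 1 -> Ob(C)  names the terminal object of C:
\[
  G_C(X) \;=\; \operatorname{Hom}_C(\mathbf{1}_C, X)
  \;=\; (\mathrm{dom},\mathrm{cod})^{*}(t, x)
  \;\hookrightarrow\; \operatorname{Mor}(C).
\]
```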
By an "introspective theory", I mean:

A) a category T with finite limits [i.e., an essentially algebraic theory]

B) a category V with finite limits internal to T

C) a finite-limit preserving functor S from T to Glob[V]

D) a natural transformation J from the identity functor on T to the composition G_V . S

The idea here is that an introspective theory amounts to a theory extending the theory of finite-limit categories, with furthermore the property that every model of this theory has also an internal model of this theory, and a homomorphism into the global sections of that internal model. Note that the theory of introspective theories is itself essentially algebraic. In particular, there is an initial introspective theory. I call this the theory of "Goedel-Loeb categories".

Next, recall how, given a category with terminal object C internal to category with finite limits T, we defined a functor G_C from Glob[C] to T amounting to taking an object of Glob[C] to the T-definable set of its global sections. Well, I should have noted in somewhat more generality that in exactly the same way, we have a functor "Hom_C" from Glob[C]^{op} x Glob[C] to T, taking a pair of objects in Glob[C] to the T-definable set of C-morphisms between them.

From now on for a while, we'll be working within the context of an arbitrary given introspective theory <T, V, S, J>. I'll frequently speak in the "internal logic" of T for convenience. Note that S[V] is a category internal to Glob[V]; in the internal logic of T, we might well say S[V] describes a category internal to V. When adopting this perspective, I shall refer to S[V] as V_1.

Note also that the action of J induces a map from Mor(V) to (G_V . S) Mor(V). That is, in the internal logic of T, we have an object S(Mor(V)) = Mor(V_1) in V, whose global sections comprise the set (G_V . S) Mor(V), into which there is a mapping from the set Mor(V).
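Compressed into one display, the data (A)-(D) above are (same content, in the notation already introduced):

```latex
% An introspective theory: a lex category T, an internal lex category V,
% a lex functor S into the externalisation of V, and a comparison J:
\[
  T \in \mathbf{Lex}, \qquad
  V \in \mathbf{Cat}_{\mathrm{lex}}(T), \qquad
  S \colon T \to \operatorname{Glob}[V] \ \text{lex}, \qquad
  J \colon \mathrm{Id}_T \Rightarrow G_V \circ S .
\]
% Since this theory is itself essentially algebraic, an initial such
% structure exists: the "Goedel-Loeb" case discussed below.
```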
The naturality diagrams for J, along with the morphisms in T giving the FinLimCat structure of V, will cause this mapping to amount to a finite limit preserving functor from V to Glob[V_1]. Let's call this functor f for now.

Accordingly, we can define a bifunctor of type V^{op} x V -> V given by the rule (a, b) |-> Hom_{V_1} (f(a), f(b)), the output of which I'll refer to as [](a -> b) for evocative reasons. In the particular special case when a = 1, we will write simply []b.

(N.B.: I've not defined any such thing as "a -> b" on its own; there's no assumption of cartesian closure anywhere. It is important to me to note that all the basics can be carried out in a very minimal context. For that matter, I do not impose any arithmetic structure at any point; it is even more important to me to note that the essence of Goedel coding is not to do with arithmetic, though that happens to be one medium through which it can be realized.)

Author: SridharRamesh

I've hit character limits on a single post, so I'll stop there for now, but outline some of what comes next:

It is relatively straightforward to show that [] acts as a K4 modal operator, in that in addition to its bifunctoriality, it preserves finite limits in its second argument, and also comes with a canonical natural transformation from [](a -> b) to [][](a -> b) = [](1 -> [](a -> b)), which is the analogue of the modal axiom 4. With a fair amount of more work, we can also show that it satisfies the analogue of Loeb's theorem. Loeb's theorem is usually stated as something like "[]([]A -> A) |- []A" (Goedel's Second Incompleteness Theorem is the special case where A is thought of as definitional falsehood; i.e., an initial object). But I'll want to note, first of all, that if we think of x |-> [](x -> A) as a presheaf P, that Loeb's theorem can instead be looked at as P(P(1)) |- P(1).
And indeed, we'll have a version of Loeb's theorem applying to suitably internal presheaves more generally than simply the representable ones.

And, secondly, I'll state ominously that when we _do_ get around to establishing Loeb's theorem in this context, since we are in a category and not a mere preorder, we can further ask nontrivially about equations satisfied of the morphism taking the place of the turnstile. And what will be of particular note to us will be that the morphism we construct to establish Loeb's theorem will act as a modalized fixed point combinator. And this will lead into various discussions of fixed point uniqueness results under various conditions. The crowning accomplishment in my eyes was the establishment of modal fixed points in certain quite general cases which are simultaneously initial algebras and terminal coalgebras.

(Also of some interest, one of the fixed points we can get in this fashion is an object X ~= [](X -> Omega), where Omega acts as a type of truth values. This models a certain pseudo-naive modal set theory (furthermore, because of the initiality and terminality results, the model will satisfy versions of both Foundation and Anti-foundation). One interesting observation is that in such a pseudo-naive modal set theory, the analogue of Russell's paradox ends up leading not to inconsistency, but just to Goedel's incompleteness theorem again.)

Anyway, I'll have to wait till later to talk about any of that (or, again, to make any of this readable; I understand that it probably is just a dump of text right now).
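The remark that the Löb morphism acts as a 'modalized fixed point combinator' has a well-known type-level echo: reading the box as a mere functor, the Löb axiom []([]A -> A) |- []A has the Haskell-folklore shape `loeb :: Functor f => f (f a -> a) -> f a`. The following Python transcription is only a decategorified pun on this shape, not the categorical construction described above; the functor is 'list', laziness is simulated with memoised on-demand cells, and all names are illustrative:

```python
def loeb(rules):
    """Given a list of rules, each of which may consult the whole
    evaluated list, return the evaluated list (the 'modal fixed point').
    Diverges if the dependencies among rules are circular."""
    cache = {}

    class Cells:
        def __getitem__(self, i):
            if i not in cache:
                cache[i] = rules[i](self)  # evaluate on demand, memoise
            return cache[i]

    cells = Cells()
    return [cells[i] for i in range(len(rules))]


# Spreadsheet-style example: later cells refer to earlier ones.
sheet = [
    lambda s: 1,            # cell 0: a constant
    lambda s: s[0] + 1,     # cell 1: cell 0 plus one
    lambda s: s[0] + s[1],  # cell 2: sum of cells 0 and 1
]
print(loeb(sheet))  # -> [1, 2, 3]
```

The point of the pun is only that "a box of rules, each consulting the whole evaluated box, collapses to an evaluated box" is exactly the shape P(P(1)) |- P(1) mentioned above.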
Author: David

Paul Taylor pointed out [on MO](http://mathoverflow.net/a/190767/4177) that Joyal proved that

> In the initial arithmetic universe $\mathcal{A}$, the internal hom-object $A(1,0)$ of its internal initial arithmetic universe $A$ (where $0,1:{\mathbf 1}\rightrightarrows A$ are the internal initial and terminal objects of $A$) is not isomorphic to the initial object $\mathbf 0$ of $\mathcal{A}$.

Author: Richard Williamson

Thank you very much, Sridhar, for your thoughts! This all sounds very interesting, and I would very much like to hear more. I wish you the best of luck with writing up your PhD if you find the opportunity and wish to do so!

It seems that your approach is a little different from the one I have in mind, which is great: I would be interested to explore both possibilities. It also means that I can hopefully avoid pursuing some line of thought which might overlap with material that you would like to include in your PhD, which I would not wish to do.
I would like to say that I agree entirely with the parenthetical remark at the end of your first comment: I also would like to abstract away from arithmetic, and it is important to me to carry out the proof in a predicative constructive setting.

Just a couple of questions for now. 1) Could you give an example (or more than one) of an introspective theory? I appreciate that this may not be easy, but it could be very helpful. 2) A related but presumably harder question: can you describe the 'theory of Gödel-Löb categories' explicitly? I think I see very roughly how you are trying to construct your modal operator, but get a bit lost in your second comment! If you would be able to elaborate upon the overall structure of your argument, being as precise as possible (just pointing out where there are things that need to be checked/which are just ideas at this stage), that would be extremely helpful.

To compare this approach with what I had in mind, perhaps one could say that what I am proposing is from one (but only one: what you are describing looks equally interesting) point of view more radical: I am trying to do away with the Löb's theorem/Hilbert-Bernays-Gödel conditions path to the second incompleteness theorem altogether. In outline, what I would like to do is the following:

1) Express the internal logic of T in an 'arithmetic theory S internal to T', namely understand Gödel numbering in a good way in categorical logic. As you say, 'arithmetic' should not ultimately be necessary, but I'll keep it for simplicity for now. This point is the one which seems most crucial to me, and is the one which my original post was about.

2) Carry out Lawvere's diagonal argument to prove the first incompleteness theorem _in T_, namely using the internal logic of T instead of the usual 'external' reasoning. I am reasonably confident that this step can be carried out without too much trouble given the first step.

Author: Richard Williamson

That remark of Taylor is too cryptic to be helpful to me, I'm afraid, David! If someone could explain carefully how this statement (or the slightly different one in a different comment of Taylor on MathOverflow) expresses Gödel's second incompleteness theorem, that would be a very good start!

Author: David

I gather it is something like as follows: An arithmetic universe is, we gather, the minimal theory that contains an internal model of itself and can interpret arithmetic. If the initial/free arithmetic universe exists and is nontrivial, then it has an internal model of itself in which 0=1. But Taylor is not exactly responsive to comments/questions at his answer, for whatever reason.
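For reference, the engine of the 'Lawvere diagonal argument' mentioned above is the fixed-point theorem, stated here only as a sketch in a cartesian closed setting for readability (weaker settings suffice):

```latex
% Lawvere's fixed-point theorem: if  phi : A -> Y^A  is point-surjective,
% then every endomap of Y has a fixed point.
\[
  \phi \colon A \to Y^{A} \ \text{point-surjective}
  \quad\Longrightarrow\quad
  \forall\, f \colon Y \to Y \ \ \exists\, y \colon \mathbf{1} \to Y
  \ \text{with}\ f \circ y = y .
\]
% Sketch: form  q = f . ev . (phi x id) . Delta : A -> Y;  pick a point
% a : 1 -> A  with  phi . a  the name of  q;  then  y = q . a  satisfies
% f(y) = f(phi(a)(a)) = q(a) = y.
% Incompleteness phenomena arise contrapositively: a fixed-point-free
% endomap of Y (e.g. negation on truth values) rules out such a phi.
```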
Format: MarkdownItexThank you, yes, I would guess that what you describe is roughly the statement that Taylor describes in his [other comment](http://mathoverflow.net/a/126782). I would guess that the statement in the comment you linked to is roughly that there is no proof of falsity. However, it is the precise details that are unclear to me. Particularly the distinction between logic in the starting category (say the initial arithmetic universe) and the logic in the internal model: making this precise is one of my questions in my original post, and I feel that it is the key to the story. One fundamental question is: what is meant precisely by 'an internal model' here? A related, but very basic, question of a more general nature, which I have not yet thought carefully through myself, but which could help to clarify things, is the following: is there a good notion of internal logic of an _internal_ category? How does this work precisely? What are the assumptions on the external category? Maybe the idea is then that the internal model is some kind of internal category which (in the case of the _initial_ arithmetic universe) has _the same logic_ as the external one (again, one important point would be to make this precise, perhaps using the formalism of hyperdoctrines). If the previous paragraph can be made precise, then it seems to me that Taylor's two statements both express (in different ways) the inconsistency of the logic of the external category. I don't see how they express Gödel's second incompleteness theorem, and indeed it would seem to me that Taylor claims in the first MathOverflow comment, the one I have linked to, that Joyal _proves_ that the logic of the external category is inconsistent, which would of course be nonsense (or revolutionary)! Presumably something I have written is therefore not what Joyal/Taylor have in mind. 
Perhaps the 'internal logic' does not agree with the external one, and the consistency statement says something about the internal logic. But then: what exactly does it say? How does proving that something (internal or external) is inconsistent correspond to the second incompleteness theorem? I would rather think that, using the diagonal argument and a Gödel numbering as I outlined above, one would show that an _assumption_ of consistency leads to a contradiction (or show something analogous, that may be better from a constructive point of view). Hopefully it is clear from all this that clarifying the basic questions around internal logic and internal models seems crucial. Thank you, yes, I would guess that what you describe is roughly the statement that Taylor describes in his other comment. I would guess that the statement in the comment you linked to is roughly that there is no proof of falsity. However, it is the precise details that are unclear to me. Particularly the distinction between logic in the starting category (say the initial arithmetic universe) and the logic in the internal model: making this precise is one of my questions in my original post, and I feel that it is the key to the story. One fundamental question is: what is meant precisely by 'an internal model' here? A related, but very basic, question of a more general nature, which I have not yet thought carefully through myself, but which could help to clarify things, is the following: is there a good notion of internal logic of an internal category? How does this work precisely? What are the assumptions on the external category? Maybe the idea is then that the internal model is some kind of internal category which (in the case of the initial arithmetic universe) has the same logic as the external one (again, one important point would be to make this precise, perhaps using the formalism of hyperdoctrines). 
If the previous paragraph can be made precise, then it seems to me that Taylor's two statements both express (in different ways) the inconsistency of the logic of the external category. I don't see how they express Gödel's second incompleteness theorem, and indeed it would seem to me that Taylor claims in the first MathOverflow comment, the one I have linked to, that Joyal proves that the logic of the external category is inconsistent, which would of course be nonsense (or revolutionary)! Presumably something I have written is therefore not what Joyal/Taylor have in mind. Perhaps the 'internal logic' does not agree with the external one, and the consistency statement says something about the internal logic. But then: what exactly does it say? How does proving that something (internal or external) is inconsistent correspond to the second incompleteness theorem? I would rather think that, using the diagonal argument and a Gödel numbering as I outlined above, one would show that an assumption of consistency leads to a contradiction (or show something analogous, that may be better from a constructive point of view). Hopefully it is clear from all this that clarifying the basic questions around internal logic and internal models seems crucial. CommentAuthorZhen Lin Author: Zhen Lin Format: MarkdownItexOne has to read carefully. > In the initial arithmetic universe $\mathcal{A}$, the internal hom-object $A(1,0)$ of its internal initial arithmetic universe $A$ (where $0,1:{\mathbf 1}\rightrightarrows A$ are the internal initial and terminal objects of $A$) is not isomorphic to the initial object $\mathbf 0$ of $\mathcal{A}$. On the one hand, that means $A (1, 0)$ is not empty. On the other hand, $A (1, 0)$ is not inhabited. So it follows that, in the internal logic of $\mathcal{A}$, the theory of arithmetic universes is neither consistent (i.e. "every arithmetic universe is non-degenerate") nor inconsistent (i.e. "every arithmetic universe is degenerate"). 
One has to read carefully. On the one hand, that means A(1,0)A (1, 0) is not empty. On the other hand, A(1,0)A (1, 0) is not inhabited. So it follows that, in the internal logic of 𝒜\mathcal{A}, the theory of arithmetic universes is neither consistent (i.e. "every arithmetic universe is non-degenerate") nor inconsistent (i.e. "every arithmetic universe is degenerate"). Format: MarkdownItexThank you, Zhen Lin, this is helpful. I was referring to the other statement of Taylor, in the comment that I linked to, but your point still stands. I believe that I now understand the structure of the argument, namely: 1) one proves that $G \cong 0$ does not hold (in the statement in Taylor's first comment), or that $A(1,0)$ is not empty (in the statement in Taylor's second comment), noting, as you remark, that this does not imply that the internal logic of the initial arithmetic universe is inconsistent, just that it is not provably consistent; 2) _if_ the internal logic of the initial arithmetic universe contained a proof of its own consistency, we _would_ have a proof that $G \cong 0$ (respectively $A(1,0)$ is empty). We conclude from 1) and 2) that the internal arithmetic universe does not contain a proof of its own consistency, as we wished to establish. If this is correct (and, again, the fifth paragraph of my previous post can be made precise), then I see now how Joyal/Taylor's statement expresses the second incompleteness theorem. My idea for proving 1) would still be as in message #5 in this thread, except that, given a precise understanding of internal model/its internal logic, some things have been resolved with regards to the first part of proof idea. It would remain to carry out the Gödel numbering, and then to try to carry out the second part, namely an internalisation of the diagonal argument à la Lawvere. Thank you, Zhen Lin, this is helpful. I was referring to the other statement of Taylor, in the comment that I linked to, but your point still stands. 
I believe that I now understand the structure of the argument, namely: 1) one proves that G≅0G \cong 0 does not hold (in the statement in Taylor's first comment), or that A(1,0)A(1,0) is not empty (in the statement in Taylor's second comment), noting, as you remark, that this does not imply that the internal logic of the initial arithmetic universe is inconsistent, just that it is not provably consistent; 2) if the internal logic of the initial arithmetic universe contained a proof of its own consistency, we would have a proof that G≅0G \cong 0 (respectively A(1,0)A(1,0) is empty). We conclude from 1) and 2) that the internal arithmetic universe does not contain a proof of its own consistency, as we wished to establish. If this is correct (and, again, the fifth paragraph of my previous post can be made precise), then I see now how Joyal/Taylor's statement expresses the second incompleteness theorem. My idea for proving 1) would still be as in message #5 in this thread, except that, given a precise understanding of internal model/its internal logic, some things have been resolved with regards to the first part of proof idea. It would remain to carry out the Gödel numbering, and then to try to carry out the second part, namely an internalisation of the diagonal argument à la Lawvere. Format: MarkdownItexRe #8 >what is meant precisely by 'an internal model' here? the theory of an arithmetic universe is, I gather (modulo precise implementation details), expressible in precisely the internal logic available in an arithmetic universe. I'm sorry for being so brief, but I only have the comments of Taylor and (in her articles) of Maietti to work from, neither of which actually cover Joyal's result, merely formalising the notion of arithmetic universe, based on examples in Joyal's talk. Re #8 what is meant precisely by 'an internal model' here? 
the theory of an arithmetic universe is, I gather (modulo precise implementation details), expressible in precisely the internal logic available in an arithmetic universe. I'm sorry for being so brief, but I only have the comments of Taylor and (in her articles) of Maietti to work from, neither of which actually cover Joyal's result, merely formalising the notion of arithmetic universe, based on examples in Joyal's talk. Format: MarkdownItexNo problem, I realise that this is the case. My questions were intended for anyone who might wish to chip in, and indeed partly rhetorical. I think that I begin to see how the formalism of internal models, and so on, needed here can be defined. I plan to continue to work on the details of trying to internalise Lawvere's diagonal argument, and will write again once I have something to add. In the meantime, thoughts and discussion are of course welcome! No problem, I realise that this is the case. My questions were intended for anyone who might wish to chip in, and indeed partly rhetorical. I think that I begin to see how the formalism of internal models, and so on, needed here can be defined. I plan to continue to work on the details of trying to internalise Lawvere's diagonal argument, and will write again once I have something to add. In the meantime, thoughts and discussion are of course welcome! CommentTimeJul 8th 2017 (edited Jul 8th 2017) Format: MarkdownItexBack then I had missed the point alluded to by Sridhar Ramesh in #2 and #3 above, regarding the perspective on incompleteness via Löb's theorem. I have given it a stub entry _[[Löb's theorem]]_ and linked to it from "incompleteness theorem". Experts please check and expand Back then I had missed the point alluded to by Sridhar Ramesh in #2 and #3 above, regarding the perspective on incompleteness via Löb's theorem. I have given it a stub entry Löb's theorem and linked to it from "incompleteness theorem". Experts please check and expand
Counting palindromes Alex Xela shows us the world of palindromic numbers, and calculates the chances of getting one Image: Flickr user Chris, CC BY 2.0 by Alex Burlton. Published on 12 March 2018. About a year ago, I was walking to work when I started to think about palindromes. This was partly because I'd seen one at work the day before, but mostly because my iPod had packed up on me, leaving me with a good half an hour with nothing better to do. A palindrome is a word like LEVEL, which reads the same forwards and backwards. Specifically, I was thinking about 5-digit numbers, so my palindromes were things like 01410. I'd seen one like this at work the previous day, which got me thinking-what were the chances of that happening? This turns out not to be a very interesting question to ask—for a random 5-digit number, the answer is simply 1 in 100. The first 3 digits can be whatever they like, and then the remaining two must match up with a 1-in-10 chance for each. The Sator Square (AD79), a word square containing a five-word Latin palindrome. M Disdero, CC BY-SA 3.0 I was still 25 minutes from work at this point, so I needed to make things harder. Next, I thought about what would happen if you threw re-arrangement into the mix. If a well-meaning person presented you with a bag of five numbers and wanted you to make a palindrome, what are the chances that you could do it? To answer this, we need to first define exactly what is meant by one of these 'bags'. In mathematical terms, I'm talking about unordered multisets. Unordered simply means that, for example, $\{1, 2, 3, 4, 5\}$ and $\{5, 4, 3, 2, 1\}$ are equal-they are considered to be two representations of the same set. And multiset means that the sets may contain repeated elements, so $\{1, 1, 2\}$ is a different set to $\{1, 2\}$. If it's possible to make a palindrome from the individual elements, then I'll call such a set palindromic. 
To see how complicated this makes the problem, let's think about something simpler for a moment. How many of these sets are there? When we were talking about 5-digit numbers, this was easy-there are 100,000 (it's simply all the numbers from 00000 to 99999). However, there are far fewer unordered sets of 5 digits. To see why this is true, consider the set $\{0, 3, 3, 8, 8\}$. This is a single set, but can produce 30 different 5-digit numbers (03388, 33088, 83830 to name a few). So, do we just divide by 30? Nope-the relationship isn't even linear. For $\{1, 2, 3, 4, 5\}$ there are 120 rearrangements, and for $\{0, 0, 0, 0, 0\}$ there is only 1. Clearly this isn't simple, although there is a known solution to this question which we'll come to a bit later. Breaking the problem down For the remainder of my commute I didn't calculate the answer, but I did settle on an approach to complete when I returned home that evening. To solve the problem, I broke it down into cases. In fact, as I was thinking about 5 'things' and how often they were repeated, I thought about it like a game of poker. As an example, let's think about the bags where all 5 digits are the same ("five-of-a-kind"). There are clearly 10 of these, and for any one of them you can make a palindrome. Similarly, there are 90 "four-of-a-kind" bags (of the form AAAAB: 10 choices for A, then 9 for B: $10 \times 9 = 90$) and 90 "full house" bags (of the form AAABB), all of which also produce palindromes. Continuing in this way, the remaining cases are AAABC ("three of a kind"), AABBC ("two pair"), AABCD ("one pair") and ABCDE ("high card"). Crunching the numbers reveals that 550 out of a total of 2002 bags are palindromic-the chance of finding a palindrome in the rephrased problem is now a little over 25%. Coincidentally, the total number of bags is itself a palindrome in this case! Well, that was a fun diversion—nothing more to see here, right? Wrong. 
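As a sanity check on the case-by-case count above, the problem is small enough to brute-force. This sketch (my own verification, not code from the article) enumerates all unordered bags of 5 digits and tests each one, using the fact that a multiset can be arranged into a palindrome iff at most one element occurs an odd number of times:

```python
from itertools import combinations_with_replacement
from collections import Counter

def is_palindromic(bag):
    # A multiset can be arranged into a palindrome iff at most one
    # element appears an odd number of times.
    odd_counts = sum(1 for c in Counter(bag).values() if c % 2 == 1)
    return odd_counts <= 1

# All unordered bags of 5 digits drawn from 0-9.
bags = list(combinations_with_replacement(range(10), 5))
palindromic = [bag for bag in bags if is_palindromic(bag)]
print(len(palindromic), len(bags))  # 550 2002
```

Running this confirms both numbers from the poker-style case analysis: 550 palindromic bags out of 2002.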
Now I wanted to solve the general problem, by which I mean two things. Firstly, let's throw away the digits 0-9 and have an arbitrary alphabet of $b$ characters. That could be A-Z, it could be base 64—whatever we feel like. And instead of bags of size 5, let's make that arbitrary as well and call it $n$. Is there a general solution for this probability in terms of $b$ and $n$? It turns out that yes, there is, but it would be a couple of weeks before I found it.

The general problem

The first step was to realise that counting unordered sets was something I'd encountered before, while studying combinatorics at university. I dug out my lecture notes and found the multinomial coefficient, which gives an expression in terms of $n$ and $b$ for how many unordered multisets there are in total: $$ \binom{n+b-1}{b-1} = \frac{(n+b-1)!}{n! \, (b-1)!}.$$ Plugging in $n = 5$ and $b = 10$ gives us 2002, which matches the number we manually worked out earlier on. To see why it's true in general, we must first rephrase the problem-something which happens a lot in combinatorics proofs. Suppose that we choose to represent our sets in a different way. We'll agree an order for our alphabet (eg A-Z, or 0-9), and use it to write out a particular set as follows. Starting with the first character in the alphabet, we'll draw a star for any that are present in our set. Then, we'll draw a single line to show that we're onto the next character, and repeat the process. This is much easier to demonstrate with an example: with the alphabet $\{0, 1, 2\}$, for instance, the set $\{0, 0, 2\}$ becomes $\star\star||\star$. With a little mathematical rigour, you can show that these two representations are equivalent, meaning we can instead choose to count how many ways we can draw out $n$ stars and $b-1$ lines. The total number of characters is $n+b-1$, and we must choose $b-1$ of them to be lines. This is our answer! This was brilliant—most of my work was done for me. Now I just needed to work out how to count the palindromic ones for the general case.
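The stars-and-bars count is easy to compute directly. A quick check (my own sketch, using Python's `math.comb`) that it reproduces the 2002 from earlier:

```python
from math import comb

def num_bags(n, b):
    # Stars and bars: multisets of size n over an alphabet of b symbols
    # correspond to arrangements of n stars and b-1 bars.
    return comb(n + b - 1, b - 1)

print(num_bags(5, 10))  # 2002
```

As a tiny extra check, `num_bags(3, 2)` gives 4, matching the four multisets {000, 001, 011, 111} over the alphabet {0, 1}.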
Two tricks

There were two 'tricks' that got me to the solution. The first was something I spotted almost straight away, but its significance didn't become clear to me until later. There is a noticeable pattern in the number of palindromic sets as $n$ oscillates between odd and even. To see what I mean, here are the numbers where $b = 10$ (the digits 0-9, say) and we let $n$ increase:

| Bag size, $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| Palindromic sets | 10 | 100 | 55 | 550 | 220 | 2200 | 715 | 7150 |

The number of palindromic sets multiplies by 10 every time we increase from an even number to an odd number. More generally, it multiplies by $b$, and when you think about it the reason becomes clear. In an even-sized set, every element must be paired for a palindrome to be possible. When we increase the size by one, we can put any of our $b$ elements in there and it'll still be palindromic-the new element can just go in the middle! Mathematically, if we label the set of palindromic sets over $n$ and $b$ as $P_b^{\,n}$ then the result is: $$ |P_b^{\,2n+1}| = b \, |P_b^{\,2n}| \qquad \forall n,\, b.$$ This fact is interesting, but it feels like we still have the bulk of the work to do. All we've shown is that if we can solve one of the even or odd cases, then we've finished the problem. And that's where the next trick came in, focusing on the even case. Again, I'll illustrate this with a simple example. This time I'm going to restrict us to an alphabet of $\{0, 1\}$ to make things more manageable:

(Figure: (a) all bags for $n=3$; (b) all palindromic bags for $n=6$.)

What's illustrated here is that the number of palindromic bags of even size is the same as the total number of bags that are half the size. Just take every bag from the space half the size and double up all the elements-everything you've just created is unique and palindromic, and you can show that every palindromic set can be created this way.
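Putting the two tricks together gives a direct count of palindromic bags: for even $n$ it is the number of bags of size $n/2$, and each odd size is $b$ times the even size below it. A short check of the table above (my own verification sketch):

```python
from math import comb

def palindromic_count(n, b):
    # Even n: pair everything up, so count the bags of half the size.
    # Odd n: any of the b symbols can sit unpaired in the middle.
    half_count = comb(n // 2 + b - 1, b - 1)
    return half_count if n % 2 == 0 else b * half_count

print([palindromic_count(n, 10) for n in range(2, 10)])
# [10, 100, 55, 550, 220, 2200, 715, 7150]
```

The output matches the table row for row, including the odd/even oscillation.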
And, crucially, we know the total number of bags in the smaller space because that's the multinomial coefficient. We've done it! The final form is as follows, where $M(n,b)$ is the multinomial coefficient for $n$ and $b$, and $P(n,b)$ is the probability that a random bag is palindromic: $$P(n,b) = \begin{cases} \cfrac{M(\frac{n}{2},b)}{M(n,b)}, &\text{ if }n\text{ is even,} \\\\ b \, \cfrac{M(\frac{n-1}{2},b)}{M(n,b)}, &\text{ if }n\text{ is odd.}\end{cases} $$ My final area of interest before putting this whole thing to bed was to explore the limits of my closed form in $n$ and $b$. I was able to graph the function (pictured below), and then thought I might as well do some analysis too. As it turned out, I was in for one last surprise. The limit as $b$ increases isn't very interesting-if we think about it logically for a second it's obvious what should happen. For a fixed size of bag, if we just keep increasing the number of characters in our alphabet then obviously the probability of being able to make a palindrome converges to 0 (and pretty quickly, I might add). However, things weren't so simple when looking at the $n$ case. I expected this to go to 0 as well—it feels like by making the bag larger you're making it increasingly difficult to find palindromes. This intuition is correct, but only partially. The probability does decrease as the bag size increases (ignoring the fluctuation between odd/even, of course). However, it doesn't decrease to 0-there is a positive limit! This was exciting for me as it was completely unexpected. It also explained why my attempted proofs that it converged to 0 hadn't been going very well…

(Figure: plots of the palindromic density function.)

The limit in $n$ comes out as follows-something that can be shown using L'Hôpital's rule: $$\cfrac{1}{2^{b-1}},\text{ if }n\text{ is even,} \qquad \cfrac{b}{2^{b-1}},\text{ if }n\text{ is odd.}$$ But why is this? Can you come up with a logical reason for why this should be the case?
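Numerically, the closed form and its limit in $n$ behave exactly as described. Writing the probability as the ratio of palindromic bags to all bags (again my own check, not code from the article):

```python
from math import comb

def num_bags(n, b):
    # Stars and bars: number of multisets of size n over b symbols.
    return comb(n + b - 1, b - 1)

def palindrome_prob(n, b):
    # Probability that a random unordered bag of size n is palindromic.
    half = num_bags(n // 2, b)
    total = num_bags(n, b)
    return half / total if n % 2 == 0 else b * half / total

print(palindrome_prob(5, 10))  # 550/2002, a little over 25%
# For large even n the probability approaches 1 / 2**(b-1):
print(palindrome_prob(10**6, 10), 1 / 2**9)
```

With $b = 10$, the even-$n$ probability settles near $1/512 \approx 0.00195$ rather than decaying to zero, which matches the surprising positive limit.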
An alternate problem

To conclude, I'll leave you with an alternative version of the same problem. At the time of writing, I haven't delved into it myself yet, so there may be other interesting results to be found. The re-phrased problem is this: Given an alphabet of $b$ characters, pick $n$ elements at random (allowing duplicates). What is the probability that you can make a palindrome with the resulting set of elements? There is a subtle difference here, which I'll leave for you to figure out. Is there a general solution for this version? Does it have interesting limits? Enjoy!

Alex Burlton: Alex studied maths at the University of Warwick, graduating in 2013. He is a software developer for a clinical software company based in Leeds. He is a qualified day skipper—when he's not working or solving puzzles, you can find him sailing the Caribbean!
Nano Express | Open | Published: 25 June 2019 Surface-Related Exciton and Lasing in CdS Nanostructures Xian Gao1,2, Guotao Pang1, Zhenhua Ni2 & Rui Chen1 Nanoscale Research Letters, volume 14, Article number: 216 (2019). In this report, a comparative investigation of the photoluminescence (PL) characteristics of CdS nanobelts (NBs) and nanowires (NWs) is presented. At low temperatures, emissions originating from the radiative recombination of the free A exciton, the neutral donor bound exciton, the neutral acceptor bound exciton, and the surface-related exciton (SX) are observed and analyzed through power-dependent and temperature-dependent PL measurements. We found that SX emission plays a predominant role in the emission of the CdS nanobelts and nanowires. There is a direct correlation between the SX emission intensity and the surface-to-volume ratio; that is, the SX emission intensity is proportional to the surface area of the nanostructures. At the same time, we found that the exciton-phonon interaction in the CdS NWs sample is weaker than that in the CdS NBs sample. Furthermore, lasing action has been observed in the CdS NBs sample at room temperature with a lasing threshold of 608.13 mW/cm2. However, there is no lasing emission from the CdS NWs sample. This can be explained by side effects (such as thermal effects) from surface deep-level transitions, which lower the damage threshold of the CdS NWs. Based on the observations and deductions presented here, SX emission significantly impacts the performance of nanostructures for lasing and light-emitting applications. Low-dimensional nanomaterials play an important role in photonic devices. Much research has been carried out to characterize their unprecedented properties, which derive from their quantum size in at least one dimension or their strong anisotropy [1,2,3,4].
The richness of nanostructures facilitates the observation of various interesting phenomena, which allows the integration of functional nanomaterials into a wide range of applications. Due to the large surface-to-volume ratio, the optical properties of low-dimensional semiconductors are strongly affected by material quality and surface morphology. To date, various low-dimensional semiconductors, such as CdS, ZnO, ZnS, and GaAs, have been used in micro/nano-devices [5,6,7]. As one of the most important applications, laser devices with low threshold, high reliability, and good stability are highly desired. In the past decade, research on nanostructure-based laser devices has focused on their ability to generate lasing, since they provide both optical gain media and natural optical cavities [1]. CdS is an important II–VI group semiconductor with a direct band gap of 2.47 eV at room temperature, which can be used as a high-efficiency optoelectronic material in the ultraviolet-visible range. So far, a large number of CdS nanostructures have been synthesized successfully, such as nanospheroids, nanorods, nanowires, nanotripods, nanocombs, and nanobelts [8]. In addition, low-dimensional CdS nanostructures have been proven to have potential applications in nano-optoelectronic devices, such as visible-range photodetection [9], optical refrigeration [10], waveguides, and laser devices [11, 12]. In recent years, lasing phenomena in CdS nanobelts (NBs) and nanowires (NWs) have been discovered and studied [13,14,15,16,17]. It is worth noting that the large surface-to-volume ratio and quantum confinement effects can strongly influence the band gap, the density of states, and the carrier dynamics in low-dimensional CdS nanostructures. In this case, the influence of surface states on carriers and phonons also increases.
Lattice vibrations and excitons can be localized at the surfaces of nanostructures, where they are known as surface optical phonon modes [18, 19] and surface-related excitons, respectively. Surface excitons are a kind of exciton bound at surface states, which could be related to Tamm states [20] or surface defects [21,22,23]. Therefore, the carrier dynamics of low-dimensional CdS nanostructures are more complex than those of bulk and thin-film materials due to surface states, thermal effects, and surface depletion [24, 25]. Although the optical properties of CdS nanostructures have been extensively studied by other researchers, the current understanding of the surface exciton and the related lasing mechanisms is still far from complete. It is necessary to conduct detailed carrier-dynamics studies of the surface exciton in order to understand the mechanisms behind the photoelectronic properties of nanoscale materials for further applications [26]. In this work, a systematic comparison of the optical properties of CdS NBs and NWs was performed. Surface-states-related exciton emission in the nanostructures is discussed by analyzing their photoluminescence (PL). High-density optical pumping experiments are used to clarify the effect of the surface-to-volume ratio on lasing. Our results indicate that the surface-states-related exciton in CdS nanostructures plays an important role in their optical properties, and that the associated lasing emission can be obtained at room temperature. These results also reveal the influence of the quantum confinement effect and the exciton-LO-phonon interaction in CdS NBs and NWs.

Material Growth

The CdS NBs and NWs were synthesized from pure CdS nanopowder (Alfa Aesar CdS powder) by physical evaporation using a solid tube furnace (MTI-OFT1200). The CdS NBs and CdS NWs were grown on Si (100) wafers, which were cut into 1 cm2 pieces before the experiment.
According to the SEM results, the CdS NBs have a width of about 1 μm and a thickness of about 70 nm, and the diameter of the CdS NWs is about 90 nm (as shown in Additional file 1: Figure S1).

Optical Characterization

All PL signals were dispersed by an Andor spectrometer, combined with a suitable optical filter, and then detected by a charge-coupled device (CCD) detector. A He-Cd laser with a laser line of 325 nm was used as the excitation source for the temperature- and power-dependent PL measurements. For the optical pumping experiment, a pulsed 355 nm laser with a pulse width of 1 ns and a repetition rate of 20 Hz was employed as the excitation source. For the temperature-dependent PL measurements, the sample was mounted inside a helium closed-cycle cryostat (Cryo Industries of America), and the temperature of the sample was controlled by a commercial temperature controller (Lakeshore 336). In the excitation-power-dependent PL measurements, a variable neutral density filter was used to obtain different excitation power densities. To ensure comparability of the PL results, the optical alignment was fixed during the measurements. Figure 1 shows the low-temperature (20 K) and room-temperature PL spectra of the CdS NBs and NWs samples. These PL spectra were all measured at an excitation power of 8 mW. For clarity, the PL spectral data in Fig. 1a are normalized and vertically offset. It can be seen that the spectrum of the CdS NBs displays several exciton-related emission structures. The peaks located at 2.552, 2.539, and 2.530 eV can be labeled as the free A exciton (FXA), the neutral donor-bound exciton (D0X), and the neutral acceptor-bound exciton (A0X), respectively. These peaks can be reasonably assigned according to their characteristic emission energies [12, 27]. Significantly, we assign the emission at 2.510 eV to surface-states-related exciton emission and label it SX; the detailed results will be discussed later.
As is known, the surface-related exciton is a kind of bound exciton associated with surface-related defects, as in studies of surface excitons in ZnO and other nanostructures [18,19,20]. Considering that the longitudinal optical (LO) phonon energy of CdS is about 38 meV, the peak on the lower-energy side (2.471 eV) can be assigned to the first-order LO phonon replica of SX. In contrast, the CdS NWs sample showed an asymmetric emission peak with a peak position at 2.513 eV. This peak can also be assigned to the recombination of the surface-states-related exciton (SX). Figure 1b displays the room-temperature PL spectra of the CdS NBs and NWs. Compared with the CdS NBs, the SX peak position shows a slight blueshift. It is worth mentioning that the SX emission intensity of the CdS NWs sample is about two times higher than that of the CdS NBs sample. The CdS NWs sample has a larger surface-to-volume ratio than the CdS NBs sample, so the luminescence of both nanostructures at room temperature could be surface-related, that is, related to surface excitons. Considering the SEM result in Additional file 1: Figure S1, we found it difficult to find bare Si substrate in the CdS NBs image; in contrast, bare substrate can be seen in the CdS NWs sample. This means that the coverage of the CdS NBs sample per unit area is much larger than that of the CdS NWs sample (as shown in Additional file 1: Figure S1). At the same time, under the same measurement conditions, the reflected laser intensity from the CdS NWs is 8.2 times that from the CdS NBs. Therefore, the CdS NWs sample should have higher PL efficiency, which is consistent with the speculation that the PL emission is related to surface excitons.

(Figure 1: PL spectra of CdS NBs and NWs (a) at 20 K and (b) at room temperature.)

To reveal the evolution of the emission in the CdS NBs and NWs samples, temperature-dependent PL measurements were performed and analyzed. As depicted in Fig.
2a, the FXA, D0X, and A0X peaks all redshift with increasing temperature, while in the CdS NBs sample, SX dominates the emission over the temperature range of 20 to 295 K. The results show that the intensities of the FXA, D0X, and A0X emissions drop dramatically as the temperature rises; their relative intensities decrease much faster than that of SX and vanish at around 100 K. The inset of Fig. 2a plots the evolution of these peak positions with temperature. To understand the emission mechanism behind the PL results, we use the following empirical formula to describe the temperature-induced bandgap shrinkage [28]:

$$ E_g(T)=E_g(0)-\frac{\alpha\Theta}{\exp(\Theta/T)-1} $$

a Temperature-dependent PL spectra of CdS NBs in the range from 20 to 295 K; the inset plots the FXA, A0X, and SX peaks as a function of temperature. b Temperature-dependent PL spectra of CdS NWs in the range from 20 to 295 K; the inset shows the SX peak redshift with temperature, and the solid red curve is the corresponding fit based on the Varshni equation

where Eg(0) is the bandgap at 0 K, α is the coupling constant between the electron (or exciton) and phonons, which reflects the strength of the exciton-phonon interaction, Θ is an averaged phonon energy, and T is the absolute temperature. The symbols in the inset of Fig. 2a are the experimental data of FXA, D0X, and SX, and the solid line is the fitting curve of SX. SX redshifts with increasing temperature and is well fitted by the above formula, indicating that SX is a near-band-gap radiative recombination. The fitted Eg(0) of SX is approximately 2.512 eV in the CdS NBs sample, which lies on the low-energy side of the FXA peak. The energy difference between SX and FXA is about 42 meV.
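The temperature dependence described by this formula can be fitted numerically with a standard least-squares routine. The sketch below uses purely illustrative synthetic peak positions (not the measured data) and assumes scipy is available; it only demonstrates how Eg(0), α, and Θ could be extracted.

```python
import numpy as np
from scipy.optimize import curve_fit

def bandgap(T, Eg0, alpha, theta):
    """Bose-Einstein-type bandgap shrinkage: Eg(T) = Eg(0) - a*Theta/(exp(Theta/T) - 1)."""
    return Eg0 - alpha * theta / (np.exp(theta / T) - 1.0)

# Synthetic SX peak positions (eV) vs temperature (K); alpha and Theta values
# are illustrative assumptions, not the fitted parameters of the paper.
T = np.array([20.0, 50.0, 80.0, 120.0, 160.0, 200.0, 250.0, 295.0])
E = bandgap(T, 2.512, 5.0e-4, 200.0)

popt, _ = curve_fit(bandgap, T, E, p0=[2.5, 1.0e-4, 150.0])
Eg0_fit, alpha_fit, theta_fit = popt
```

On noiseless synthetic data the fit simply recovers the generating parameters; with real peak positions, the quality of the Θ estimate depends strongly on having data points well above and below Θ.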
The SX emission gradually becomes dominant as the temperature rises, which further supports attributing SX to a strongly bound exciton. In comparison, the temperature-dependent PL spectra of the CdS NWs are shown in Fig. 2b. The PL spectrum shows only one emission peak over the temperature range of 20 to 295 K. This peak is located at 2.513 eV at 20 K and is assigned to SX emission. The SX peak position is also well fitted by Eq. 1, again confirming that SX emission is a near-band-gap transition. The fitting parameters for the CdS NBs and NWs are collected in Table 1. The difference in Eg(0) between the CdS NBs and NWs is 3 meV. Evidently, the exciton-phonon coupling constant α and the averaged phonon energy Θ of the CdS NWs are smaller than those of the CdS NBs. This suggests weakened exciton-LO-phonon coupling in the CdS NWs sample, caused by partial destruction of the long-range translational symmetry [28].

Table 1 The fitting parameters for CdS NBs and NWs samples

Figure 3a presents the power-dependent PL spectra of the CdS NBs sample at room temperature. The emission peak at 2.44 eV is the radiative recombination of SX, while the emission band centered at 2.06 eV may derive from deep-level defects such as Cd interstitials, dangling bonds, surface defects, or S vacancies [29,30,31]. The relationship between the excitation power I0 and the integrated emission intensity I can be expressed as follows [32]:

$$ I=\eta {I}_0^{\alpha } $$

a PL spectra of CdS NBs under different excitation powers at room temperature; the inset shows the integrated SX intensity versus excitation power. b PL spectra of CdS NWs under different excitation powers at room temperature; the inset shows the integrated SX intensity versus excitation power

where I0 is the excitation power density, η represents the emission efficiency, and the exponent α indicates the recombination mechanism.
The intensity of the emission peak keeps growing with increasing excitation power. The inset of Fig. 3a depicts the PL intensity of the SX emission in the CdS NBs as a function of laser power density, and the solid line presents the fitting result of Eq. 2. For the SX emission, the exponent α is about 1, which indicates that SX emission is still excitonic recombination at room temperature. In contrast to the CdS NBs results, deep-level emission (DLE) is more pronounced in the CdS NWs sample (as shown in Fig. 3b). This can be explained by the CdS NWs having more surface defects due to their larger surface-to-volume ratio. The inset of Fig. 3b gives the integrated PL intensity as a function of excitation power, which can also be fitted by Eq. 2. The fitted α for the CdS NWs sample is 1.07, which again supports the excitonic nature of the SX emission. Figure 4 displays the integrated PL intensity ratio of the DLE to the SX emission in the CdS NBs and NWs samples. It is clear that the DLE in the CdS NBs dominates the PL spectra under low excitation, since DLE/SX is higher than 1. The ratio then decreases with increasing excitation power, meaning that the SX emission grows faster than the DLE emission. On the other hand, the DLE ratio of the CdS NWs sample is higher, up to 2.8, and drops only slowly with increasing excitation power. This confirms that DLE dominates the spectra of the CdS NWs. Although the larger surface-to-volume ratio can induce more SX emission, the DLE becomes stronger at the same time. Evidently, more carriers in higher-energy states first relax to DLE states and then recombine radiatively (DLE emission) in the CdS NWs sample. A general side effect of DLE emission is heating, which may influence the optical properties of the CdS NBs and NWs.
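The power-law analysis with Eq. 2 amounts to a straight-line fit in log-log space, where the slope is the exponent α. A minimal sketch with synthetic numbers (η = 3 and α = 1.07 are made-up values used only to show the recovery of the exponent, not the measured series):

```python
import numpy as np

def fit_power_law(power, intensity):
    """Fit I = eta * I0**alpha by linear regression in log-log space.
    Returns (eta, alpha)."""
    alpha, log_eta = np.polyfit(np.log(power), np.log(intensity), 1)
    return np.exp(log_eta), alpha

# Synthetic excitation series with a near-unity exponent (excitonic regime).
I0 = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
I = 3.0 * I0**1.07  # hypothetical eta = 3, alpha = 1.07
eta, alpha = fit_power_law(I0, I)
```

An exponent close to 1 (as found for both samples here) is the usual signature of excitonic recombination; a clearly superlinear exponent would instead point to free-carrier or defect-mediated processes.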
Integrated PL intensity ratio of DLE emission and SX emission in CdS NBs and NWs samples at room temperature

Next, a 355-nm pulsed laser was used as the excitation source to investigate lasing action in the CdS nanostructures. Figure 5 shows the power-dependent PL spectra of the CdS NBs at room temperature. To obtain the lasing threshold, the integrated PL intensities are plotted as a function of the average power density, as shown in Fig. 5b. A superlinear increase of the emission intensity and sharp spectral features occur when the average power density reaches about 608.13 mW/cm2; the corresponding instantaneous power intensity at the lasing threshold is 3.04 GW/cm2. With further increases of the pump density, the center of the lasing peak tends to redshift (as shown in Fig. 5a), which suggests that the lasing peak can be ascribed to electron-hole plasma (EHP) recombination [33, 34]. However, when the power density increases to about 13 W/cm2 or more, the intensity of the lasing peak tends to decrease, and further increasing the power density damages the sample at the excitation laser spot. This can be ascribed to the thermal effect that grows with the pump density.

Power-dependent lasing spectra of CdS NBs at room temperature; the inset a shows the trend of the lasing emission peak, the inset b is the integrated peak intensity as a function of excitation power, and the inset c represents the PL intensity of the CdS NBs and NWs as a function of time, both samples excited under a 355 nm pulsed laser with a power density of 12.8 W/cm2

Unfortunately, no lasing action could be observed in the CdS NWs sample. It is worth mentioning that the damage threshold of the CdS NWs sample is about 2.65 mW/cm2, which is much lower than the lasing threshold of the CdS NBs sample. This can be ascribed to the side effect (thermal effects) of the massive DLE emission in the CdS NWs. To examine the stability of the lasing emission in the CdS NBs and of the SX emission in the CdS NWs, Fig.
5c depicts the PL intensity of the two samples as a function of time (from 0 to 200 s) under an excitation power density of 12.8 W/cm2. The CdS NBs sample shows stable lasing emission, while the CdS NWs show only PL emission whose intensity rapidly decreases with time from the beginning. These results mean that the SX-related lasing emission is stable in the CdS NBs sample, whereas a lower damage threshold limits the emission performance of the CdS NWs sample. In our case, the SX-related lasing emission can be enhanced by the larger surface-to-volume ratio, but the side effects (such as thermal effects) from surface deep-level transitions can become a critical issue hindering lasing applications.

In conclusion, we have investigated the PL properties of CdS NBs and NWs by means of temperature- and power-dependent PL spectra. The CdS NBs sample displays a more detailed spectral structure than the CdS NWs sample at 20 K. With increasing temperature, the other emissions (such as FXA, A0X, and D0X) fade out around 100 K, while the SX emission (surface-state-related exciton emission) survives and dominates the broadened PL spectra. We also found that the exciton-LO-phonon interaction in the CdS NWs sample is weaker than in the CdS NBs, which we attribute to the partial breaking of long-range translational symmetry. It is worth noting that stable lasing emission can be observed in the CdS NBs sample at room temperature, with a lasing threshold of about 608.13 mW/cm2 (average power density). However, there are no signs of lasing emission in the CdS NWs sample. This may be due to its relatively larger surface-to-volume ratio, which increases side effects such as thermal effects from surface deep-level transitions. These results also show that SX emission in CdS nanostructures can provide a convenient and high-efficiency channel for potential laser and light-emitting applications.
The authors declare that materials and data are promptly available to readers without undue qualifications in material transfer agreements. All data generated in this study are included in this article.

Abbreviations
A0X: Neutral acceptor-bound exciton
CCD: Charge-coupled device
D0X: Neutral donor-bound exciton
DLE: Deep-level emission
FXA: Free exciton A
LO phonon: Longitudinal optical phonon
NBs: Nanobelts
NWs: Nanowires
PL: Photoluminescence
SX: Surface-related exciton

References
Eaton SW, Fu A, Wong AB, Ning CZ, Yang P (2016) Nature Mater 1(6):16028
Glaser M, Kitzler A, Johannes A, Prucnal S, Potts H, Conesa-Boj S, Filipovic L, Kosina H, Skorupa W, Bertagnolli E, Ronning C, Morral AF, Lugstein A (2016) Nano Lett 16(6):3507–3513
Sarwar ATM, Carnevale SD, Yang F, Kent TF, Jamison JJ, McComb DW, Myers RC (2015) Small 11(40):5402–5408
Oener SZ, van de Groep J, Macco B, Bronsveld PC, Kessels WMM, Polman A, Garnett EC (2016) Nano Lett 16(6):3689–3695
Zhai T, Li L, Wang X, Fang X, Bando Y, Golberg D (2010) Adv Funct Mater 20(24):4233–4248
Dai J, Xu C, Xu X, Guo J, Li J, Zhu G, Lin Y (2013) ACS Appl Mater Interfaces 5(19):9344–9348
Zapf M, Ronning C, Röder R (2017) Appl Phys Lett 110(17):173103
Zhai T, Fang X, Li L, Bando Y, Golberg D (2010) Nanoscale 2(2):168–187
Zhou W, Peng Y, Yin Y, Zhou Y, Zhang Y, Tang D (2014) AIP Adv
4(12):123005
Morozov YV, Draguta S, Zhang S, Cadranel A, Wang Y, Janko B, Kuno M (2017) J Phys Chem C 121(30):16607–16616
Barrelet CJ, Greytak AB, Lieber CM (2004) Nano Lett 4(10):1981–1985
Liu B, Chen R, Xu XL, Li DH, Zhao YY, Shen ZX, Xiong QH, Sun HD (2011) J Phys Chem C 115(26):12826–12830
Liu YK, Zapien JA, Geng CY, Shan YY, Lee CS, Lifshitz Y, Lee ST (2004) Appl Phys Lett 85(15):3241–3243
Cao BL, Jiang Y, Wang C, Wang WH, Wang LZ, Niu M, Lee ST (2007) Adv Funct Mater 17(9):1501–1506
Dai J, Xu C, Li J, Tian Z, Lin Y (2014) Mater Lett 124:43–46
Zhang Q, Zhu X, Li Y, Liang J, Chen T, Fan P, Pan A (2016) Laser Photon Rev 10(3):458–464
Röder R, Ploss D, Kriesch A, Buschlinger R, Geburt S, Peschel U, Ronning C (2014) J Phys D: Appl Phys 47(39):394012
Gupta R, Xiong Q, Mahan GD, Eklund PC (2003) Nano Lett 3(12):1745–1750
Zhu JH, Ning JQ, Zheng CC, Xu SJ, Zhang SM, Yang H (2011) Appl Phys Lett 99(11):113115
Ohno H, Mendez EE, Brum JA, Hong JM, Agulló-Rueda F, Chang LL, Esaki L (1990) Phys Rev Lett 64(21):2555
Travnikov VV, Freiberg A, Savikhin SF (1990) J Lumin 47(3):107–112
Wang J, Zheng C, Ning J, Zhang L, Li W, Ni Z, Wang J, Xu S (2015) Sci Rep 5:7687
Bao W, Su Z, Zheng C, Ning J, Xu S (2016) Sci Rep 6:34545
Li D, Zhang J, Xiong Q (2012) ACS Nano 6(6):5283–5290
Duan X, Huang Y, Agarwal R, Lieber CM (2003) Nature 421(6920):241
He J, Carmen ML (2008) Nano Lett 8(7):1798–1802
Wang C, Ip KM, Hark SK, Li Q (2005) J Appl Phys 97(5):054303
Ning JQ, Zheng CC, Zhang XH, Xu SJ (2015) Nanoscale 7(41):17482–17487
Alqahtani MS, Hadia NMA, Mohamed SH (2017) Appl Phys A 123(4):298
Du KZ, Wang X, Zhang J, Liu X, Kloc C, Xiong Q (2016) Opt Eng
56(1):011109
Xu X, Zhao Y, Sie EJ, Lu Y, Liu B, Ekahana SA, Ju X, Jiang Q, Wang J, Sun HD, Sum TC, Huan CHA, Feng YP, Xiong QH (2011) ACS Nano 5(5):3660–3669
Bergman L, Chen XB, Morrison JL, Huso J, Purdy AP (2004) J Appl Phys 96(1):675–682
Özgür Ü, Alivov YI, Liu C, Teke A, Reshchikov MA, Doğan S, Avrutin V, Cho S-J, Morkoç H (2005) J Appl Phys 98(4):041301 (https://doi.org/10.1063/1.1992666)
Dai G, Wan Q, Zhou C, Yan M, Zhang Q, Zou B (2010) Chem Phys Lett 497(1-3):85–88

R. C. acknowledges the support from the National 1000 Plan for Young Talents.

Author affiliations: Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, People's Republic of China (Xian Gao, Guotao Pang, Rui Chen); School of Physics, Southeast University, Nanjing, 211189, People's Republic of China (Zhenhua Ni).

The work presented here was performed in collaboration with all the authors. All authors read and approved the final manuscript. This work is supported by the National Natural Science Foundation of China (11574130 and 11404161) and the Shenzhen Science and Technology Innovation Commission (Project Nos. KQJSCX20170726145748464, JCYJ20180305180553701, and KQTD2015071710313656). Correspondence to Zhenhua Ni or Rui Chen.

Figure S1. SEM images of a the CdS NBs sample and b the CdS NWs sample. (DOCX 477 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Accuracy and reproducibility of simplified QSPECT dosimetry for personalized 177Lu-octreotate PRRT

Michela Del Prete1,2, Frédéric Arsenault1,2, Nassim Saighi1,2, Wei Zhao3,4, François-Alexandre Buteau1,2, Anna Celler3,4 & Jean-Mathieu Beauregard (ORCID: orcid.org/0000-0003-2123-145X)1,2

Routine dosimetry is essential for personalized 177Lu-octreotate peptide receptor radionuclide therapy (PRRT) of neuroendocrine tumors (NETs), but practical and robust dosimetry methods are needed for wide clinical adoption. The aim of this study was to assess the accuracy and inter-observer reproducibility of simplified dosimetry protocols based on quantitative single-photon emission computed tomography (QSPECT) with a limited number of scanning time points. We also updated our personalized injected activity (IA) prescription scheme. Seventy-nine NET patients receiving 177Lu-octreotate therapy (with a total of 279 therapy cycles) were included in our study. Three-time-point (3TP; days 0, 1, and 3) QSPECT scanning was performed following each therapy administration. Dosimetry was obtained using small-volume-of-interest activity concentration sampling for the kidney, the bone marrow, and the tumor having the most intense uptake. The accuracy of simplified dosimetry based on two-time-point (2TP; days 1 and 3, monoexponential fit) or single-time-point (1TPD3; day 3) scanning was assessed, as well as that of hybrid methods based on 2TP for the first cycle and 1TP (day 1 or 3; 2TP/1TPD1 and 2TP/1TPD3, respectively) or no imaging at all (based on IA only; 2TP/no imaging (NI)) for the subsequent induction cycles. The inter-observer agreement was evaluated for the 3TP, 2TP, and hybrid 2TP/1TPD3 methods using a subset of 60 induction cycles (15 patients). The estimated glomerular filtration rate (eGFR), body size descriptors (weight, body surface area (BSA), lean body weight (LBW)), and products of both were assessed for their ability to predict IA per renal absorbed dose at the first cycle.
The 2TP dosimetry estimates correlated highly with those from the 3TP data for all tissues (Spearman r > 0.99, P < 0.0001), with small relative errors between the methods, particularly for the kidney and the tumor: median relative errors did not exceed 2%, and interdecile ranges spanned less than 6% and 4%, respectively, for the per-cycle and cumulative estimates. For the bone marrow, the errors were slightly greater (median errors < 6%, interdecile ranges < 14%). Overall, the strength of the correlations of the absorbed dose estimates from the simplified methods with those from the 3TP scans tended to progressively decrease, and the relative errors to increase, in the following order: 2TP, 2TP/1TPD3, 1TPD3, 2TP/1TPD1, and 2TP/NI. For the tumor, the 2TP/NI scenario was highly inaccurate due to the interference of the therapeutic response. There was excellent inter-observer agreement between the three observers, in particular for the renal absorbed dose estimated using the 3TP and 2TP methods, with mean errors of less than 1% and standard deviations of 5% or lower. The eGFR · LBW and eGFR · BSA products best predicted the ratio of IA to the renal dose (GBq/Gy) for the first cycle (Spearman r = 0.41 and 0.39, respectively; P < 0.001). For the first cycle, a personalized IA proportional to eGFR · LBW or eGFR · BSA decreased the range of delivered renal absorbed doses between patients as compared with a fixed IA. For the subsequent cycles, the optimal personalized IA could be determined based on the prior cycle's renal GBq/Gy with an error of less than 21% in 90% of patients. A simplified dosimetry protocol based on two-time-point QSPECT scanning on days 1 and 3 post-PRRT provides reproducible and more accurate dose estimates than techniques relying on a single time point for non-initial or all cycles, and results in limited patient inconvenience as compared to protocols involving scanning at later time points.
Renal absorbed dose over the 4-cycle induction PRRT course can be standardized by personalizing IA based on the product of eGFR with LBW or BSA for the first cycle, and on prior renal dosimetry for the subsequent cycles.

For patients with metastatic neuroendocrine tumors (NETs), peptide receptor radionuclide therapy (PRRT) with 177Lu-octreotate is an effective palliative treatment that rarely causes serious toxicity [1, 2]. PRRT has mostly been administered as a 4-cycle induction course using a fixed injected activity (IA) of not more than 7.4 GBq per cycle, in order not to exceed cumulative absorbed doses of 23 Gy to the kidney and 2 Gy to the bone marrow (BM) in the majority of patients [1,2,3,4]. However, it is well known that for these critical organs, and in particular for the kidney, which is the dose-limiting organ for most patients, the absorbed dose per IA is highly variable and usually lower than 23 Gy per 4 cycles, resulting in most patients being undertreated with such an empiric PRRT regime [5, 6]. We and others have proposed personalized PRRT (P-PRRT) protocols in which the number of fixed-IA cycles, or the IA per cycle, is modulated to deliver a safe prescribed renal absorbed dose, with the aim of maximizing tumor irradiation while keeping toxicity low [4, 6]. Such P-PRRT protocols require careful dosimetry monitoring, which is often perceived as a complex and resource-consuming process and therefore constitutes a barrier to wide clinical adoption. As a result, the clinical practice of "one-size-fits-all" PRRT prevails, at the potential cost of delivering a suboptimal treatment to most patients. We have been routinely performing post-PRRT dosimetry using quantitative single-photon emission computed tomography (QSPECT) combined with small-sphere volume of interest (VOI) activity concentration sampling [5, 6].
Aiming to simplify the dosimetry process and to reduce its clinical burden, we examined the impact of reducing the number of QSPECT sessions on the accuracy and the inter-observer reproducibility of the resulting dose estimates. In parallel, based on a large dataset from our growing cohort of patients treated with PRRT, we updated our personalized IA determination scheme.

Patients and PRRT cycles

From November 2012 to December 2017, 81 patients with progressive metastatic and/or symptomatic NET were treated with PRRT at CHU de Québec—Université Laval. Two patients who underwent only 1 cycle were excluded because of incomplete dosimetric data; therefore, data from 79 patients were analyzed. This includes 23 patients who received only empiric PRRT (i.e., a fixed IA of approximately 8 GBq, occasionally reduced) until March 2016, for whom the requirement for consent was waived due to the retrospective nature of the analysis. All other patients were enrolled in our P-PRRT trial (NCT02754297) and gave informed consent to participate (protocol described in [6]). Patient characteristics are reported in Table 1.

Table 1 Patient characteristics

Two hundred and eighty-four therapy cycles were administered during the study period. Five cycles in five patients were excluded from the analysis because of dosimetry protocol deviations or missing data. Among the 279 therapy cycles analyzed, 142 were empiric (median IA = 7.6 GBq; range, 3.8–9.1 GBq) and 137 were personalized (median IA = 9.0 GBq; range, 0.7–32.4 GBq). Anti-nausea premedication (ondansetron and dexamethasone) and a nephroprotective amino acid solution (lysine and arginine) were administered [1]. We administer a 4-cycle induction course for which the prescribed cumulative renal dose is 23 Gy (5 Gy at the first cycle; two-monthly intervals) and, in responders only, we offer consolidation, maintenance, and/or salvage cycles (prescribed renal dose of 6 Gy each; personalized intervals).
As previously described, prescribed renal absorbed radiation doses were reduced in patients with renal or bone marrow impairment [6].

Reference dosimetry method

At each cycle, after therapeutic administration of 177Lu-octreotate, QSPECT/computed tomography (QSPECT/CT) scans were performed at approximately 4 h (day 0), 24 h (day 1), and 72 h (day 3) using a Symbia T6 camera (Siemens Healthcare, Erlangen, Germany) (Fig. 1) [6, 7]. Following the same data processing as described in [7], the dead-time-corrected reconstructed images were converted into the positron emission tomography (PET) DICOM format, which includes a "rescale slope" parameter that converts count data into Bq/mL and also enables display of QSPECT images as standardized uptake values normalized for body weight (SUV).

Post-treatment serial QSPECT/CT performed at (from left to right) 5, 24, and 70 h after a 22.0 GBq 177Lu-octreotate administration in a 55-year-old male with metastatic NET of unknown origin. Small volumes of interest (2-cm diameter) were placed over tissues of interest. Left kidney (red arrows), L5 bone marrow cavity (orange arrows), and dominant tumor (green arrows) VOIs are pointed out on anterior maximum intensity projections (top row) and selected transaxial fusion slices (mid and bottom rows). QSPECT images are normalized using an upper SUV threshold of 7.
During this consolidation cycle, the personalized injected activity allowed the delivery of 6.1 Gy (6.0 Gy prescribed) to the kidney.

These three imaging time points were initially selected for the following practical reasons: (1) the day 0 scan does not incur any additional hospital visit for the patient and captures the early kinetics of the radiopharmaceutical, and (2) performing scans beyond day 3 would be difficult for logistical reasons (PRRT being administered on Tuesday, day 4 or 5 would fall on the weekend) and would inconvenience patients (in particular out-of-city patients, who would need to prolong their stay). Since we routinely perform dosimetry based on the data acquired at these three time points (3TP) in our clinical practice, this approach constituted the reference method for the present analysis. In brief, at each time point, we sampled the activity concentration in tissues of interest (Fig. 1), including both kidneys (areas of representative parenchymal uptake), the BM (L4 and L5 vertebral bodies, or elsewhere when the latter were obviously affected by metastases), and the tumor having the most intense uptake (Tumormax), using 2-cm (4.2 cm3) spherical VOIs, as previously described [5, 6]. This was performed using either Hybrid Viewer (Hermes Medical Solutions, Stockholm, Sweden) or MIM Encore (MIM Software Inc., Cleveland, OH, USA) software. As previously described in [6], we also computed the total body retention in order to compute the cross-dose component of the BM absorbed dose (BMcross), which we added to the self-dose component (BMself) to estimate the total BM absorbed dose (BMtotal). Based on these 3TP data, trapezoidal-monoexponential (3TPTM) time-activity curves (TACs) were constructed using the following procedure (Fig. 2). For each organ/tumor, a constant mean SUV was assumed from the time of 177Lu-octreotate injection until the time of the day 0 scan (approximately 4 h).
This was followed by a linear (trapezoid) fit between the SUVs of the day 0 and day 1 scans. Then, a monoexponential curve was fit to the day 1 and day 3 data, resulting in an effective-decay model being used from day 1 onwards (trapezoidal-monoexponential; 3TPTM; Fig. 2). However, in cases where the day 3 SUV was higher than that of day 1, we assumed a linear SUV variation between days 1 and 3, followed by the physical decay of activity (i.e., λbiol = 0, λeff = λphys) from day 3 to infinity (trapezoidal-constant; 3TPTC).

Time-activity curves (TACs) of the renal (a), tumor (b), and bone marrow (c) activity concentrations and of the whole-body retention (d) over time for the patient case illustrated in Fig. 1. TACs in MBq/cc or MBq (red) and SUV or percentage of injected activity (%IA) (blue) are illustrated for the three-time-point (3TP; solid lines) and two-time-point (2TP; dashed lines) methods

Then, the area under each TAC was integrated and multiplied by the appropriate activity concentration dose factor (ACDF). The values of these factors were derived from OLINDA/EXM software data (Vanderbilt University, Nashville, TN, USA), as previously described [6]: 84 mGy · g/MBq/h for Tumormax and 87 mGy · g/MBq/h for the kidneys and BMself. For BMcross, we integrated the total body activity over time and multiplied it by a dose factor of 1.09 × 10−4 mGy/MBq/h for males or 1.29 × 10−4 mGy/MBq/h for females, to account for their different gamma fractions of energy deposition from the whole body to the BM [6].

Simplified dosimetry methods

From our experience, and as suggested by others, the day 0 data, although capturing the rapid kinetics of the radiopharmaceutical (which include competing accumulation and rapid washout), contribute little to the area under the TAC, which is mostly determined by the slow washout kinetics and tends to follow a monoexponential decay beyond 24 h [8].
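The trapezoidal-monoexponential integration described above can be sketched numerically. The following is an illustration under stated assumptions (unit tissue density so MBq/cc ≈ MBq/g, a 177Lu physical half-life of 6.647 days, and entirely synthetic concentration values), not the authors' implementation:

```python
import math

LU177_HALF_LIFE_H = 6.647 * 24.0  # 177Lu physical half-life in hours (assumed value)

def dose_3tp(t0, c0, t1, c1, t3, c3, acdf):
    """Absorbed dose (mGy) from a 3-point TAC (times in h, activity
    concentrations in MBq/cc, taken as ~MBq/g for unit-density tissue).
    Constant SUV from 0 to t0, trapezoid from t0 to t1, then either a
    monoexponential tail fitted to days 1 and 3 (3TPTM) or, if activity
    did not decrease, a linear segment t1..t3 followed by physical decay
    only (3TPTC)."""
    area = c0 * t0 + 0.5 * (c0 + c1) * (t1 - t0)   # head: constant + trapezoid
    if c3 < c1:                                    # 3TPTM: effective decay from day 1 onward
        lam = math.log(c1 / c3) / (t3 - t1)
        area += c1 / lam
    else:                                          # 3TPTC: lambda_biol = 0 beyond day 3
        lam_phys = math.log(2.0) / LU177_HALF_LIFE_H
        area += 0.5 * (c1 + c3) * (t3 - t1) + c3 / lam_phys
    return acdf * area                             # ACDF in mGy·g/MBq/h

# Synthetic kidney TAC: 1.2 MBq/cc at 4 h, 1.0 at 24 h, 0.5 at 72 h; ACDF = 87 mGy·g/MBq/h.
kidney_dose_mgy = dose_3tp(4.0, 1.2, 24.0, 1.0, 72.0, 0.5, 87.0)  # ≈ 8356 mGy
```

With these made-up numbers the effective half-life between days 1 and 3 is 48 h, and the monoexponential tail contributes most of the integral, which is why dropping the day 0 point changes the estimate so little.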
Accordingly, we eliminated the day 0 data from all simplified dosimetry approaches. A total of five methods were investigated, as detailed below.

2TP: Two-time-point (2TP) dosimetry estimates were obtained using VOI data from the day 1 and day 3 scans. From the time of administration to infinity, monoexponential (2TPM) effective decay was applied, except in cases of biological accumulation of activity, i.e., when the SUV of the tissue increased between day 1 and day 3. In such cases, we assumed a SUV equal to the day 3 SUV (λbiol = 0) from the time of treatment administration to infinity, and thus applied only physical decay (λeff = λphys; 2TPC). The 2TP method is the combination of 2TPM and 2TPC.

1TPD3: As proposed by Hänscheid et al., we estimated doses using a single-time-point method based on the day 3 data (1TPD3) [8]. In this method, the activity concentration (MBq/cc) was multiplied by the time at which the day 3 scan was performed (h) and by 0.25 Gy · g/MBq/h (based on Eq. 8 in [8]). To compute BMcross, the total whole-body activity (MBq) was multiplied by the imaging time (h) and by 3.2 × 10−7 Gy/MBq/h for males or 3.7 × 10−7 Gy/MBq/h for females, i.e., the gamma fraction of energy deposition from the whole body to the BM, multiplied by 0.25 Gy · g/MBq/h (from [8], as above) and divided by 87 mGy · g/MBq/h (the ACDF of the BM and kidney).

2TP/1TPD1: We evaluated a hybrid dosimetry protocol based on the 2TP method for the first cycle, as described above, with a single scan on day 1 for the subsequent induction cycles. In this scenario, the absorbed dose to a given tissue during the second and subsequent induction cycles was obtained by applying the monoexponential curve corresponding to the effective decay determined for this tissue during the first cycle to the activity concentration observed on day 1 of the subsequent cycle.

2TP/1TPD3: This is the same as 2TP/1TPD1, but the single scan on subsequent cycles was that performed on day 3.
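The two simplest estimators above admit compact closed forms: 2TPM integrates a monoexponential fitted through the day 1 and day 3 points from time zero to infinity, while 1TPD3 is simply concentration × time × 0.25 Gy·g/MBq/h. The sketch below uses made-up day 1/day 3 kidney values and is only an illustration of the arithmetic:

```python
import math

def dose_2tp(t1, c1, t3, c3, acdf):
    """2TPM: monoexponential through (t1, c1) and (t3, c3), applied from
    t = 0 to infinity; dose (mGy) = ACDF * C(0) / lambda_eff.
    Assumes c3 < c1 (decreasing activity; the 2TPC fallback is not shown)."""
    lam = math.log(c1 / c3) / (t3 - t1)   # effective decay constant, 1/h
    c0 = c1 * math.exp(lam * t1)          # back-extrapolated concentration at t = 0
    return acdf * c0 / lam

def dose_1tp_d3(c3, t3):
    """Hanscheid-style single-point estimate, in mGy:
    D = C(t) * t * 0.25 Gy·g/MBq/h = C(t) * t * 250 mGy·g/MBq/h."""
    return c3 * t3 * 250.0

# Synthetic kidney values: 1.0 MBq/cc at 24 h and 0.5 MBq/cc at 72 h; ACDF = 87 mGy·g/MBq/h.
d_2tp = dose_2tp(24.0, 1.0, 72.0, 0.5, 87.0)   # ≈ 8520 mGy
d_1tp = dose_1tp_d3(0.5, 72.0)                 # 9000 mGy
```

For these synthetic numbers the two estimates differ by a few percent, of the same order as the median kidney errors reported in the comparison below.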
2TP/NI: Similar to the two previous methods, this method was also based on 2TP scanning for the first cycle, but no imaging (NI) was performed for the subsequent cycles. In this case, the absorbed dose per IA during the subsequent cycles was simply assumed to be equal to that delivered during the first cycle.

Cumulative renal, BMtotal, and Tumormax absorbed doses were compiled for all patients who received three or four induction cycles (n = 65). Per-cycle and cumulative doses resulting from each of the simplified dosimetry methods were compared with those obtained using the reference (3TP) method, and relative errors were calculated.

Inter-observer variability

For 60 induction cycles in 15 patients, the dosimetry analysis was performed independently by three observers having different backgrounds and purposely varied levels of experience in internal dosimetry. Observer 1 (M.D.P.), a certified endocrinologist, current PRRT Fellow, and Ph.D. student, performed 258 of the 279 primary analyses described in this paper and, as such, accumulated the most experience with this dosimetry procedure. Observer 2 (F.A.) was a certified nuclear medicine physician and current Nuclear Oncology fellow who performed 21 primary analyses. Observer 3 (N.S.) was an M.D. student who was new to both nuclear medicine and dosimetry and who received only short training. Relative errors of per-cycle and cumulative absorbed doses between each pair of observers were computed for the reference method (3TP) and the two most accurate simplified methods.

Personalized 177Lu-octreotate activity prescription

We previously derived a model based on the body surface area (BSA) and the estimated glomerular filtration rate (eGFR; according to the CKD-EPI creatinine equation [9]) to determine the personalized 177Lu-octreotate activity to be administered at the first cycle [6]. Using data from our entire cohort of 79 patients, we aimed to formulate a simpler prescription equation.
To this end, we correlated the ratio of IA to the renal absorbed dose estimated from the first cycle (GBq/Gy, obtained by the 3TP or the 2TP method) with the patient's weight, lean body weight (LBW), BSA, eGFR, and the products of eGFR with each of the three body size descriptors. Then, for each of these seven correlations, we performed a linear regression forced through the origin (eliminating the intercept) and calculated the relative errors of the renal GBq/Gy predicted using the slope of the linear regression. We also compared the accuracy of predicting the renal GBq/Gy of any given non-initial cycle from that of the previous cycle, or from the average renal GBq/Gy of the two previous cycles, as we have initially been doing in our P-PRRT trial [6].

Data are presented as median and interdecile range, or as mean ± SD, according to the data distribution as assessed by the D'Agostino-Pearson omnibus normality test. Ranges are also reported. Pearson or Spearman correlations were used depending on the normality of the data. A difference was considered statistically significant if the P value was below 0.05. Correlations and linear regressions were performed using GraphPad Prism software (version 7, GraphPad Software Inc., La Jolla, CA, USA).

Accuracy of simplified dosimetry methods

Tissue-specific effective half-lives derived from monoexponential fitting of the activity concentrations measured on days 1 and 3 are presented in Table 2. The per-cycle dosimetry results obtained with the 3TP and the 2TP methods are summarized in Table 3. For the kidney, there was only one patient case in which no biological elimination of activity between days 1 and 3 was observed. There were 30 such cases for BMself and 26 for Tumormax. In these cases, the 3TPTC and 2TPC methods were applied, while the 3TPTM and 2TPM methods were used for all other cases.
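A zero-intercept least-squares regression of this kind has a closed-form slope. The sketch below uses entirely hypothetical first-cycle values (the real cohort data are in the paper) and shows how the slope and the relative prediction errors could be computed:

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope k for y ~ k*x with the intercept forced to zero:
    k = sum(x*y) / sum(x**2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical predictor x = eGFR * LBW and response y = renal GBq/Gy at cycle 1.
x = np.array([3000.0, 4500.0, 5200.0, 6100.0, 7400.0])
y = np.array([1.1, 1.6, 1.8, 2.2, 2.6])

k = slope_through_origin(x, y)
rel_err = (k * x - y) / y          # relative errors of the predicted renal GBq/Gy
ia_per_gy_new = k * 5000.0         # predicted renal GBq/Gy for a hypothetical new patient
```

Multiplying the predicted GBq/Gy by the prescribed renal dose for the cycle would then give the personalized IA, which is the logic of the prescription scheme described above.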
The 3TP and 2TP data (i.e., 3TPTM pooled with 3TPTC, and 2TPM pooled with 2TPC) were very highly correlated (Spearman r > 0.99, P < 0.0001 for all tissues). The median relative errors between the methods were small, particularly for the kidney and the tumor (≤ 2%).

Table 2 Tissue-specific effective half-lives derived from activity concentrations at day 1 and day 3, and absorbed doses per injected activity for the 3TP reference method (n = 279)

Table 3 Per-cycle dosimetry estimates obtained with three-, two-, and one-time-point methods (n = 279)

The results of applying the single-measurement method proposed by Hänscheid et al. [8] to our day 3 QSPECT uptake data (1TPD3) are shown in column 8 of Table 3. We obtained the same median error for the kidney as Hänscheid et al. (6% at 72 h), with a comparable interdecile range. Thus, our dosimetry results, based on tomographic data acquisition, validate this practical approach, which was devised using planar imaging data. Further, despite the different imaging techniques, we obtained a similar median effective half-life for the kidney (47 h, Table 2; vs. 51 h in [8]), although we observed wider inter-patient variability. For Tumormax, the 1TPD3 technique was slightly less accurate when applied to our data, but the range of errors was comparable. Table 4 shows our results for the hybrid methods. In all cases, the 2TP method was applied in the first cycle. In this analysis, 2TP/1TPD3 was found to be more accurate than both 2TP/1TPD1 and 2TP/NI. The latter method yielded particularly inaccurate results for Tumormax, due to the interference of therapeutic response. Note that, for all tissues, we obtained median errors closer to zero with 2TP/1TPD3 than with 1TPD3 (Table 3). For the kidneys, among all the simplified dosimetry methods, 2TP was found to be the most accurate when compared with 3TP on a per-cycle basis (Fig. 3).
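With only two measurements, the monoexponential fit that yields the effective half-lives of Table 2 is exact: the decay constant follows directly from the ratio of the two activity concentrations. A sketch under that assumption (times in hours; the sample values are illustrative, not patient data):

```python
import math

def effective_half_life(c1, t1, c3, t3):
    """Effective half-life (h) from two activity concentrations, assuming
    monoexponential decay c(t) = c0 * exp(-lam * t) with c1 > c3 and t3 > t1."""
    lam = math.log(c1 / c3) / (t3 - t1)  # effective decay constant (1/h)
    return math.log(2.0) / lam

# Example: the concentration halves between the ~24 h and ~72 h scans.
t_eff = effective_half_life(100.0, 24.0, 50.0, 72.0)  # 48 h
```

When no decrease is observed between the two scans (as in the BMself and Tumormax cases noted above), this fit is undefined, which is why the constant-activity (TC/C) variants were used there instead.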
Table 4 Per-cycle dosimetry estimates obtained with hybrid methods based on two time points for the first cycle and one time point or no imaging at all for subsequent cycles (induction cycles only, n = 173)

Fig. 3 Comparison of the relative errors of per-cycle renal absorbed dose estimates obtained by the simplified methods relative to the three-time-point (3TP) method. Boxes represent the interquartile range, and whiskers the interdecile range (2TP and 1TPD3, n = 279; 2TP/1TPD1, 2TP/1TPD3, and 2TP/NI, n = 173)

As the aim of the induction course of our P-PRRT regime is to deliver a given prescribed renal absorbed dose, e.g., 23 Gy in patients without significant bone marrow or renal function impairment, we compared the accuracy of the simplified dosimetry methods with that of the 3TP method for the assessment of cumulative dosimetry in patients having completed at least three of the four intended induction cycles (Table 5). Of all the simplified approaches, the 2TP method was by far the most accurate, in particular for the kidneys (Fig. 4). Using the latter, the cumulative renal and Tumormax absorbed doses for all patients agreed to within 9% and 5%, respectively, with the corresponding absorbed doses derived from 3TP scanning (Table 5), confirming the small influence of the day 0 scan and early kinetics on the precision of dosimetry estimates. Even the total BM dose was quite accurately estimated using the 2TP protocol. When compared with 2TP, median errors increased and error ranges widened for all 1TP-based techniques, and even more so if no imaging was done on subsequent cycles.

Table 5 Cumulative dosimetry estimates obtained in patients having completed three or four evaluable induction cycles (n = 65)

Fig. 4 Comparison of the relative errors of per-induction-course cumulative renal absorbed dose estimates obtained by the simplified methods relative to the three-time-point (3TP) method.
Boxes represent the interquartile range, and whiskers the interdecile range (n = 65)

Table 6 illustrates the inter-observer variability of the per-cycle and cumulative renal, BMtotal, and Tumormax absorbed doses assessed independently by the three observers in 15 patients (60 cycles) using three methods: 3TP, 2TP, and 2TP/1TPD3. There was excellent inter-observer agreement between all observers for the kidney using the three methods, the best agreement being for the cumulative renal dose estimated using the 3TP and 2TP methods. The span of errors was larger for Tumormax, owing to variations in the precise placement of the VOI over the most intense region of the dominant lesion. For both BMtotal and Tumormax, there was a trend towards lower inter-observer agreement when the least experienced observer (observer 3) was involved. BMtotal reproducibility suffered from the low-level and noisy uptake data in the BM compartment; consequently BMself, the dominant component of BMtotal, was more sensitive to the position of the VOI than were the absorbed doses of the other tissues of interest. Nevertheless, the inter-observer agreement on the cumulative BMtotal dose was fair.

Table 6 Inter-observer variability of dosimetry estimates in 60 induction cycles received by 15 patients

Accuracy of activity prescription at first and subsequent cycles

Correlations and linear regression slopes between the body size descriptors, the eGFR, or their products vs. the IA per renal absorbed dose at the first induction cycle are reported in Table 7. The strongest correlations were found when using either eGFR · LBW or eGFR · BSA (Fig. 5) as predictors of renal GBq/Gy, and both seem equally appropriate for personalized IA prescription at the first cycle (Fig. 6).
We therefore elected to continue using eGFR and BSA for determining IA at the first cycle and updated our initial formula (found in [6]) with this simpler equation:

$$ \mathrm{Personalized}\ \mathrm{IA}\ \left(\mathrm{GBq}\right)=K\cdotp \mathrm{eGFR}\ \left(\mathrm{mL}/\min /1.73\ {\mathrm{m}}^{2}\right)\cdotp \mathrm{BSA}\ \left({\mathrm{m}}^{2}\right)\cdotp \mathrm{Prescribed}\ \mathrm{renal}\ \mathrm{absorbed}\ \mathrm{dose}\ \left(\mathrm{Gy}\right) $$

where K = 0.012 GBq · min · 1.73/mL/Gy, i.e., the slope of the linear regression (Fig. 5).

Table 7 Correlation between body size predictors, eGFR, and the IA per renal absorbed dose (GBq/Gy) at the first induction cycle (n = 77)

Fig. 5 Injected activity per renal absorbed dose at the first cycle vs. the product of body surface area and estimated glomerular filtration rate (n = 77). There was a moderate correlation between variables (Spearman r = 0.39, P = 0.0005). The slope of the linear regression curve forced through the origin (solid line; 95% confidence interval, dashed lines), which was 0.012 GBq · min · 1.73/mL/Gy, is to be used to adjust the injected activity at the first cycle in a personalized PRRT protocol

Fig. 6 Comparison of the renal absorbed dose delivered during the first cycle of fixed injected activity (IA) vs. personalized PRRT regimes (n = 77). In the latter, the prescribed renal absorbed dose is 5 Gy and the IA is adjusted based on weight, lean body weight (LBW), body surface area (BSA), estimated glomerular filtration rate (eGFR), or the product of eGFR and a body size descriptor. For comparison, a fixed IA of 9.1 GBq would yield a median renal absorbed dose of 5 Gy
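The updated first-cycle prescription is a direct product of three measured quantities and the population slope K. A minimal sketch, using K = 0.012 GBq · min · 1.73/mL/Gy as reported above (the function name and the example patient values are ours):

```python
K = 0.012  # GBq · min · 1.73 / mL / Gy, slope of the regression through the origin

def first_cycle_activity(egfr, bsa, prescribed_renal_dose_gy, k=K):
    """Personalized first-cycle IA (GBq) = K * eGFR (mL/min/1.73 m2)
    * BSA (m2) * prescribed renal absorbed dose (Gy)."""
    return k * egfr * bsa * prescribed_renal_dose_gy

# A hypothetical patient with eGFR 90 and BSA 1.8 m2, prescribed 5 Gy to the kidney:
ia = first_cycle_activity(90.0, 1.8, 5.0)  # 9.72 GBq
```

Note that a patient with this eGFR · BSA product receives an IA close to the 9.1 GBq fixed-activity comparison, while patients with lower renal clearance or smaller body size receive proportionally less.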
For the subsequent cycles, the median error of the renal GBq/Gy relative to that of the previous cycle was − 0.7% (interdecile range, − 21.0 to 20.2%; range, − 49.1 to 54.0%; n = 194) for the 3TP method, − 0.8% (interdecile range, − 20.4 to 19.0%; range, − 49.4 to 50.2%; n = 194) for the 2TP method, and − 0.3% (interdecile range, − 20.5 to 18.6%; range, − 40.2 to 87.4%; n = 168) for the 2TP/1TPD3 method. When, for the third and fourth induction cycles, the average of the renal GBq/Gy of the two prior cycles was used instead, the resulting errors were − 0.9% (interdecile range, − 21.3 to 19.1%; range, − 49.9 to 54.4%; n = 192), − 0.6% (interdecile range, − 21.4 to 18.6%; range, − 48.5 to 54.5%; n = 192) and − 0.6% (interdecile range, − 19.8 to 19.6%; range, − 45.7 to 75.3%; n = 166), respectively. Hence, these results convinced us that, unlike in our initial P-PRRT protocol, averaging the renal GBq/Gy from the two prior cycles (instead of using just the previous one) does not significantly improve the precision of the IA prescription. The IA prescription for the subsequent cycles has been updated in our P-PRRT protocol as follows:

$$ \mathrm{Personalized}\ \mathrm{IA}\ \left(\mathrm{GBq}\right)=\mathrm{Prior}\ \mathrm{cycle}\ \mathrm{IA}\ \mathrm{per}\ \mathrm{renal}\ \mathrm{dose}\ \left(\mathrm{GBq}/\mathrm{Gy}\right)\cdotp \mathrm{Prescribed}\ \mathrm{renal}\ \mathrm{dose}\ \left(\mathrm{Gy}\right) $$

The widely adopted one-size-fits-all PRRT protocol, i.e., four induction cycles of 7.4 GBq 177Lu-octreotate, was initially devised in 2001 based on the dosimetry data of only five patients [3]. Since that time, dosimetry has not been routinely performed in most centers (including for PRRT administered in the NETTER-1 trial [2]).
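For non-initial cycles, the prescription above scales the previous cycle's observed renal radiosensitivity (GBq per Gy) by the prescribed renal dose. A sketch of that update rule (names and example numbers are ours):

```python
def next_cycle_activity(prior_ia_gbq, prior_renal_dose_gy, prescribed_renal_dose_gy):
    """Personalized IA for the next cycle: the prior cycle's observed
    IA-per-renal-dose ratio (GBq/Gy) times the prescribed renal dose (Gy)."""
    gbq_per_gy = prior_ia_gbq / prior_renal_dose_gy
    return gbq_per_gy * prescribed_renal_dose_gy

# Hypothetical prior cycle: 9.7 GBq delivered 5.2 Gy to the kidney; 5 Gy prescribed next.
ia = next_cycle_activity(9.7, 5.2, 5.0)
```

A patient whose kidneys received more than the prescribed dose in the prior cycle is thus automatically given less activity at the next one, and vice versa, keeping the cumulative renal dose on track.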
This fixed IA regime is known to yield highly variable absorbed doses to critical organs, but because these fall well below conservative safety thresholds (e.g., 23 Gy for the kidney) in the vast majority of patients, this confers on PRRT a very favorable safety profile [1, 2]. In parallel, current cure rates are marginal, suggesting that most patients are being undertreated with the empiric PRRT regime [1, 2]. Escalating the tumor absorbed dose could potentially improve the efficacy of PRRT, although realistically, this cannot be done through a conventional empiric IA escalation without compromising the excellent safety record of PRRT. To optimize tumor irradiation at the patient level, we and others have proposed to optimize the renal absorbed dose by personalizing either the IA per cycle or the number of induction cycles [4, 6]. These two approaches resulted in increased cumulative IA and tumor absorbed dose in the majority of patients [4, 6]. Further, when administering personalized IA, our preliminary results suggest a short-term side effect and toxicity profile similar to that observed with the empiric PRRT regime [10]. Although our outcome data are not yet mature enough to document significantly improved clinical outcomes, we believe that personalized radionuclide therapy is more faithful to the principles of radiation oncology, where absorbed doses are prescribed and monitored. In internal radiotherapy, this can be achieved through IA personalization and routine dosimetry. Although our imaging protocol did not include a late time point, we obtained a median renal absorbed dose per IA (0.54 Gy/GBq) similar to those observed by Sandstrom et al. (medians, 0.62 and 0.59 Gy/GBq for the right and the left kidneys, respectively), who also used QSPECT and small-VOI sampling, but scanned patients until day 7 [5].
This concordance is consistent with the fact that the renal activity concentration decays monoexponentially after 24 h, as demonstrated by Hänscheid et al. [8]. The median BM absorbed dose we obtained when using our QSPECT-based method (0.035 Gy/GBq) is well within the range of estimates published by others using various techniques based on imaging, blood, and urine sampling [5, 11]. Furthermore, this result is particularly close to the mean BM absorbed dose reported by Svensson et al. (0.027 Gy/GBq), which was derived from imaging data only and included a later time point [12, 13]. The correlation between QSPECT-based BM absorbed dose estimates and subacute thrombocytopenia provides initial clinical validation of our technique [6]. However, in patients with bone metastases, the BM dosimetry estimates may be less reliable: even though obvious bone metastases were avoided when placing the BM VOIs, we cannot rule out the influence of non-apparent micrometastases or diffuse BM infiltration. Dosimetry is an essential component of P-PRRT but is often perceived by the medical community as being too complex, or by the physics community as not accurate enough. However, SPECT/CT cameras are now widely available, and simple 177Lu calibration methods have been proposed [7, 14], facilitating the implementation of individualized QSPECT-based dosimetry in the clinic. Performing dosimetry calculations based on simplified activity concentration sampling, such as the small-VOI method used in this study, is more practical than full organ segmentation while yielding similar results, and is more accurate than planar imaging-based dosimetry [15]. For these reasons, we have routinely been performing dosimetry using a 3TP QSPECT scanning schedule along with small-VOI sampling. Still, many would see dosimetry as resource-consuming.
This opinion is generally based on the beliefs that a minimum of three measurements is necessary [16] or that the scanning protocol must include late time points, up to 4 to 7 days [17]. Such requirements increase both the clinical burden and the patient inconvenience of performing dosimetry, and as such constitute barriers to its wide clinical adoption. Conversely, simplified dosimetry approaches with a clinically relevant level of accuracy could help make dosimetry a standard of care, not just for monitoring purposes, but also for personalizing radionuclide therapy. To overcome the issues discussed above, the primary objective of this study was to further simplify our dosimetry methods. The 2TP method offers excellent accuracy for both the per-cycle and the cumulative absorbed dose estimates relative to the 3TP method, in particular for the kidney and the tumor, which convinced us to abandon the day 0 scan. Further, our results validated the 1TPD3 technique proposed by Hänscheid and co-workers [8]. While they advocate scanning on day 4 to achieve the best accuracy for both the kidney and the tumor, scanning on day 3 is more practical in our setting, offers about the same accuracy for kidney dosimetry, and yields very reasonable accuracy for the tumor. This 1TPD3 method is an appealing alternative to 2TP, although accuracy can be slightly improved, at least for the kidney and Tumormax, by simply adding a second imaging time point during the first cycle and then applying the resulting effective decay constant to the single-time-point samples of subsequent cycles (2TP/1TP). This hybrid method is more accurate when, for the non-initial cycles, imaging is performed on day 3 (2TP/1TPD3) rather than on day 1 (2TP/1TPD1).
This is likely because, in individual patients, day 3 measurements correlate better with the integrated TACs (i.e., absorbed doses) than do those performed on day 1 and, as such, are less sensitive to small intra-patient cycle-to-cycle differences in tissue uptake and kinetics [8]. However, since in parallel we adjust the IA prescription based on renal dosimetry, we prefer pursuing our P-PRRT program with the 2TP protocol, which offers, in our opinion, the best balance between accuracy and practicality. Importantly, we would advise against omitting imaging at subsequent cycles and extrapolating dosimetry from the first cycle (2TP/NI), i.e., assuming constant Gy/GBq in tissues and thus completely ignoring any cycle-to-cycle difference in uptake or kinetics, as this approach increases the uncertainty of the resulting absorbed dose estimates, in particular for the tumor, which can be affected by the therapeutic response. The clinical burden of the 2TP schedule in terms of camera and personnel time is reasonable and comparable to that of performing an 111In-octreotide scan. Furthermore, performing the last scan on the third day limits the inconvenience for out-of-city patients. Very good inter-observer agreement has already been reported for renal dosimetry, with median errors of less than 5% for the small-VOI dosimetry method [17]. Our results confirm these observations. Of note, we intentionally chose observers with different backgrounds and have shown that even the dosimetry estimates from our novice observer (a first-year medical student) agreed well with those from more experienced observers, suggesting that a reasonably reproducible activity concentration sampling technique is attainable with relatively short training. Also, the whole processing of one patient case, including VOI drawing and data transfer to the spreadsheet or database, can be performed in about 15 to 20 min.
We are contemplating training nuclear medicine technologists to perform PRRT dosimetry under medical supervision, eventually making them sub-specialized nuclear dosimetrists. Finally, we revisited our personalized IA prescription scheme for our P-PRRT protocol. For the first cycle, we derived a simpler equation to determine the personalized IA than the one we initially suggested [6]. The new equation is still based on the product of eGFR and BSA, although eGFR · LBW would have provided a similar level of predictive accuracy. We acknowledge that this accuracy is at most moderate and comparable to that of an initial fixed IA in terms of interquartile or interdecile range (Fig. 6). However, the main advantage of personalizing the first-cycle IA is to avoid extreme cases of overdosing, such as delivering 18 Gy to the kidney when administering a fixed IA of 9.1 GBq to every patient (Fig. 6). Rather, personalizing the IA could limit the renal absorbed dose to 11 Gy, for the same median of 5 Gy. Once 68Ga-octreotate PET is routinely performed in all our PRRT patients, we will explore adding a pre-treatment tumor sink effect analysis to the prescription scheme, which could potentially improve the predictive accuracy of the model [18]. We propose a 177Lu dosimetry protocol based on two-time-point QSPECT imaging and small-VOI sampling, which yields accurate dosimetry results, particularly for the kidney and the tumor, with very high inter-observer reproducibility. Performing the last QSPECT/CT scan no later than the third day post-PRRT increases patient convenience, particularly for out-of-city patients who travel to receive PRRT. Pragmatic 177Lu dosimetry methods could facilitate the practice of personalized radionuclide therapies, including the rapidly emerging prostate-specific membrane antigen radioligand therapy.
ACDF: Activity concentration dose factor
BSA: Body surface area
CT: Computed tomography
eGFR: Estimated glomerular filtration rate
IA: Injected activity
LBW: Lean body weight
NET(s): Neuroendocrine tumor(s)
NI: No imaging
PET: Positron emission tomography
P-PRRT: Personalized PRRT
PRRT: Peptide receptor radionuclide therapy
QSPECT: Quantitative SPECT
SPECT: Single-photon emission computed tomography
SUV: Standardized uptake value
TAC(s): Time-activity curve(s)
TP: Time point
VOI(s): Volume(s) of interest

1. Kwekkeboom DJ, de Herder WW, Kam BL, van Eijck CH, van Essen M, Kooij PP, et al. Treatment with the radiolabeled somatostatin analog [177Lu-DOTA0,Tyr3]octreotate: toxicity, efficacy, and survival. J Clin Oncol. 2008;26(13):2124–30. https://doi.org/10.1200/JCO.2007.15.2553.
2. Strosberg J, El-Haddad G, Wolin E, Hendifar A, Yao J, Chasen B, et al. Phase 3 trial of 177Lu-Dotatate for midgut neuroendocrine tumors. N Engl J Med. 2017;376(2):125–35. https://doi.org/10.1056/NEJMoa1607427.
3. Kwekkeboom DJ, Bakker WH, Kooij PP, Konijnenberg MW, Srinivasan A, Erion JL, et al. [177Lu-DOTA0,Tyr3]octreotate: comparison with [111In-DTPA0]octreotide in patients. Eur J Nucl Med. 2001;28(9):1319–25.
4. Sundlov A, Sjogreen-Gleisner K, Svensson J, Ljungberg M, Olsson T, Bernhardt P, et al. Individualised 177Lu-DOTATATE treatment of neuroendocrine tumours based on kidney dosimetry. Eur J Nucl Med Mol Imaging. 2017;44(9):1480–9. https://doi.org/10.1007/s00259-017-3678-4.
5. Sandstrom M, Garske-Roman U, Granberg D, Johansson S, Widstrom C, Eriksson B, et al. Individualized dosimetry of kidney and bone marrow in patients undergoing 177Lu-DOTA-octreotate treatment. J Nucl Med. 2013;54(1):33–41. https://doi.org/10.2967/jnumed.112.107524.
6. Del Prete M, Buteau FA, Beauregard JM. Personalized 177Lu-octreotate peptide receptor radionuclide therapy of neuroendocrine tumours: a simulation study. Eur J Nucl Med Mol Imaging. 2017;44(9):1490–500. https://doi.org/10.1007/s00259-017-3688-2.
7. Beauregard JM, Hofman MS, Pereira JM, Eu P, Hicks RJ. Quantitative 177Lu SPECT (QSPECT) imaging using a commercially available SPECT/CT system. Cancer Imaging. 2011;11:56–66. https://doi.org/10.1102/1470-7330.2011.0012.
8. Hänscheid H, Lapa C, Buck AK, Lassmann M, Werner RA. Dose mapping after endoradiotherapy with 177Lu-DOTATATE/DOTATOC by a single measurement after 4 days. J Nucl Med. 2018;59(1):75–81. https://doi.org/10.2967/jnumed.117.193706.
9. Levey AS, Stevens LA, Schmid CH, Zhang YL, Castro AF 3rd, Feldman HI, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604–12.
10. Del Prete M, Buteau FA, Beaulieu A, Beauregard JM. Personalized 177Lu-octreotate peptide receptor radionuclide therapy of neuroendocrine tumors: initial dosimetry and safety results of the P-PRRT trial. J Nucl Med. 2017;58(Suppl. 1):242.
11. Cremonesi M, Botta F, Di Dia A, Ferrari M, Bodei L, De Cicco C, et al. Dosimetry for treatment with radiolabelled somatostatin analogues. A review. Q J Nucl Med Mol Imaging. 2010;54(1):37–51.
12. Svensson J, Rydén T, Hagmarker L, Hemmingsson J, Wängberg B, Bernhardt P. A novel planar image-based method for bone marrow dosimetry in 177Lu-DOTATATE treatment correlates with haematological toxicity. EJNMMI Phys. 2016;3:21. https://doi.org/10.1186/s40658-016-0157-0.
13. Hagmarker L, Svensson J, Rydén T, Gjertsson P, Bernhardt P. Segmentation of whole-body images into two compartments in model for bone marrow dosimetry increases the correlation with hematological response in 177Lu-DOTATATE treatments. Cancer Biother Radiopharm. 2017;32(9):335–43. https://doi.org/10.1089/cbr.2017.2317.
14. Uribe CF, Esquinas PL, Tanguay J, Gonzalez M, Gaudin E, Beauregard JM, et al. Accuracy of 177Lu activity quantification in SPECT imaging: a phantom study. EJNMMI Phys. 2017;4(1):2. https://doi.org/10.1186/s40658-016-0170-3.
15. Sandstrom M, Garske U, Granberg D, Sundin A, Lundqvist H. Individualized dosimetry in patients undergoing therapy with 177Lu-DOTA-D-Phe1-Tyr3-octreotate. Eur J Nucl Med Mol Imaging. 2010;37(2):212–25. https://doi.org/10.1007/s00259-009-1216-8.
16. Lassmann M, Chiesa C, Flux G, Bardies M, EANM Dosimetry Committee. EANM Dosimetry Committee guidance document: good practice of clinical dosimetry reporting. Eur J Nucl Med Mol Imaging. 2011;38(1):192–200. https://doi.org/10.1007/s00259-010-1549-3.
17. Sandstrom M, Ilan E, Karlberg A, Johansson S, Freedman N, Garske-Roman U. Method dependence, observer variability and kidney volumes in radiation dosimetry of 177Lu-DOTATATE therapy in patients with neuroendocrine tumours. EJNMMI Phys. 2015;2(1):24. https://doi.org/10.1186/s40658-015-0127-y.
18. Beauregard JM, Hofman MS, Kong G, Hicks RJ. The tumour sink effect on the biodistribution of 68Ga-DOTA-octreotate: implications for peptide receptor radionuclide therapy. Eur J Nucl Med Mol Imaging. 2012;39(1):50–6. https://doi.org/10.1007/s00259-011-1937-3.

We are grateful to the nurses and nuclear medicine technologists at the CHU de Québec—Université Laval who provided care to PRRT patients, as well as to Marc Bazin, Ph.D., for his help with the writing of this manuscript. M.D.P. is supported by a Merit Scholarship for Foreign Students from the Ministère de l'Éducation et de l'Enseignement supérieur du Québec. This work was funded by the Canadian Institutes of Health Research (CIHR) operating grant MOP-142233 to J.M.B. Please contact the corresponding author, Jean-Mathieu Beauregard ([email protected]), for the data used in this manuscript.
Department of Radiology and Nuclear Medicine and Cancer Research Center, Université Laval, Quebec City, Canada: Michela Del Prete, Frédéric Arsenault, Nassim Saighi, François-Alexandre Buteau & Jean-Mathieu Beauregard. Department of Medical Imaging and Oncology Branch of CHU de Québec Research Center, CHU de Québec – Université Laval, 11 côte du Palais, Quebec City, QC, G1R 2J6, Canada. Medical Imaging Research Group, University of British Columbia, Vancouver, Canada: Wei Zhao. Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada. MDP participated in the design of the analysis, collected and analyzed the data, and drafted the manuscript. FA and NS performed dosimetry analyses. FAB and JMB were responsible for the good conduct of the clinical study and data acquisition. WZ and AC participated in the design of the analysis and contributed to the manuscript editing. JMB designed the analysis, supervised the project, analyzed the data, and edited the manuscript. All authors read and approved the final manuscript. Correspondence to Jean-Mathieu Beauregard. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Specifically, for the 23 patients who received only empiric PRRT until March 2016, the requirement for consent was waived due to the retrospective nature of the analysis. All other patients were enrolled in our P-PRRT trial (NCT02754297) and gave informed consent to participate. Del Prete, M., Arsenault, F., Saighi, N. et al. Accuracy and reproducibility of simplified QSPECT dosimetry for personalized 177Lu-octreotate PRRT.
EJNMMI Phys 5, 25 (2018). doi:10.1186/s40658-018-0224-9
Research | Open Access | Published: 25 February 2019

Mohamed Iadh Ayari1, Zead Mustafa2 & Mohammed Mahmoud Jaradat2

Fixed Point Theory and Applications, volume 2019, Article number: 7 (2019)

The primary objective of this paper is the study of the generalization of some results given by Basha (Numer. Funct. Anal. Optim. 31:569–576, 2010). We present a new theorem on the existence and uniqueness of best proximity points for proximal β-quasi-contractive mappings for non-self-mappings $S:M\rightarrow N$ and $T:N\rightarrow M$. Furthermore, as a consequence, we give a new result on the existence and uniqueness of a common fixed point of two self-mappings. In 1969, Fan [2] proposed a best proximity point result for continuous non-self-mappings $T:A\longrightarrow X$, where A is a non-empty compact convex subset of a Hausdorff locally convex topological vector space X. He showed that there exists $a\in A$ such that $d(a,Ta)=d(Ta,A)$. Many extensions of Fan's theorem were established in the literature, such as the work of Reich [3], Sehgal and Singh [4], and Prolla [5]. In 2010, Basha [1] introduced the concept of best proximity point of a non-self-mapping. Furthermore, he extended the Banach contraction principle through a best proximity theorem. Later on, several best proximity point results were derived (see, e.g., [6,7,8,9,10,11,12,13,14,15,16,17,18,19]). Best proximity point theorems for non-self set-valued mappings were obtained by Jleli and Samet [20], under a proximal orbital completeness condition, which is weaker than the compactness condition. The aim of this article is to generalize the results of Basha [21] by introducing proximal β-quasi-contractive mappings, which involve suitable comparison functions. As a consequence of our theorem, we obtain the result of Basha [21] and an analogous result on proximal quasi-contractions, a notion first introduced by Jleli and Samet [20].
Preliminaries and definitions

Let $(M,N)$ be a pair of non-empty subsets of a metric space $(X,d)$. The following notations will be used throughout this paper:

$d(M,N):=\inf\{d(m,n):m\in M, n\in N\}$;

$d(x,N):=\inf\{d(x,n):n\in N\}$.

Definition 2.1 ([1]) Let $T:M\rightarrow N$ be a non-self-mapping. An element $a_{\ast}\in M$ is said to be a best proximity point of T if $d(a_{\ast},Ta_{\ast})=d(M,N)$. Note that in the case of a self-mapping, a best proximity point is just an ordinary fixed point, see [22, 23].

Definition 2.2 ([21]) Given non-self-mappings $S:M\rightarrow N$ and $T:N\rightarrow M$, the pair $(S,T)$ is said to form a proximal cyclic contraction if there exists a non-negative number $k<1$ such that $$ d(u,Sa)=d(M,N)\quad\mbox{and}\quad d(v,Tb)=d(M,N)\Longrightarrow d(u,v)\leq kd(a,b)+(1-k)d(M,N) $$ for all $u,a\in M$ and $v,b\in N$.

Definition 2.3 A non-self-mapping $S:M\rightarrow N$ is said to be a proximal contraction of the first kind if there exists a non-negative number $\alpha<1$ such that $$ d(u_{1},Sa_{1})=d(M,N) \quad\mbox{and}\quad d(u_{2},Sa_{2})=d(M,N) \Longrightarrow d(u_{1},u_{2})\le\alpha d(a_{1},a_{2}) $$ for all $u_{1},u_{2},a_{1},a_{2}\in M$.

Definition 2.4 Let $\beta\in(0,+\infty)$. A β-comparison function is a map $\varphi:[0,+\infty)\rightarrow[0,+\infty)$ satisfying the following properties:

$(P_{1})$: φ is nondecreasing.

$(P_{2})$: $\lim_{n\rightarrow\infty}\varphi_{\beta}^{n}(t)=0$ for all $t>0$, where $\varphi_{\beta}^{n}$ denotes the nth iterate of $\varphi_{\beta}$ and $\varphi_{\beta}(t)=\varphi(\beta t)$.

$(P_{3})$: There exists $s\in(0,+\infty)$ such that $\sum_{n=1}^{\infty}\varphi_{\beta}^{n}(s)<\infty$.

$(P_{4})$: $(\mathrm{id}-\varphi_{\beta})\circ\varphi_{\beta}(t)\leq\varphi_{\beta}\circ(\mathrm{id}-\varphi_{\beta})(t)$ for all $t\geq 0$, where $\mathrm{id}:[0,\infty)\longrightarrow[0,\infty)$ is the identity function.

Throughout this work, the set of all functions φ satisfying $(P_{1})$, $(P_{2})$ and $(P_{3})$ will be denoted by $\varPhi_{\beta}$.
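As a concrete (hypothetical) illustration of the definition above, take $\varphi(t)=qt$ with $q\beta<1$: then $\varphi_{\beta}(t)=q\beta t$, so the iterates $\varphi_{\beta}^{n}(t)=(q\beta)^{n}t$ vanish and the series in $(P_{3})$ is geometric. A quick numeric check of $(P_{2})$ and $(P_{3})$ in Python (the choice of φ and the constants are ours, not from the paper):

```python
q, beta = 0.5, 1.5            # q * beta = 0.75 < 1
phi = lambda t: q * t         # illustrative comparison function phi(t) = q*t
phi_beta = lambda t: phi(beta * t)  # phi_beta(t) = phi(beta * t) = q*beta*t

def iterate(f, t, n):
    """n-th iterate f^n(t) of a one-argument function f."""
    for _ in range(n):
        t = f(t)
    return t

# (P2): the iterates of phi_beta tend to 0.
small = iterate(phi_beta, 1.0, 50)  # (0.75)**50, essentially 0
# (P3): the partial sums of sum_n phi_beta^n(s) stabilize;
# here the series is geometric, sum_{n>=1} 0.75**n = 3.
series = sum(iterate(phi_beta, 1.0, n) for n in range(1, 200))
```

The same geometric structure is what makes the Cauchy-sequence estimate in the proof of Theorem 3.1 work: the telescoped distances are dominated by a convergent series of iterates.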
Remark 2.1 Let $\alpha,\beta\in(0,+\infty)$. If $\alpha<\beta$, then $\varPhi_{\beta}\subset\varPhi_{\alpha}$.

We recall the following useful lemma concerning the comparison functions in $\varPhi_{\beta}$.

Lemma 2.1 Let $\beta\in(0,+\infty)$ and $\varphi\in\varPhi_{\beta}$. Then:

(1) $\varphi_{\beta}$ is nondecreasing;

(2) $\varphi_{\beta}(t)<t$ for all $t>0$;

(3) $\sum_{n=1}^{\infty}\varphi_{\beta}^{n}(t)<\infty$ for all $t>0$.

Definition 2.5 A non-self-mapping $T:M\rightarrow N$ is said to be a proximal quasi-contraction if there exists a number $q\in[0,1)$ such that $$ d(u,v)\leq q\max\bigl\{ d(a,b),d(a,u),d(b,v),d(a,v),d(b,u)\bigr\} $$ whenever $a,b,u,v\in M$ satisfy $d(u,Ta)=d(M,N)$ and $d(v,Tb)=d(M,N)$.

Main results and theorems

We start this section by introducing the following concept.

Definition 3.1 Let $\beta\in(0,+\infty)$. A non-self-mapping $T:M\rightarrow N$ is said to be a proximal β-quasi-contraction if and only if there exist $\varphi\in\varPhi_{\beta}$ and positive numbers $\alpha_{0},\ldots,\alpha_{4}$ such that $$ d(u,v)\leq\varphi\bigl(\max \bigl\{ \alpha_{0}d(a,b),\alpha_{1}d(a,u),\alpha_{2}d(b,v),\alpha_{3}d(a,v),\alpha_{4}d(b,u) \bigr\} \bigr) $$ for all $a,b,u,v\in M$ satisfying $d(u,Ta)=d(M,N)$ and $d(v,Tb)=d(M,N)$.

Let $(M,N)$ be a pair of non-empty subsets of a metric space $(X,d)$. The following notations will also be used:

$M_{0}:=\{u\in M:\text{ there exists }v\in N\text{ with }d(u,v)=d(M,N)\}$;

$N_{0}:=\{v\in N:\text{ there exists }u\in M\text{ with }d(u,v)=d(M,N)\}$.

Our main result is given by the following best proximity point theorem.

Theorem 3.1 Let $(M,N)$ be a pair of non-empty closed subsets of a complete metric space $(X,d)$ such that $M_{0}$ and $N_{0}$ are non-empty.
Let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be two mappings satisfying the following conditions:

$(C_{1})$: $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$;

$(C_{2})$: there exist $\beta_{1},\beta_{2}\geq\max\{\alpha_{0},\alpha_{1},\alpha_{2},\alpha_{3},2\alpha_{4}\}$ such that S is a proximal $\beta_{1}$-quasi-contraction mapping (say, with $\psi\in\varPhi_{\beta_{1}}$) and T is a proximal $\beta_{2}$-quasi-contraction mapping (say, with $\phi\in\varPhi_{\beta_{2}}$);

$(C_{3})$: the pair $(S,T)$ forms a proximal cyclic contraction.

Moreover, suppose that one of the following two assertions holds:

(i) ψ and ϕ are continuous;

(ii) $\beta_{1},\beta_{2}>\max\{\alpha_{2},\alpha_{3}\}$.

Then S has a unique best proximity point $a_{\ast}\in M$ and T has a unique best proximity point $b_{\ast}\in N$. Moreover, these best proximity points satisfy $d(a_{\ast},b_{\ast})=d(M,N)$.

Proof Since $M_{0}$ is non-empty, it contains at least one element, say $a_{0}\in M_{0}$. By hypothesis $(C_{1})$, there exists $a_{1}\in M_{0}$ such that $d(a_{1},Sa_{0})=d(M,N)$. Again, since $S(M_{0})\subset N_{0}$, there exists $a_{2}\in M_{0}$ such that $d(a_{2},Sa_{1})=d(M,N)$. Continuing this process in a similar fashion, we find $a_{n+1}\in M_{0}$ such that $d(a_{n+1},Sa_{n})=d(M,N)$.
Since S is a proximal $\beta_{1}$-quasi-contraction mapping for $\psi\in\varPhi_{\beta_{1}}$ and since $$ d(a_{n+1},Sa_{n})=d(a_{n},Sa_{n-1})=d(M,N), $$ Definition 3.1 gives
$$\begin{aligned} d(a_{n+1},a_{n}) &\leq\psi\bigl(\max\bigl\{ \alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),\alpha_{4}d(a_{n+1},a_{n-1})\bigr\}\bigr) \\ &\leq\psi\bigl(\max\bigl\{ \alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),\alpha_{4}d(a_{n-1},a_{n})+\alpha_{4}d(a_{n},a_{n+1})\bigr\}\bigr) \\ &\leq\psi\bigl(\max\bigl\{ \alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),2\alpha_{4}\max\bigl\{d(a_{n-1},a_{n}),d(a_{n},a_{n+1})\bigr\}\bigr\}\bigr) \\ &\leq\psi\bigl(\beta_{1}\max\bigl\{d(a_{n},a_{n-1}),d(a_{n},a_{n+1})\bigr\}\bigr) \\ &=\psi_{\beta_{1}}\bigl(\max\bigl\{d(a_{n},a_{n-1}),d(a_{n},a_{n+1})\bigr\}\bigr). \end{aligned}$$
Now, if $\max\{ d(a_{n},a_{n-1}), d(a_{n},a_{n+1})\}= d(a_{n},a_{n+1})$, then by Lemma 2.1 the above inequality becomes $$d(a_{n+1},a_{n})\leq\psi_{\beta_{1}}\bigl(d(a_{n+1},a_{n}) \bigr)< d(a_{n+1},a_{n}), $$ which is a contradiction. Thus $\max\{ d(a_{n},a_{n-1}), d(a_{n},a_{n+1}) \}= d(a_{n},a_{n-1})$, and the above inequality (2) becomes $$ d(a_{n+1},a_{n})\leq \psi_{\beta_{1}}\bigl(d(a_{n-1},a_{n}) \bigr). $$ By induction on n, this gives $$ d(a_{n+1},a_{n})\leq \psi_{\beta_{1}}^{n} \bigl(d(a_{0},a_{1})\bigr)\quad \forall n\geq1. $$ Now, from the triangle inequality and Eq. (3), for positive integers $n< m$ we get $$ d(a_{n},a_{m})\leq\sum_{k=n}^{m-1}d(a_{k},a_{k+1}) \leq \sum_{k=n}^{m-1}\psi_{\beta_{1}}^{k} \bigl(d(a_{1},a_{0})\bigr)\leq \sum _{k=1}^{\infty}\psi_{\beta_{1}}^{k} \bigl(d(a_{1},a_{0})\bigr)< \infty. $$ Since this series converges (Lemma 2.1), for every $\epsilon>0$ there exists $N>0$ such that $$ d(a_{n},a_{m})\leq\sum_{k=n}^{m-1}d(a_{k},a_{k+1})< \epsilon \quad\text{for all }m>n>N. $$ That is, $\{a_{n}\}$ is a Cauchy sequence in M. Since M is a closed subset of the complete metric space X, $\{a_{n}\}$ converges to some element $a_{\ast}\in M$.
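The convergence argument above rests entirely on the summability of $\sum_{k}\psi_{\beta_{1}}^{k}$ guaranteed by Lemma 2.1. As a purely numerical illustration (not part of the original argument), the following Python sketch evaluates the partial sums of this bound for the concrete choices $\psi(t)=\frac{1}{7}t$ and $\beta_{1}=2$ used later in Example 3.1, for which $\psi_{\beta_{1}}$ is the geometric map $t\mapsto\frac{2}{7}t$:

```python
def phi_beta(t, q=1/7, beta=2):
    # With phi(t) = q*t, the function phi_beta(t) = phi(beta*t) = q*beta*t is a
    # beta-comparison function whenever q*beta < 1 (here 2/7).
    return q * beta * t

def cauchy_bound(d01, n, m, q=1/7, beta=2):
    """The bound sum_{k=n}^{m-1} phi_beta^k(d01) on d(a_n, a_m) from the proof."""
    total = 0.0
    for k in range(n, m):
        t = d01
        for _ in range(k):  # apply the k-fold iterate phi_beta^k
            t = phi_beta(t, q, beta)
        total += t
    return total

# The series is geometric with ratio 2/7, so it sums to d01/(1 - 2/7) = 1.4*d01,
# and the tails shrink to 0 -- exactly what makes (a_n) a Cauchy sequence.
```

Any other $\varphi\in\varPhi_{\beta}$ would do; the only property used is that the iterates $\psi_{\beta_{1}}^{k}$ are summable.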
Since $T(N_{0})\subset M_{0}$, by a similar argument there exists a sequence $\{b_{n}\}\subset N_{0}$ such that $d(b_{n+1},Tb_{n})=d(M,N)$ for each n. Since T is a proximal $\beta_{2}$-quasi-contraction mapping (say, $\phi\in\varPhi_{\beta_{2}}$) and since $d(b_{n+1},Tb_{n})=d(b_{n},Tb_{n-1})=d(M,N)$, we deduce from Definition 3.1 that
$$\begin{aligned} d(b_{n+1},b_{n}) &\leq\phi\bigl(\max\bigl\{ \alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),\alpha_{4}d(b_{n-1},b_{n+1})\bigr\}\bigr) \\ &\leq\phi\bigl(\max\bigl\{ \alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),\alpha_{4}d(b_{n-1},b_{n})+\alpha_{4}d(b_{n},b_{n+1})\bigr\}\bigr) \\ &\leq\phi\bigl(\max\bigl\{ \alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),2\alpha_{4}\max\bigl\{d(b_{n-1},b_{n}),d(b_{n},b_{n+1})\bigr\}\bigr\}\bigr) \\ &\leq\phi\bigl(\beta_{2}\max\bigl\{d(b_{n},b_{n-1}),d(b_{n},b_{n+1})\bigr\}\bigr) \\ &=\phi_{\beta_{2}}\bigl(\max\bigl\{d(b_{n},b_{n-1}),d(b_{n},b_{n+1})\bigr\}\bigr). \end{aligned}$$
Using the same argument as for $\{a_{n}\}$, one can show that $\{b_{n}\}$ is a Cauchy sequence in the closed subset N of the complete space X. Thus $\{b_{n}\}$ converges to some $b_{\ast}\in N$. Now we show that $a_{\ast}$ and $b_{\ast}$ are best proximity points of S and T, respectively. As the pair $(S,T)$ forms a proximal cyclic contraction, it follows that $$ d(a_{n+1},b_{n+1})\leq kd(a_{n},b_{n})+(1-k)d(M,N). $$ Taking the limit as $n\longrightarrow+\infty$ in Eq. (4), we get $d(a_{\ast},b_{\ast})\leq kd(a_{\ast},b_{\ast})+(1-k)d(M,N)$, and so $(1-k) d(a_{\ast},b_{\ast})\leq (1-k)d(M,N)$. This implies $$ d(a_{\ast},b_{\ast})\leq d(M,N). $$ Using the fact that $d(M,N)\leq d(a_{\ast},b_{\ast})$ and (5), we get $d(a_{\ast},b_{\ast})=d(M,N)$. Therefore, we conclude that $a_{\ast}\in M_{0}$ and $b_{\ast}\in N_{0}$. On the one hand, since $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$, there exist $u\in M$ and $v\in N$ such that $$ d(u,Sa_{\ast})=d(v,Tb_{\ast})=d(M,N).
$$ On the other hand, by (1), (6) and the hypothesis that S is a proximal $\beta_{1}$-quasi-contraction mapping, we deduce that $$\begin{aligned} &d(a_{n+1},u) \\ &\quad\leq\psi\bigl(\max\bigl\{ \alpha_{0}d(a_{n},a_{\ast}), \alpha _{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{\ast},u), \alpha _{3}d(a_{n},u),\alpha _{4}d(a_{\ast},a_{n+1}) \bigr\} \bigr). \end{aligned}$$ For simplicity, we denote $$\rho=d(a_{\ast},u)\quad\text{and}\quad A_{n}=\max\bigl\{ \alpha _{0}d(a_{n},a_{\ast}), \alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{\ast },u), \alpha_{3}d(a_{n},u),\alpha_{4}d(a_{\ast},a_{n+1}) \bigr\}. $$ Since $a_{n}\rightarrow a_{\ast}$, the terms $d(a_{n},a_{\ast})$, $d(a_{n},a_{n+1})$ and $d(a_{\ast},a_{n+1})$ tend to 0 while $d(a_{n},u)\rightarrow\rho$, so $$ {\lim_{n\longrightarrow+\infty}A_{n}=\max\{\alpha_{2}, \alpha_{3}\} \rho}. $$ Now, we show by contradiction that $\rho=0$. Suppose that $\rho>0$. First, we consider the case where assertion (i) of $(C_{4})$ is satisfied, that is, ψ is continuous. Then, taking the limit as $n\rightarrow\infty$ in (7) and using (8) and Lemma 2.1, we obtain $$ \rho\leq\psi\bigl(\max\{\alpha_{2},\alpha_{3}\}\rho\bigr) \leq\psi(\beta _{1}\rho)=\psi_{\beta_{1}} (\rho) < \rho, $$ which is a contradiction. Next, we consider the case where assertion (ii) of $(C_{4})$ is satisfied, that is, $\beta_{1}>\max\{\alpha_{2},\alpha _{3}\}$. Then there exist $\epsilon>0$ and an integer $N>0$ such that, for all $n>N$, we have $$ A_{n}< \bigl(\max\{\alpha_{2},\alpha_{3}\}+ \epsilon\bigr)\rho\quad\text{and}\quad \beta_{1}>\max\{ \alpha_{2},\alpha_{3}\}+\epsilon. $$ Therefore, inequality (7) turns into the following inequality: $$\begin{aligned} d(a_{n+1},u) &\leq\psi(A_{n}) \\ &\leq\psi\bigl(\bigl(\max\{\alpha_{2},\alpha_{3}\}+ \epsilon\bigr)\rho\bigr)=\psi _{\beta_{1}}\biggl(\frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta _{1}}\rho\biggr). \end{aligned}$$ Since $\psi\in\varPhi_{\beta_{1}}$, by Lemma 2.1 we have $$ d(a_{n+1},u)< \frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta _{1}}\rho< \rho.
$$ Letting $n\rightarrow\infty$, the above inequality yields $$ \rho\leq\frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta _{1}}\rho < \rho, $$ which is again a contradiction. Thus, in both cases we get $0=\rho =d(a_{\ast},u)$, which means that $u=a_{\ast}$, and so from equation (6) we get $d(a_{\ast},Sa_{\ast})=d(M,N)$. That is, $a_{\ast}$ is a best proximity point for S. Similarly, by repeating the above argument word for word after replacing u by v, S by T, $\beta_{1}$ by $\beta_{2}$ and ψ by ϕ, we get that $v=b_{\ast}$, and hence by (6) $b_{\ast}$ is a best proximity point for the non-self mapping T. Now, we prove that the best proximity point $a_{\ast}$ of S is unique. Assume to the contrary that there exists $x\in M$ such that $d(x,Sx)=d(M,N)$ and $x\neq a_{\ast}$. Since S is a proximal $\beta_{1}$-quasi-contraction mapping, we obtain $$\begin{aligned} d(a_{\ast},x)& \leq\psi\bigl(\max \bigl\{ \alpha_{0}d(a_{\ast},x), \alpha _{1}d(x,x),\alpha _{2}d(a_{\ast},a_{\ast}), \alpha_{3}d(a_{\ast},x),\alpha _{4}d(a_{\ast},x) \bigr\} \bigr) \\ & \leq\psi \bigl(\max\{\alpha_{0},\alpha_{3}, \alpha_{4}\} d(a_{\ast },x) \bigr) \\ & \leq\psi\bigl(\beta_{1}d(a_{\ast},x) \bigr)= \psi_{\beta_{1}} \bigl(d(a_{\ast },x) \bigr) \\ & < d(a_{\ast},x), \end{aligned}$$ which is a contradiction. Similarly, using the same argument and the fact that T is a proximal $\beta_{2}$-quasi-contraction mapping, we see that the best proximity point $b_{\ast}$ of T is unique. □ In Theorem 3.1, by taking $\alpha_{0}=\alpha_{1}=\alpha_{2}=\alpha_{3}=0$, $\alpha _{4}=1$, $\beta_{1}=\beta_{2}=1$ and $\psi(t)=\phi(t)=qt$, which is a continuous function and belongs to $\varPhi_{1}$, we obtain Corollary 3.3 in [21]. Corollary 3.1 Let $(M,N)$ be a pair of non-empty closed subsets of a complete metric space $(X,d)$ such that $M_{0}$ and $N_{0}$ are non-empty.
Let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be mappings satisfying the following conditions: $(d_{1})$: $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$; $(d_{2})$: S and T are proximal quasi-contractions; $(d_{3})$: the pair $(S,T)$ forms a proximal cyclic contraction. Then S has a unique best proximity point $a_{\ast }\in M$ such that $d(a_{\ast},Sa_{\ast})=d(M,N)$ and T has a unique best proximity point $b_{\ast}\in N$ such that $d(b_{\ast},Tb_{\ast })=d(M,N)$. Also, these best proximity points satisfy $d(a_{\ast },b_{\ast })=d(M,N)$. The result follows immediately from Theorem 3.1 by taking $\alpha _{0} = \alpha_{1} =\alpha_{2} =\alpha_{3} = 1 $ and $\alpha_{4} = \frac {1}{2} $, $\beta_{1}=\beta_{2} = 1$ and $\psi(t)=\phi(t)=qt$. □ The following definition, which was introduced in [24], is needed to derive a fixed point result as a consequence of our main theorem. Let X be a non-empty set. A mapping $T:X\longrightarrow X$ is called β-quasi-contractive if there exist $\beta>0$ and $\varphi \in\varPhi_{\beta}$ such that $$ d(Ta,Tb)\leq\varphi\bigl(H_{T}(a,b)\bigr), $$ where $$ H_{T}(a,b)=\max\bigl\{ \alpha_{0}d(a,b), \alpha_{1}d(a,Ta),\alpha _{2}d(b,Tb),\alpha_{3}d(a,Tb), \alpha_{4}d(b,Ta)\bigr\} , $$ with $\alpha_{i}\geq0$ for $i=0,1,2,3,4$. Let $(X,d)$ be a complete metric space. Let $S,T:X\longrightarrow X$ be two self-mappings satisfying the following conditions: $(E_{1})$: S is $\beta_{1}$-quasi-contractive (say, $\psi\in\varPhi_{\beta_{1}}$) and T is $\beta_{2}$-quasi-contractive (say, $\phi\in\varPhi_{\beta_{2}}$); $(E_{2})$: for all $a,b\in X$, $d(Sa,Tb)\leq kd(a,b)$ for some $k\in(0,1)$. Moreover, one of the following assertions holds: ψ and ϕ are continuous; or $\beta_{1},\beta_{2}>\max\{\alpha _{2},\alpha_{3}\}$. Then S and T have a unique common fixed point. This result follows from Theorem 3.1 by taking $M=N=X$ and noticing that the hypotheses $(E_{1})$ and $(E_{2})$ of the corollary coincide with the first, second and third conditions of Theorem 3.1.
□ Example 3.1 Let $X=\mathbb{R}$ with the metric $d(x,y)=|x-y|$; then $(X,d)$ is a complete metric space. Let $M=[0,1]$ and $N=[2,3]$. Also, let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be defined by $S(x)=3-x$ and $T(y)=3-y$. Then it is easy to see that $d(M,N)=1$, $M_{0}=\{1\}$ and $N_{0}=\{2\}$. Thus, $S(M_{0}) = S(\{1\}) = \{2\} = N_{0}$ and $T(N_{0}) = T(\{2\}) = \{1\} = M_{0}$. Now we show that the pair $(S,T)$ forms a proximal cyclic contraction. $d(u,Sa) = d(M,N) =1$ implies that $u=a=1 \in M$, and $d(v,Tb) = d(M,N) =1$ implies that $v=b=2 \in N$. Since $d(u,Sa)=d(1,S(1))= d(1,2)=1=d(M,N)$ and $d(v,Tb)=d(2,T(2))= d(2,1)=1=d(M,N)$, we have $$\begin{aligned} 1&= d(u,v) = d(1,2) \\ &\leq k \bigl(d(1,2)\bigr) + (1-k) d(M,N) \\ &= k + (1-k) = 1. \end{aligned}$$ So $(S,T)$ forms a proximal cyclic contraction for any $0\leq k<1$. Now we shall show that S is a proximal $\beta_{1}$-quasi-contraction mapping with $\psi(t)=\frac{1}{7}t$, $\beta_{1}=2$, $\alpha_{i}=\frac{1}{6}$ for $i=0,1,2,3$ and $\alpha_{4} = \frac{1}{100}$. Note that $\psi(t)= \frac{1}{7}t \in\varPhi_{2} $ since $\psi_{\beta_{1}}(t)= \psi_{2}(t)= \frac{2}{7} t $. As above, the only $a,b,u,v\in M$ such that $d(u,Sa)=d(M,N)=1=d(v,Sb)$ are $a=b=u=v =1 \in M$. But $$\begin{aligned} 0&=d(u,v) =d(1,1) \\ &\leq\frac{1}{7}\max\biggl\{ \frac{1}{6}d(a,b), \frac{1}{6}d(a,u),\frac{1}{6} d(b,v), \frac{1}{6}d(a,v),\frac{1}{100}d(b,u)\biggr\} \\ &= \psi\bigl(\max\{0,0,0,0,0\}\bigr) \\ &= 0. \end{aligned}$$ So S is a proximal $\beta_{1}$-quasi-contraction mapping. We deduce, using Theorem 3.1, that S has a unique best proximity point, which is $a_{\ast} =1$ in this example. Similarly, by using the same argument as above, we can show that T is a proximal $\beta_{2}$-quasi-contraction mapping with $\phi(t)=\frac{1}{8}t$, $\beta_{2}=3$, $\alpha_{i}=\frac{1}{6}$ for $i=0,1,2,3$ and $\alpha_{4} = \frac{1}{100}$.
Note that $\phi(t)= \frac{1}{8}t \in\varPhi_{3} $ since $\phi_{\beta_{2}}(t)= \phi_{3}(t)= \frac{3}{8} t $. As above, the only $a,b,u,v\in N$ such that $d(u,Ta)=d(M,N)=1=d(v,Tb)$ are $a=b=u=v =2 \in N$. But $$\begin{aligned} 0&=d(u,v) =d(2,2) \\ &\leq\frac{1}{8}\max\biggl\{ \frac{1}{6}d(a,b), \frac{1}{6}d(a,u),\frac {1}{6}d(b,v),\frac{1}{6}d(a,v), \frac{1}{100}d(b,u)\biggr\} \\ &= \phi\bigl(\max\{0,0,0,0,0\}\bigr) \\ &= 0. \end{aligned}$$ So T is a proximal $\beta_{2}$-quasi-contraction mapping. We deduce, using Theorem 3.1, that T has a unique best proximity point, which is $b_{\ast} =2$. Finally, $\psi(t)$ and $\phi(t)$ are continuous mappings, and $\beta_{1}, \beta_{2} > \max_{0\leq i \leq3}\{\alpha_{i} \} $. Therefore $$ d(a_{\ast},b_{\ast})=d(1,2)=1=d(M,N). $$ Improvements to some best proximity point theorems are proposed. In particular, the result due to Basha [21] for proximal contractions of the first kind is generalized. Furthermore, we propose a similar result on the existence and uniqueness of best proximity points of the proximal quasi-contractions introduced by Jleli and Samet in [20]. This has been achieved by introducing proximal β-quasi-contractions involving the β-comparison functions introduced in [24]. Basha, S.S.: Extensions of Banach's contraction principle. Numer. Funct. Anal. Optim. 31, 569–576 (2010) Fan, K.: Extension of two fixed point theorems of F. E. Browder. Math. Z. 112, 234–240 (1969) Reich, S.: Approximate selections, best approximations, fixed points and invariant sets. J. Math. Anal. Appl. 62, 104–113 (1978) Sehgal, V.M., Singh, S.P.: A generalization to multifunctions of Fan's best approximation theorem. Proc. Am. Math. Soc. 102, 534–537 (1988) Prolla, J.B.: Fixed point theorems for set valued mappings and existence of best approximations. Numer. Funct. Anal. Optim. 5, 449–455 (1983) Basha, S.S.: Best proximity point theorems generalizing the contraction principle.
Nonlinear Anal. 74, 5844–5850 (2011) Basha, S.S.: Best proximity point theorems: an exploration of a common solution to approximation and optimization problems. Appl. Math. Comput. 218, 9773–9780 (2012) Basha, S.S., Shahzad, N.: Best proximity point theorems for generalized proximal contractions. Fixed Point Theory Appl. 2012, 42 (2012) Sadiq Basha, S., Veeramani, P.: Best approximations and best proximity pairs. Acta Sci. Math. 63, 289–300 (1997) Sadiq Basha, S., Veeramani, P., Pai, D.V.: Best proximity pair theorems. Indian J. Pure Appl. Math. 32, 1237–1246 (2001) Sadiq Basha, S., Veeramani, P.: Best proximity pair theorems for multifunctions with open fibres. J. Approx. Theory 103, 119–129 (2000) Raj, V.S.: A best proximity point theorem for weakly contractive non-self-mappings. Nonlinear Anal. 74, 4804–4808 (2011) Karapinar, E.: Best proximity points of Kannan type cyclic weak phi-contractions in ordered metric spaces. An. Ştiinţ. Univ. 'Ovidius' Constanţa 20, 51–64 (2012) Vetro, C.: Best proximity points: convergence and existence theorems for P-cyclic mappings. Nonlinear Anal. 73, 2283–2291 (2010) Samet, B., Vetro, C., Vetro, P.: Fixed point theorems for α-ψ-contractive type mappings. Nonlinear Anal. 75, 2154–2165 (2012) Jleli, M., Karapinar, E., Samet, B.: Best proximity points for generalized α-ψ-proximal contractive type mappings. J. Appl. Math. 2013, Article ID 534127 (2013) Aydi, H., Felhi, A.: On best proximity points for various α-proximal contractions on metric-like spaces. J. Nonlinear Sci. Appl. 9(8), 5202–5218 (2016) Aydi, H., Felhi, A., Karapinar, E.: On common best proximity points for generalized α-ψ-proximal contractions. J. Nonlinear Sci. Appl. 9(5), 2658–2670 (2016) Ayari, M.I.: Best proximity point theorems for generalized α-β-proximal quasi-contractive mappings. Fixed Point Theory Appl. 2017, 16 (2017) Jleli, M., Samet, B.: An optimization problem involving proximal quasi-contraction mappings. Fixed Point Theory Appl.
2014, 141 (2014) Sadiq Basha, S.: Best proximity point theorems generalizing the contraction principle. Nonlinear Anal. 74, 5844–5850 (2011) Shatanawi, W., Mustafa, Z., Tahat, N.: Some coincidence point theorems for nonlinear contraction in ordered metric spaces. Fixed Point Theory Appl. 2011, 68 (2011) Shatanawi, W., Postolache, M., Mustafa, Z., Taha, N.: Some theorems for Boyd–Wong type contractions in ordered metric spaces. Abstr. Appl. Anal. 2012, Article ID 359054 (2012). https://doi.org/10.1155/2012/359054 Ayari, M.I., Berzig, M., Kedim, I.: Coincidence and common fixed point results for β-quasi contractive mappings on metric spaces endowed with binary relation. Math. Sci. 10(3), 105–114 (2016) Please contact the authors for data requests. Institut National des Sciences Appliquées et de Technologie de Tunis, Carthage University, Tunis, Tunisia Mohamed Iadh Ayari Department of Mathematics, Statistics and Physics, Qatar University, Doha, Qatar Zead Mustafa & Mohammed Mahmoud Jaradat The authors contributed equally to the preparation of the paper. All authors read and approved the final manuscript. Correspondence to Mohamed Iadh Ayari. Keywords: best proximity points; proximal β-quasi-contractive mappings on metric spaces; proximal cyclic contraction
The Story of Grade 8 Unit 3: Linear Relationships By Ashli Black and Elisa Smith Grade 8 is a year marked by shifts in mathematical focus. Where grades 6 and 7 introduce students to negative numbers and using them in operations, grade 8 introduces them to irrational numbers and using them with the Pythagorean Theorem. Where grades 6 and 7 dig into one-variable statistics, grade 8 begins the work with two-variable statistics. And where grades 6 and 7 emphasize ratios and proportional relationships, grade 8 turns its attention toward functions, particularly linear ones. For this last shift, the question becomes: how do you go from a foundation in proportional relationships to an understanding of all types of linear relationships? Well, you start with geometry. The Geometry of Lines So let's focus on lines for a bit from a geometric perspective. A line drawn on the coordinate plane has features students can use to describe it, including its slope. From left to right, what's going on with that line? Can that change be quantified? Students using IM 6–8 Math are positioned to examine these questions critically by starting the year with congruence and similarity. The study of similar triangles—slope triangles—builds foundational ideas about slope. (https://curriculum.illustrativemathematics.org/MS/teachers/3/2/10/index.html) For the diagonal line shown here, if one student picks two points and makes a slope triangle and another picks two points and makes a slope triangle, they will end up with similar triangles. How do they know they are similar? Well, they can pull from earlier thinking, developed in Unit 1 and the first half of Unit 2, and use a sequence of rigid transformations and a dilation. Or they can say that since the triangles have two pairs of congruent angles, they must be similar.
Regardless of how students determine the similarity of the triangles, they can conclude that the vertical length divided by the horizontal length for any slope triangle of a line has the same value and this value is a way to quantify the slope of the line. (And by doing the vertical divided by the horizontal they get a number that gets greater for steeper looking lines, which is nice.) Once a method for calculating the slope of a line is established, students gain a way to write an equation for the line based on this calculation. (https://curriculum.illustrativemathematics.org/MS/teachers/3/2/11/preparation.html) The slope of the line shown here is $\frac{1}{3}$. Since the slope between any two points on the line is the same, we can also say that for the points $(1,1)$ and $(x,y)$ the value of vertical length, $y-1$, divided by the horizontal length, $x-1$, must equal $\frac{1}{3}$. That is, $\frac{y-1}{x-1}=\frac{1}{3}$. From quotients of side lengths of similar triangles, we can make an equation and use it as a point-tester. If a point makes the equation true, it must be on the line. All of this means that students start Unit 3: Linear Relationships with a robust geometric understanding of the slope of a line built from the work with slope triangles. But not all the pieces are there yet to be able to generalize to all linear relationships: slope has only been positive thus far, and linear relationships have only been proportional. Connections between rate of change, slope, and the constant of proportionality still need to be made. From Ladybugs and Ants to Stacking Cups To help students make the connections between rate of change, slope, and the constant of proportionality, let's return to contexts and start from the more familiar proportional relationships. 
The unit starts by having students attend to precision while they label axes, choose appropriate axes scales, and compare proportional relationships, such as the ladybug and ant who start at the same time but move at different paces, as shown here. (https://curriculum.illustrativemathematics.org/MS/teachers/3/3/1/index.html) In grade 7, students connected the constant of proportionality to the steepness of a graph of a proportional relationship. Now, they change focus from the constant of proportionality to rate of change, which is about how the output changes at a constant rate with respect to the input. They also practice identifying the rate of change in graphs, tables, equations, and contexts for different proportional relationships. To help students take the next step toward working with non-proportional linear relationships in the second section of the unit, they begin by stacking cups. Stacking cups has two variables for students to keep track of: number of cups and height of the stack. The context is meant to be straightforward so students can switch fluidly between the concrete situation and the abstract calculation for the rate of change and connect the two. Here the rate of change has a tangible meaning: add 1 cup to the stack, increase the height of the stack by a fixed amount. Add 2 cups, increase the height by double that fixed amount. Plotting the pairs of (number of cups in the stack, height in centimeters of the stack) results in a set of points that a single straight line can go through. The connection between the slope of the line, the rate of change, and the context is made. At the heart of Unit 3 is establishing that non-proportional linear relationships exist, that they can be graphed as a line with an associated slope, that they have a rate of change just like proportional relationships from grades 6 and 7, and that the value of the slope is the same as the value of the rate of change. This last point is something remarkable. 
There is a connection between the quotients of side lengths of slope triangles and the change in output divided by the change in input: whether students think geometrically to calculate the slope of a line or algebraically to calculate the rate of change, they will get the same value. The next lessons offer a second way to approach linear relationships by expressing regularity in repeated calculations. Students start by connecting a set of graphs to their matching contexts while paying close attention to the meaning of the vertical intercept and the value of the slope (Lesson 6, Activity 2). Only after this thinking do students consider how adding identical objects one after the other to a water-filled cylinder with a known initial height could be represented with an equation. It's here that the $y = mx + b$ form is introduced with $m$ as the rate of change and $b$ as the initial amount. But let's not forget where all this started: geometry. The third way students conceptualize linear relationships is through transformations. Calling back to Unit 1, any line in the plane can be considered a vertical translation of a line through the origin. This work happens through contexts, visually with graphs, and in thinking through the difference between equations $y = mx$ and $y = mx + b$. Refining Thinking: Calculating Slope and Solutions to Linear Equations Rounding out the unit, students take their first steps into working with contexts that have a negative rate of change, which in turn leads to refining how to calculate slope. They also consider what happens when one of the two variables does not vary while the other one can take any value, leading to vertical and horizontal lines. The final lessons of the unit continue the work of connecting contexts and their equations while purposefully thinking about what it means for a point to be a solution to an equation.
This work builds a strong foundation for the next units on systems of equations, functions, and modeling scatter plots with lines of fit. Unlocking the full range of linear relationships opens multiple avenues of study for the future, not least of which is that systems of equations in Unit 4 are far more interesting when you have more than just proportional relationships to consider. Where else can you see the work of this unit playing out? In particular, how does the work in Unit 3 making connections between multiple representations to reason about linear relationships set students up for success as they reason about linear functions in Unit 5? Elisa Smith Curriculum Specialist at Illustrative Mathematics Elisa Smith is a Curriculum Specialist at Illustrative Mathematics. Before joining the IM team, Elisa spent 21 years teaching elementary, middle, and high school math, and providing professional development to secondary teachers in all subjects. She earned her National Board Certification in Mathematics and has since renewed that certification and gone on to lead cohorts of other teachers seeking certification. Recently she has worked with the Louisiana Department of Education as an author in their Accelerate Math Tutoring Program and also appeared on Louisiana Public Broadcasting teaching several of IM's Algebra 1 lessons. Elisa and her family currently live in Baton Rouge, LA. Ashli Black Director 6–12 Curriculum at Illustrative Mathematics Ashli began teaching over 10 years ago in Washington State. In 2011 she was named Teacher of the Year at her school and earned her National Board Certification in mathematics the following fall. 2011 is also the year she began working for Illustrative Mathematics running social media and helping get more tasks published on the website. Ashli has presented at regional and national NCTM conferences on teacher collaboration, the Math-Twitter-Blogo-Sphere (MTBoS), and creating space in the classroom for students to surprise you. 
She is an alumna of the Park City Mathematics Institute (2010–2015), as both a participant and staff, and has facilitated professional development across the US on middle grades mathematics and teacher professional collaboration. In 2015 she began working for Illustrative Mathematics full time on the 6–8 curriculum, where she is the Grade 8 Lead.
Parallel Circuit: Definition and Examples When resistances are connected such that one end of each resistance is joined to a common point and the other end of each resistance is joined to another common point, so that the number of paths for the current flow is equal to the number of resistances, the circuit is called a parallel circuit. Consider three resistors R1, R2 and R3 connected across a source of voltage V as shown in the circuit diagram. The total current (I) divides into three parts – I1 flowing through R1, I2 flowing through R2 and I3 flowing through R3. As can be seen, the voltage across each resistance is the same. By Ohm's law, the current through each resistor is given by, $$\mathrm{\mathit{I}_{1}=\frac{\mathit{V}}{\mathit{R}_{1}};\:\:\:\mathit{I}_{2}=\frac{\mathit{V}}{\mathit{R}_{2}}\:\:\:and\:\:\:I_{3}=\frac{\mathit{V}}{\mathit{R}_{3}}}$$ Referring to the circuit, the total current is, $$\mathrm{\mathit{I}=\mathit{I}_{1}\:+\:\mathit{I}_{2}\:+\:\mathit{I}_{3}}$$ $$\mathrm{\Rightarrow\:\mathit{I}=\frac{\mathit{V}}{\mathit{R}_{1}}+\frac{\mathit{V}}{\mathit{R}_{2}}+\frac{\mathit{V}}{\mathit{R}_{3}}=\mathit{V}(\frac{1}{\mathit{R}_{1}}+\frac{1}{\mathit{R}_{2}}+\frac{1}{\mathit{R}_{3}})}$$ $$\mathrm{\Rightarrow\:\frac{\mathit{I}}{\mathit{V}}=(\frac{1}{\mathit{R}_{1}}+\frac{1}{\mathit{R}_{2}}+\frac{1}{\mathit{R}_{3}})}$$ $$\mathrm{∵\:\frac{\mathit{I}}{\mathit{V}}=\frac{1}{\mathit{R}_{p}}}$$ $$\mathrm{∴\:\frac{1}{\mathit{R}_{p}}=(\frac{1}{\mathit{R}_{1}}+\frac{1}{\mathit{R}_{2}}+\frac{1}{\mathit{R}_{3}})}$$ Hence, when a number of resistances are connected in parallel, the reciprocal of the total resistance is equal to the sum of the reciprocals of the individual resistances.
$$\mathrm{∵\:\frac{1}{\mathit{R}}=\mathit{G}\:(Conductance)}$$ $$\mathrm{∴\:\mathit{G}_{p}=\mathit{G}_{1}+\mathit{G}_{2}+\mathit{G}_{3}}$$ Thus, the total conductance GP of resistors connected in parallel is equal to the sum of their individual conductances. Also, the total power dissipated in the circuit is equal to the sum of the powers dissipated in the individual resistances. Thus, $$\mathrm{\frac{1}{\mathit{R}_{p}}=(\frac{1}{\mathit{R}_{1}}+\frac{1}{\mathit{R}_{2}}+\frac{1}{\mathit{R}_{3}})}$$ $$\mathrm{\Rightarrow\frac{\mathit{V}^{2}}{\mathit{R}_{p}}=\frac{\mathit{V}^{2}}{\mathit{R}_{1}}+\frac{\mathit{V}^{2}}{\mathit{R}_{2}}+\frac{\mathit{V}^{2}}{\mathit{R}_{3}}}$$ $$\mathrm{\Rightarrow\:\mathit{P}_{p}=\mathit{P}_{1}+\mathit{P}_{2}+\mathit{P}_{3}}$$ Important Points about Parallel Circuit The voltage across each branch is the same. As the number of parallel branches is increased, the total resistance of the circuit decreases. The total resistance of the circuit is less than the smallest of the resistances. The total conductance is equal to the sum of the individual conductances. The total power dissipated in the circuit is equal to the sum of the powers dissipated in the individual resistances. Numerical Example What is the potential difference between points A and B, and what are the three branch currents, in the circuit shown below (a total current of 24 A feeding three resistors of 2 Ω, 4 Ω and 5 Ω in parallel)?
The equivalent resistance (RP) of the three parallel-connected resistors is $$\mathrm{\frac{1}{\mathit{R}_{p}}=\frac{1}{2}+\frac{1}{4}+\frac{1}{5}=\frac{19}{20}}$$ $$\mathrm{\Rightarrow\:{\mathit{R}_{p}}=1.053 \:Ω}$$ Therefore, the voltage V across the terminals A and B is $$\mathrm{\mathit{V}=I\mathit{R}_{p}=24×1.053=25.27\:Volts}$$ Now, the branch currents are $$\mathrm{Current\:\mathit{I}_{1}=\frac{\mathit{V}}{\mathit{R}_{1}}=\frac{25.27}{2}=12.64\:A}$$ $$\mathrm{Current\:\mathit{I}_{2}=\frac{\mathit{V}}{\mathit{R}_{2}}=\frac{25.27}{4}=6.32\:A}$$ $$\mathrm{Current\:\mathit{I}_{3}=\frac{\mathit{V}}{\mathit{R}_{3}}=\frac{25.27}{5}=5.05\:A}$$ As a check, $I_{1}+I_{2}+I_{3}=12.64+6.32+5.05\approx24\:A$, the total current. Manish Kumar Saini Published on 18-Jun-2021 12:30:28
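The same calculation is easy to script. The following Python sketch reproduces the example above (the small difference from the article's 25.27 V comes from rounding Rp to 1.053 Ω before multiplying):

```python
def parallel_resistance(resistances):
    # 1/Rp = sum of 1/Ri for resistors in parallel
    return 1.0 / sum(1.0 / r for r in resistances)

def branch_currents(i_total, resistances):
    """Voltage across the parallel combination (V = I * Rp) and the
    current through each branch (Ii = V / Ri)."""
    rp = parallel_resistance(resistances)
    v = i_total * rp
    return v, [v / r for r in resistances]

v, currents = branch_currents(24, [2, 4, 5])
# v = 480/19 ≈ 25.26 V, currents ≈ [12.63, 6.32, 5.05] A, which sum back to 24 A
```

The branch currents always sum back to the total current, which is a convenient sanity check on any hand calculation.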
Many of the food-derived ingredients that are often included in nootropics—omega-3s in particular, but also flavonoids—do seem to improve brain health and function. But while eating fatty fish, berries and other healthy foods that are high in these nutrients appears to be good for your brain, the evidence backing the cognitive benefits of OTC supplements that contain these and other nutrients is weak. Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.) For the sake of organizing the review, we have divided the literature according to the general type of cognitive process being studied, with sections devoted to learning and to various kinds of executive function. Executive function is a broad and, some might say, vague concept that encompasses the processes by which individual perceptual, motoric, and mnemonic abilities are coordinated to enable appropriate, flexible task performance, especially in the face of distracting stimuli or alternative competing responses. 
Two major aspects of executive function are working memory and cognitive control, responsible for the maintenance of information in a short-term active state for guiding task performance and responsible for inhibition of irrelevant information or responses, respectively. A large enough literature exists on the effects of stimulants on these two executive abilities that separate sections are devoted to each. In addition, a final section includes studies of miscellaneous executive abilities including planning, fluency, and reasoning that have also been the subjects of published studies. In most cases, cognitive enhancers have been used to treat people with neurological or mental disorders, but there is a growing number of healthy, "normal" people who use these substances in hopes of getting smarter. Although there are many companies that make "smart" drinks, smart power bars and diet supplements containing certain "smart" chemicals, there is little evidence to suggest that these products really work. Results from different laboratories show mixed results; some labs show positive effects on memory and learning; other labs show no effects. There are very few well-designed studies using normal healthy people. Vitamin B12 is also known as Cobalamin and is a water-soluble essential vitamin. A (large) deficiency of Vitamin B12 will ultimately lead to cognitive impairment [52]. Older people and people who don't eat meat are at a higher risk than young people who eat more meat. And people with depression have less Vitamin B12 than the average population [53]. The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance. 
Several chemical influences can completely disconnect those circuits so they're no longer able to excite each other. "That's what happens when we're tired, when we're stressed." Drugs like caffeine and nicotine enhance the neurotransmitter acetylcholine, which helps restore function to the circuits. Hence people drink tea and coffee, or smoke cigarettes, "to try and put [the] prefrontal cortex into a more optimal state". It's not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information too; chocolate seems like a net benefit even if it does not affect the mind, but it's also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don't think cocoa powder is worth investigating further as a nootropic. Finally, a workforce high on stimulants wouldn't necessarily be more productive overall. "One thinks 'are these things dangerous?' – and that's important to consider in the short term," says Huberman. "But there's also a different question, which is: 'How do you feel the day afterwards?' Maybe you're hyper-focused for four hours, 12 hours, but then you're below baseline for 24 or 48." Another class of substances with the potential to enhance cognition in normal healthy individuals is the class of prescription stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). These include methylphenidate (MPH), best known as Ritalin or Concerta, and amphetamine (AMP), most widely prescribed as mixed AMP salts consisting primarily of dextroamphetamine (d-AMP), known by the trade name Adderall. 
These medications have become familiar to the general public because of the growing rates of diagnosis of ADHD in children and adults (Froehlich et al., 2007; Sankaranarayanan, Puumala, & Kratochvil, 2006) and the recognition that these medications are effective for treating ADHD (MTA Cooperative Group, 1999; Swanson et al., 2008). Increasing incidences of chronic diseases such as diabetes and cancer are also driving growth in the global smart pills market. The above-mentioned factors have increased the need for on-site diagnosis, which can be achieved by smart pills. Moreover, the expanding geriatric population and the resulting increase in degenerative diseases have increased demand for smart pills. Some critics argue that Modafinil is an expression of that, a symptom of a new 24/7 work routine. But what if the opposite is true? Let's say you could perform a task in significantly less time than usual. You could then use the rest of your time differently, spending it with family, volunteering, or taking part in a leisure activity. And imagine that a drug helped you focus on clearing your desk and inbox before leaving work. Wouldn't that help you relax once you get home? If you haven't seen the movie, imagine unfathomable brain power in capsule form. Picture a drug from another universe. It can transform an unsuccessful couch potato into a millionaire financial mogul. Ingesting the powerful smart pill boosts intelligence and turns you into a prodigy. Its results are instant. Sounds great, right? If only it were real. Much better than I had expected.
One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn't launch right into the usual hackneyed creation of the hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise and I would have to admit that showing Captain America wondering at Times Square is much better an ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)↩ While the mechanism is largely unknown, one commonly proposed possibility is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, which is a key protein in mitochondrial metabolism and production of ATP, substantially increasing output, and this extra output presumably can be useful for cellular activities like healing or higher performance. Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $70 a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5. Exercise and nutrition also play an important role in neuroplasticity. Many vitamins and ingredients found naturally in food products have been shown to have cognitive enhancing effects.
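The present-value expression above (written as a Haskell one-liner) discounts ten annual $70 costs at 5%. The same annuity arithmetic as a short sketch:

```python
def npv(annual_cost, rate, years):
    """Present value of a constant annual cost paid at the end of
    each of `years` years, discounted at `rate` per year."""
    return sum(annual_cost / (1 + rate) ** n for n in range(1, years + 1))

# Ten years of $70/year fish oil at a 5% discount rate.
print(round(npv(70, 0.05, 10), 1))  # → 540.5
```

This matches the $540.5 figure in the text, equivalent to 70 times the 10-year, 5% annuity factor of about 7.72.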
Some of these include vitamins B6 and B12, caffeine, phenethylamine found in chocolate and l-theanine, found in green tea, whose combined effects with caffeine are more extensively researched. The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously. This formula is relatively expensive: at the recommended dosage of two tablets per day with a meal, one bottle of 60 tablets provides a month's supply. The secure online purchase is available on the manufacturer's site as well as at several online retailers. Although no free trials or money back guarantees are available at this time, the manufacturer provides free shipping if the desired order exceeds a certain amount. With time different online retailers could offer some advantages depending on the amount purchased, so researching online before purchase is advised, to assess the market and find the best option.
Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed. "Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!" Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated the ALA component of human-edible flaxseed to be around 20%. So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder.
It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil. The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I'll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there's 3 pieces of active gum left, then I wrap it very tightly in Saran wrap which is sticky and air-tight.) The dose will be 1mg or 1/4 a gum. I cut up a dozen pieces into 4 pieces for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be only looking at one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we're looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful). Racetams, specifically Piracetam, an ingredient popular in over-the-counter nootropics, are synthetic stimulants designed to improve brain function. Patel notes Piracetam is the granddaddy of all racetams, and the term "nootropic" was originally coined to describe its effects. However, despite its popularity and how long it's been around and in use, researchers don't know what its mechanism of action is. Patel explained that the the most prominent hypothesis suggests Piracetam enhances neuronal function by increasing membrane fluidity in the brain, but that hasn't been confirmed yet. 
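The claim that 48 doses suffice for detecting small-medium effects can be sanity-checked with a normal-approximation power calculation. This is a simplified sketch under stated assumptions: a one-tailed, one-sample test at α = 0.05, each of the 48 paired doses treated as one observation, and the normal approximation used in place of the noncentral t distribution a full analysis would require.

```python
from math import erf, sqrt

def z_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def approx_power(effect_size, n, alpha_z=1.645):
    """Approximate power of a one-tailed one-sample test:
    P(reject) ≈ Φ(d·√n − z_{1−α}), with z_{0.95} ≈ 1.645."""
    return z_cdf(effect_size * sqrt(n) - alpha_z)

for d in (0.2, 0.3, 0.5):
    print(f"d = {d}: power ≈ {approx_power(d, 48):.2f}")
```

With n = 48 this gives roughly 0.97 power for a medium effect (d = 0.5) but well under 0.5 for a small one (d = 0.2), consistent with the "small-medium" framing above.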
And Patel elaborated that most studies on Piracetam aren't done with the target market for nootropics in mind, the young professional: Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. "There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being." Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth or ~$10 a year or an NPV cost of $205 (10 / ln(1.05)) versus a 20% chance of $2000 or $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine. Adrafinil is a prodrug for Modafinil, which means it can be metabolized into Modafinil to give you a similar effect. And you can buy it legally just about anywhere. But there are a few downsides. Patel explains that you have to take a lot more to achieve a similar effect as Modafinil, wait longer for it to kick in (45-60 minutes), there are more potential side effects, and there aren't any other benefits to taking it. Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation.
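The iodine cost-benefit comparison above can be written out explicitly. This sketch assumes the text's continuous-discounting perpetuity formula (annual cost divided by ln(1 + discount rate)) and its stated 20% chance of a $2000 benefit; both numbers come from the passage, not from any external source.

```python
from math import log

annual_cost = 10.0    # ~$10/year of iodine, per the text
discount = 0.05       # 5% discount rate

# Perpetuity NPV with continuous discounting: cost / ln(1 + rate).
npv_cost = annual_cost / log(1 + discount)    # ≈ $205

p_benefit = 0.20      # 20% chance the supplement pays off
benefit = 2000.0      # value of the payoff if it does
expected_value = p_benefit * benefit          # $400

print(f"NPV cost ≈ ${npv_cost:.0f}, EV of benefit = ${expected_value:.0f}")
assert expected_value > npv_cost   # EV exceeds cost, so supplementation wins
```

The decision rule is simply that the $400 expected benefit exceeds the ~$205 lifetime discounted cost.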
A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo. "There seems to be a growing percentage of intellectual workers in Silicon Valley and Wall Street using nootropics. They are akin to intellectual professional athletes where the stakes and competition is high," says Geoffrey Woo, the CEO and co-founder of nutrition company HVMN, which produces a line of nootropic supplements. Denton agrees. "I think nootropics just make things more and more competitive. The ease of access to Chinese, Russian intellectual capital in the United States, for example, is increasing. And there is a willingness to get any possible edge that's available." Other drugs, like cocaine, are used by bankers to manage their 18-hour workdays [81]. Unlike nootropics, dependency is very likely and not only mentally but also physically. Bankers and other professionals who take drugs to improve their productivity will become dependent. Almost always, the negative consequences outweigh any positive outcomes from using drugs. With so many different ones to choose from, choosing the best nootropics for you can be overwhelming at times. As usual, a decision this important will require research. Study up on the top nootropics which catch your eye the most. The nootropics you take will depend on what you want the enhancement for. The ingredients within each nootropic determine its specific function. For example, some nootropics contain ginkgo biloba, which can help memory, thinking speed, and increase attention span. Check the nootropic ingredients as you determine what end results you want to see. Some nootropics supplements can increase brain chemicals such as dopamine and serotonin. An increase in dopamine levels can be very useful for memory, alertness, reward and more. Many healthy adults, as well as college students take nootropics. 
This really supports the central nervous system and the brain. We'd want 53 pairs, but Fitzgerald 2012's experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that'd be 1664 weeks or ~32 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks. Sounds too good to be true? Welcome to the world of 'Nootropics' popularly known as 'Smart Drugs' that can help boost your brain's power. Do you recall the scene from the movie Limitless, where Bradley Cooper's character uses a smart drug that makes him brilliant? Yes! The effect of Nootropics on your brain is such that the results come as a no-brainer. Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and be considerably less bulky. "Smart Drugs" are chemical substances that enhance cognition and memory or facilitate learning.
However, within this general umbrella of "things you can eat that make you smarter," there are many variations as far as methods of action within the body, perceptible (and measurable) effects, potential for use and abuse, and the spillover impact on the body's non-cognitive processes. For Malcolm Gladwell, "the thing with doping is that it allows you to train harder than you would have done otherwise." He argues that we cannot easily call someone a cheater on the basis of having used a drug for this purpose. The equivalent, he explains, would be a student who steals an exam paper from the teacher, and then instead of going home and not studying at all, goes to a library and studies five times harder. Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog to more severe ones like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers. The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work. The easiest way to use 2mg was to use half a gum; I tried not chewing it but just holding it in my cheek. The first night I tried, this seemed to work well for motivation; I knocked off a few long-standing to-do items. Subsequently, I began using it for writing, where it has been similarly useful.
One difficult night, I wound up using the other half (for a total of 4mg over ~5 hours), and it worked but gave me a fairly mild headache and a faint sensation of nausea; these may have been due to forgetting to eat dinner, but this still indicates 3mg should probably be my personal ceiling until and unless tolerance to lower doses sets in. More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits. Nootropics are a broad classification of cognition-enhancing compounds that produce minimal side effects and are suitable for long-term use. These compounds include those occurring in nature or already produced by the human body (such as neurotransmitters), and their synthetic analogs. We already regularly consume some of these chemicals: B vitamins, caffeine, and L-theanine, in our daily diets. One survey from this literature: Hall, Irwin, Bowman, Frankenberger, & Jewett (2005), sampling large public university undergraduates (N = 379), found a 13.7% lifetime prevalence of nonmedical stimulant use; 27% reported use during finals week, 12% use when partying, and 15.4% use before tests, while 14% believed stimulants have a positive effect on academic achievement in the long run (M = 2.06, SD = 1.19 for purchasing stimulants from other students; M = 2.81, SD = 1.40 for having been given stimulants by other students). There is no clear answer to this question.
Many of the smart drugs have decades of medical research and widespread use behind them, as well as only minor, manageable, or nonexistent side effects, but are still used primarily as a crutch for people already experiencing cognitive decline, rather than as a booster-rocket for people with healthy brains. Unfortunately, there is a bias in Western medicine in favor of prescribing drugs once something bad has already begun, rather than for up-front prevention. There's also the principle of "leave well enough alone" – in this case, extended to mean, don't add unnecessary or unnatural drugs to the human body in place of a normal diet. [Smart Drug Smarts would argue that the average human diet has strayed so far from what is physiologically "normal" that leaving well enough alone is already a failed proposition.] Phenylpiracetam (Phenotropil) is one of the best smart drugs in the racetam family. It has the highest potency and bioavailability among racetam nootropics. This substance is almost the same as Piracetam; only it contains a phenyl group molecule. The addition to its chemical structure improves blood-brain barrier permeability. This modification allows Phenylpiracetam to work faster than other racetams. Its cognitive enhancing effects can last longer as well. Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007). Exercise is also important, says Lebowitz. 
Studies have shown it sharpens focus, elevates your mood and improves concentration. Likewise, maintaining a healthy social life and getting enough sleep are vital, too. Studies have consistently shown that regularly skipping out on the recommended eight hours can drastically impair critical thinking skills and attention. With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem. One claim was partially verified in passing by Eliezer Yudkowsky (Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar…About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow. I'm not sure, since it doesn't do anything for me except possibly mitigate foot cramps.) Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don't feel so hot, although my conversation and arguments seem as cogent as ever. I'm also having a terrible time focusing on any actual work. At 8 I take another; I'm behind on too many things, and it looks like I need an all-nighter to catch up.
The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don't seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it's just that I don't remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual. If you could take a drug to boost your brainpower, would you? This question, faced by Bradley Cooper's character in the big-budget movie Limitless, is now facing students who are frantically revising for exams. Although they are nowhere near the strength of the drug shown in the film, mind-enhancing drugs are already on the pharmacy shelves, and many people are finding the promise of sharper thinking through chemistry highly seductive. Looking at the prices, the overwhelming expense is for modafinil. It's a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there's anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn't even very useful in the daytime: I can't even notice it.) If we drop it, the cost drops by a full $800 from $1761 to $961 (almost halving) and to $0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it. 
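The compact score notation above (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄) is a Unicode sparkline. A minimal sketch of how such a sparkline can be generated is below; the exact binning used to produce the bars in the text isn't specified, so this implementation is an assumption and individual bars may differ from the original by one level.

```python
# Map a list of scores onto seven Unicode block characters, lowest to highest.
BARS = "▁▂▃▄▅▆▇"

def sparkline(scores):
    """Render scores as a string of block characters scaled to their range."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1   # avoid division by zero for constant input
    return "".join(
        BARS[round((s - lo) / span * (len(BARS) - 1))] for s in scores
    )

scores = [38, 43, 66, 40, 24, 67, 60, 71, 54]
print(sparkline(scores))
```

Whatever the binning, the minimum score (24) maps to the lowest bar and the maximum (71) to the highest, matching the ▁ and ▇ positions in the text.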
"Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter. " The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions. Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting. 
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like "…and then half an hour later, take a shower to remove all visible traces of the gel." Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road. (We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seem to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012's exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow.
I'm fairly confident I won't overshoot if I go with 0.15-1mg, so let's call this 90%. Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered… Elaborating on why the psychological side effects of testosterone injection are individual-dependent: Not everyone gets the same amount of motivation and increased goal seeking from the steroid, and most people do not experience periods of chronic avolition. Another psychological effect is a potentially drastic increase in aggression, which in turn can have negative social consequences. In the case of counterfactual Wedrifid, he gets a net improvement in social consequences. He has observed that aggression and anger are a prompt for increased ruthless self-interested goal seeking. Ruthless self-interested goal seeking involves actually bothering to pay attention to social politics. People like people who do social politics well. Most particularly it prevents acting on contempt, which is what Wedrifid finds prompts the most hostility and resentment in others. Point is, what is a sanity-promoting change in one person may not be in another. Phenotropil is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used to treat stroke, epilepsy and trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours see improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. 
This is one of the more powerful unscheduled Nootropics available. Ongoing studies are looking into the possible pathways by which nootropic substances function. Researchers have postulated that the mental health advantages derived from these substances can be attributed to their effects on the cholinergic and dopaminergic systems of the brain. These systems regulate two important neurotransmitters, acetylcholine and dopamine.
How to convert regular expressions into predicate logic? I've been given the problem to describe the language of "the set of strings beginning with ab" in predicate logic using quantifiers. The issue is that I don't have a systematic approach for doing this. I first defined a boolean predicate, "Pred", which returns true if one position is the predecessor of another. $$Pred(y, x) = ( y < x ) \land \forall z: ( z \leq y) \land (z \ge x)$$ Another predicate determines whether the first letter is $a$: $$Initial(a) = \exists x: C_a(x) \land \forall y: x \leq y$$ Where $C_a(x)$ means that "the character at position x is the character a." From here, I need to combine the two definitions I've created to come up with a formula that describes the set of strings beginning with ab. discrete-mathematics predicate-logic regular-language Digital Veer You need a small correction to your predecessor definition. For instance: $$\operatorname{Pred}(y,x) := (y < x) \wedge \forall z \,.\, (z \leq y) \vee (x \leq z) \enspace. $$ Instead of $\operatorname{Initial}(a)$, you may want to define $$ \operatorname{Initial}(x) := \forall y \,.\, x \leq y \enspace. $$ From these two predicates you can build second, third, and so on. For the language of all words that start with $ab$, you could write $$ \forall x \,.\, \forall y \,.\, \operatorname{Initial}(x) \rightarrow \Big(C_a(x) \wedge \big(\operatorname{Pred}(x,y) \rightarrow C_b(y)\big)\Big) \enspace. $$ In general, a word language over a finite alphabet is first-order definable (with signature $<$) if and only if it is star-free. Hence you won't find a systematic procedure to translate an arbitrary regular expression into a first-order formula. If, however, you are given a star-free extended regular expression, then translation is algorithmic and not too hard. First, "star-free expression" means that the expression doesn't contain any Kleene stars. Second, "extended" means that complementation is allowed. 
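As a sanity check (not part of the original post), the formula above for words beginning with $ab$ can be evaluated by brute force over word positions. The helper below is a direct transcription of $\operatorname{Initial}$, $\operatorname{Pred}$, and the letter predicates $C_a$, $C_b$; the function name is my own.

```python
def satisfies_starts_with_ab(word: str) -> bool:
    """Brute-force evaluation of
        forall x . forall y . Initial(x) -> (C_a(x) and (Pred(x, y) -> C_b(y)))
    over the positions 0 .. len(word)-1."""
    n = len(word)

    def initial(x):  # Initial(x): forall y . x <= y
        return all(x <= y for y in range(n))

    def pred(x, y):  # Pred(x, y): x < y and forall z . (z <= x) or (y <= z)
        return x < y and all(z <= x or y <= z for z in range(n))

    return all(
        (not initial(x)) or (word[x] == "a" and ((not pred(x, y)) or word[y] == "b"))
        for x in range(n)
        for y in range(n)
    )

# Note: like the formula itself, this is vacuously satisfied by the empty
# word and by the one-letter word "a", since there is no second position.
print(satisfies_starts_with_ab("abba"))  # True
print(satisfies_starts_with_ab("aab"))   # False
```

This also makes the edge-case semantics of the first-order formula explicit: quantifiers range over existing positions only, so words shorter than two letters can satisfy the formula vacuously.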
The translation proceeds recursively. Let $\Sigma$ be the alphabet. The FOL formula for the empty language is $\bot$ (false). This means that the formula for $\Sigma^*$ is $\top$ (true). (Even though $\Sigma^*$ is not a star-free expression, the language it defines is star-free, because it can also be described as the complement of the empty language.) A formula for the expression $a$, with $a \in \Sigma$, is $$ \exists x \,.\, (\forall w \,.\, w = x) \wedge C_a(x) \enspace. $$ A formula for the empty string $\epsilon$ is $$ \forall x \,.\, \bigwedge_{\sigma \in \Sigma} \neg C_\sigma(x) \enspace. $$ These are all the base cases. For the recursive step, union maps to disjunction, intersection maps to conjunction, and complementation maps to negation. Since there is no star, this only leaves concatenation, for which we introduce the relativizations of a formula $\varphi$ with respect to variable $z$: $$ [\varphi]_{<z} ~~~~ [\varphi]_{\geq z} \enspace, $$ which restrict the quantifiers in $\varphi$ to the natural numbers that are less than $z$, or greater than or equal to $z$, respectively. Then, if $L_i$ translates to $\varphi_i$, then $L_1 \cdot L_2$ translates to $$ \exists z \,.\, [\varphi_1]_{<z} \wedge [\varphi_2]_{\geq z} \enspace. $$ As an example, if $\Sigma = \{a,b\}$ and we wanted the formula for $\epsilon a$, we'd start with $$ \begin{align*} \varphi_1 &:= \forall x \,.\, \neg C_a(x) \wedge \neg C_b(x) \\ \varphi_2 &:= \exists y \,.\, (\forall w \,.\, w = y) \wedge C_a(y) \enspace. \end{align*} $$ We would then compute $$ \begin{align*} [\varphi_1]_{<z} &:= \forall x \,.\, x < z \rightarrow (\neg C_a(x) \wedge \neg C_b(x)) \\ [\varphi_2]_{\geq z} &:= \exists y \,.\, z \leq y \wedge (\forall w \,.\, z \leq w \rightarrow w = y) \wedge C_a(y) \enspace. 
\end{align*} $$ Finally, $$ \begin{align*} \exists z \,.\, [\varphi_1]_{<z} \wedge [\varphi_2]_{\geq z} &:= \exists z \,.\, \Big(\forall x \,.\, x < z \rightarrow (\neg C_a(x) \wedge \neg C_b(x))\Big) \,\wedge \\ &\enspace\quad \Big(\exists y \,.\, z \leq y \wedge (\forall w \,.\, z \leq w \rightarrow w = y) \wedge C_a(y)\Big) \enspace. \end{align*} $$ This is rather horrible-looking--a bit like the regular expressions or MSO formulae one mechanically extracts from automata--but if we notice that the first "half" of the formula implies $z=0$, things work out OK and eventually we get $\varphi_2$ back. Fabio Somenzi
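To see the relativization step in action, the final formula for $\epsilon a$ can likewise be evaluated directly. The sketch below (mine, not from the original answer) restricts each quantifier exactly as in $[\varphi_1]_{<z}$ and $[\varphi_2]_{\geq z}$, over the alphabet $\Sigma = \{a, b\}$.

```python
def satisfies_eps_a(word: str) -> bool:
    """Evaluate  exists z . [phi1]_{<z} and [phi2]_{>=z}
    for Sigma = {a, b}, i.e. the translated formula for epsilon . a."""
    n = len(word)

    def phi1_below(z):
        # [phi1]_{<z}: every position x < z carries neither a nor b
        return all(word[x] not in ("a", "b") for x in range(z))

    def phi2_at_least(z):
        # [phi2]_{>=z}: some y >= z is the unique position >= z and carries a
        return any(
            all(w == y for w in range(z, n)) and word[y] == "a"
            for y in range(z, n)
        )

    return any(phi1_below(z) and phi2_at_least(z) for z in range(n))

print(satisfies_eps_a("a"))   # True  -- epsilon . a = {"a"}
print(satisfies_eps_a("ab"))  # False
```

As the answer notes, the $[\varphi_1]_{<z}$ half forces $z = 0$, so the check collapses to evaluating $\varphi_2$ on the whole word.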
MSC 2010: Mechanics of Deformable Solids (74Hxx) - 12 results. Expansions for the linear-elastic contribution to the self-interaction force of dislocation curves Dynamical problems Plastic materials, materials of stress-rate and internal-variable type PATRICK VAN MEURS Journal: European Journal of Applied Mathematics, First View Published online by Cambridge University Press: 20 October 2021, pp. 1-30 The self-interaction force of dislocation curves in metals depends on the local arrangement of the atoms and on the non-local interaction between dislocation curve segments. While these non-local segment–segment interactions can be accurately described by linear elasticity when the segments are further apart than the atomic scale of size $\varepsilon$ , this model breaks down and blows up when the segments are $O(\varepsilon)$ apart. To separate the non-local interactions from the local contribution, various models depending on $\varepsilon$ have been constructed to account for the non-local term. However, there are no quantitative comparisons available between these models. This paper makes such comparisons possible by expanding the self-interaction force in these models in $\varepsilon$ beyond the O(1)-term. Our derivation of these expansions relies on asymptotic analysis. The practical use of these expansions is demonstrated by developing numerical schemes for them, and by – for the first time – bounding the corresponding discretisation error. 
Spectral properties of a beam equation with eigenvalue parameter occurring linearly in the boundary conditions Qualitative theory Boundary value problems Ordinary differential operators General theory of linear operators Ziyatkhan S. Aliyev, Gunay T. Mamedova Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics, First View Published online by Cambridge University Press: 30 July 2021, pp. 1-22 In this paper, we consider an eigenvalue problem for ordinary differential equations of fourth order with a spectral parameter in the boundary conditions. The location of eigenvalues on the real axis, the structure of root subspaces and the oscillation properties of eigenfunctions of this problem are investigated, and asymptotic formulas for the eigenvalues and eigenfunctions are found. Next, by the use of these properties, we establish sufficient conditions for subsystems of root functions of the considered problem to form a basis in the space $L_p, 1 < p < \infty$. A derivation of the Liouville equation for hard particle dynamics with non-conservative interactions Two-phase and multiphase flows Benjamin D. Goddard, Tim D. Hurst, Mark Wilkinson Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 151 / Issue 3 / June 2021 Published online by Cambridge University Press: 28 December 2020, pp. 1040-1074 Print publication: June 2021 The Liouville equation is of fundamental importance in the derivation of continuum models for physical systems which are approximated by interacting particles. However, when particles undergo instantaneous interactions such as collisions, the derivation of the Liouville equation must be adapted to exclude non-physical particle positions, and include the effect of instantaneous interactions. We present the weak formulation of the Liouville equation for interacting particles with general particle dynamics and interactions, and discuss the results using two examples. 
THE ALTERNATIVE KIRCHHOFF APPROXIMATION IN ELASTODYNAMICS WITH APPLICATIONS IN ULTRASONIC NONDESTRUCTIVE TESTING Special subfields of solid mechanics L. J. FRADKIN, A. K. DJAKOU, C. PRIOR, M. DARMON, S. CHATILLON, P.-F. CALMON Journal: The ANZIAM Journal / Volume 62 / Issue 4 / October 2020 Print publication: October 2020 The Kirchhoff approximation is widely used to describe the scatter of elastodynamic waves. It simulates the scattered field as the convolution of the free-space Green's tensor with the geometrical elastodynamics approximation to the total field on the scatterer surface and, therefore, cannot be used to describe nongeometrical phenomena, such as head waves. The aim of this paper is to demonstrate that an alternative approximation, the convolution of the far-field asymptotics of the Lamb's Green's tensor with incident surface tractions, has no such limitation. This is done by simulating the scatter of a critical Gaussian beam of transverse motions from an infinite plane. The results are of interest in ultrasonic nondestructive testing. GLOBAL EXISTENCE OF WEAK SOLUTIONS FOR STRONGLY DAMPED WAVE EQUATIONS WITH NONLINEAR BOUNDARY CONDITIONS AND BALANCED POTENTIALS Equations of mathematical physics and other areas of application Hyperbolic equations and systems JOSEPH L. SHOMBERG Journal: Bulletin of the Australian Mathematical Society / Volume 99 / Issue 3 / June 2019 Published online by Cambridge University Press: 07 February 2019, pp. 432-444 We demonstrate the global existence of weak solutions to a class of semilinear strongly damped wave equations possessing nonlinear hyperbolic dynamic boundary conditions. The associated linear operator is $(-\Delta_{W})^{\theta}\partial_{t}u$, where $\theta \in [\frac{1}{2},1)$ and $\Delta_{W}$ is the Wentzell–Laplacian. 
A balance condition is assumed to hold between the nonlinearity defined on the interior of the domain and the nonlinearity on the boundary. This allows for arbitrary (supercritical) polynomial growth of each potential, as well as mixed dissipative/antidissipative behaviour. Lifespan of solutions to wave equations in de Sitter spacetime Weiping Yan Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 148 / Issue 6 / December 2018 Published online by Cambridge University Press: 10 April 2018, pp. 1313-1330 Print publication: December 2018 We consider the finite-time blow-up of solutions for the following two kinds of nonlinear wave equation in de Sitter spacetime: This proof is based on a new blow-up criterion, which generalizes that by Sideris. Furthermore, we give the lifespan estimate of solutions for the problems. DYNAMIC RELATIONSHIP BETWEEN THE MUTUAL INTERFERENCE AND GESTATION DELAYS OF A HYBRID TRITROPHIC FOOD CHAIN MODEL Smooth dynamical systems: general theory Difference and functional equations, recurrence relations Nonlinear dynamics RASHMI AGRAWAL, DEBALDEV JANA, RANJIT KUMAR UPADHYAY, V. SREE HARI RAO Journal: The ANZIAM Journal / Volume 59 / Issue 3 / January 2018 Print publication: January 2018 We have proposed a three-species hybrid food chain model with multiple time delays. The interaction between the prey and the middle predator follows Holling type (HT) II functional response, while the interaction between the top predator and its only food, the middle predator, is taken as a general functional response with the mutual interference schemes, such as Crowley–Martin (CM), Beddington–DeAngelis (BD) and Hassell–Varley (HV) functional responses. We analyse the model system which employs HT II and CM functional responses, and discuss the local and global stability analyses of the coexisting equilibrium solution. The effect of gestation delay on both the middle and top predator has been studied. 
The dynamics of model systems are affected by both factors: gestation delay and the form of functional responses considered. The theoretical results are supported by appropriate numerical simulations, and bifurcation diagrams are obtained for biologically feasible parameter values. It is interesting from the application point of view to show how an individual delay changes the dynamics of the model system depending on the form of functional response. ON THE WELL-POSEDNESS OF A NONLINEAR HIERARCHICAL SIZE-STRUCTURED POPULATION MODEL YAN LIU, ZE-RONG HE Journal: The ANZIAM Journal / Volume 58 / Issue 3-4 / April 2017 Print publication: April 2017 We analyse a nonlinear hierarchical size-structured population model with time-dependent individual vital rates. The existence and uniqueness of nonnegative solutions to the model are shown via a comparison principle. Our investigation extends some results in the literature. On the Dynamics of the Weak Fréedericksz Transition for Nematic Liquid Crystals Foundations, constitutive equations, rheology Peder Aursand, Gaetano Napoli, Johanna Ridder Journal: Communications in Computational Physics / Volume 20 / Issue 5 / November 2016 Published online by Cambridge University Press: 02 November 2016, pp. 1359-1380 Print publication: November 2016 We propose an implicit finite-difference method to study the time evolution of the director field of a nematic liquid crystal under the influence of an electric field with weak anchoring at the boundary. The scheme allows us to study the dynamics of transitions between different director equilibrium states under varying electric field and anchoring strength. In particular, we are able to simulate the transition to excited states of odd parity, which have previously been observed in experiments, but so far only analyzed in the static case. 
Nonlinear Vibration Analysis of Functionally Graded Nanobeam Using Homotopy Perturbation Method Material properties given special treatment Elastic materials Equilibrium (steady-state) problems Majid Ghadiri, Mohsen Safi Journal: Advances in Applied Mathematics and Mechanics / Volume 9 / Issue 1 / February 2017 Published online by Cambridge University Press: 11 October 2016, pp. 144-156 Print publication: February 2017 In this paper, He's homotopy perturbation method is utilized to obtain the analytical solution for the nonlinear natural frequency of functionally graded nanobeam. The functionally graded nanobeam is modeled using the Eringen's nonlocal elasticity theory based on Euler-Bernoulli beam theory with von Karman nonlinearity relation. The boundary conditions of problem are considered with both sides simply supported and simply supported-clamped. The Galerkin's method is utilized to decrease the nonlinear partial differential equation to a nonlinear second-order ordinary differential equation. Based on numerical results, homotopy perturbation method convergence is illustrated. According to obtained results, it is seen that the second term of the homotopy perturbation method gives extremely precise solution. Stochastic Models for Chladni Figures Thin bodies, structures Stochastic analysis Jaime Arango, Carlos Reyes Journal: Proceedings of the Edinburgh Mathematical Society / Volume 59 / Issue 2 / May 2016 Published online by Cambridge University Press: 10 August 2015, pp. 287-300 Chladni figures are formed when particles scattered across a plate move due to an external harmonic force resonating with one of the natural frequencies of the plate. Chladni figures are precisely the nodal set of the vibrational mode corresponding to the frequency resonating with the external force. 
We propose a plausible model for the movement of the particles that explains the formation of Chladni figures in terms of the stochastic stability of the equilibrium solutions of stochastic differential equations. THE MECHANICS OF HEARING: A COMPARATIVE CASE STUDY IN BIO-MATHEMATICAL MODELLING A. R. CHAMPNEYS, D. AVITABILE, M. HOMER, R. SZALAI A synthesis is presented of two recent studies on modelling the nonlinear neuro-mechanical hearing processes in mosquitoes and in mammals. In each case, a hierarchy of models is considered in attempts to understand data that shows nonlinear amplification and compression of incoming sound signals. The insect's hearing is tuned to the vicinity of a single input frequency. Nonlinear response occurs via an arrangement of many dual capacity neuro-mechanical units called scolopidia within the Johnston's organ. It is shown how the observed data can be captured by a simple nonlinear oscillator model that is derived from homogenization of a more complex model involving a radial array of scolopidia. The physiology of the mammalian cochlea is much more complex, with hearing occurring via a travelling wave along a tapered, compartmentalized tube. Waves travel a frequency-dependent distance along the tube, at which point they are amplified and "heard". Local models are reviewed for the pickup mechanism, within the outer hair cells of the organ of Corti. The current debate in the literature is elucidated, on the relative importance of two possible nonlinear mechanisms: active hair bundles and somatic motility. It is argued that the best experimental agreement can be found when the nonlinear terms include longitudinal coupling, the physiological basis of which is described. A discussion section summarizes the lessons learnt from both studies and attempts to shed light on the more general question of what constitutes a good mathematical model of a complex physiological process.
December 2019, 24(12): 6771-6782. doi: 10.3934/dcdsb.2019166 Remarks on basic reproduction ratios for periodic abstract functional differential equations Tianhui Yang 1 and Lei Zhang 2,*, School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China Department of Mathematics, Harbin Institute of Technology in Weihai, Weihai, Shandong 264209, China * Corresponding author Received September 2018 Revised March 2019 Published July 2019 Fund Project: Our research was supported by National Natural Science Foundation of China (11571334 and 11801232) and Natural Science Foundation of Shandong Province (ZR2019QA006) In this paper, we extend the theory of basic reproduction ratios $ \mathcal{R}_0 $ in [Liang, Zhang, Zhao, JDDE], which concerns abstract functional differential systems in a time-periodic environment. We prove the threshold dynamics, that is, the sign of $ \mathcal{R}_0-1 $ determines the dynamics of the associated linear system. We also propose a direct and efficient numerical method to calculate $ \mathcal{R}_0 $. Keywords: Basic reproduction ratio, abstract functional differential system, time-periodic, threshold dynamics, numerical method. Mathematics Subject Classification: Primary: 34K20, 35K57; Secondary: 37B55. Citation: Tianhui Yang, Lei Zhang. Remarks on basic reproduction ratios for periodic abstract functional differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6771-6782. doi: 10.3934/dcdsb.2019166 N. Bacaër and E. H. 
Ait Dads, On the biological interpretation of a definition for the parameter R0 in periodic population models, J. Math. Biol., 65 (2012), 601-621. doi: 10.1007/s00285-011-0479-4. Google Scholar N. Bacaër and S. Guernaoui, The epidemic threshold of vector-borne diseases with seasonality, J. Math. Biol., 53 (2006), 421-436. doi: 10.1007/s00285-006-0015-0. Google Scholar L. Burlando, Monotonicity of spectral radius for positive operators on ordered Banach spaces, Arch. Math. (Basel), 56 (1991), 49-57. doi: 10.1007/BF01190081. Google Scholar D. Daners and P. K. Medina, Abstract Evolution Equations, Periodic Problems and Applications, vol. 279 of Pitman Res. Notes Math. Ser., Longman Scientific & Technical, Harlow, UK, 1992. Google Scholar K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, Berlin, Heidelberg, 1985. doi: 10.1007/978-3-662-00547-7. Google Scholar O. Diekmann, J. Heesterbeek and J. A. Metz, On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations, J. Math. Biol., 28 (1990), 365-382. doi: 10.1007/BF00178324. Google Scholar Z. Guo, F.-B. Wang and X. Zou, Threshold dynamics of an infective disease model with a fixed latent period and non-local infections, J. Math. Biol., 65 (2012), 1387-1410. doi: 10.1007/s00285-011-0500-y. Google Scholar H. Inaba, On a new perspective of the basic reproduction number in heterogeneous environments, J. Math. Biol., 65 (2012), 309-348. doi: 10.1007/s00285-011-0463-z. Google Scholar T. Kato, Perturbation Theory for Linear Operators, Classics in Mathematics, Reprint of the 1980 edition, Springer-Verlag, Berlin, Heidelberg, 1995. Google Scholar X. Liang, L. Zhang and X.-Q. Zhao, Basic reproduction ratios for periodic abstract functional differential equations (with application to a spatial model for lyme disease), J. Dynam. Differential Equations doi: 10.1007/s10884-017-9601-7. Google Scholar Y. Lou and X.-Q. 
Zhao, A reaction–diffusion malaria model with incubation period in the vector population, J. Math. Biol., 62 (2011), 543-568. doi: 10.1007/s00285-010-0346-8. Google Scholar R. Martin and H. Smith, Abstract functional-differential equations and reaction-diffusion systems, Trans. Amer. Math. Soc., 321 (1990), 1-44. doi: 10.2307/2001590. Google Scholar H. Mckenzie, Y. Jin, J. Jacobsen and M. Lewis, R0 analysis of a spatiotemporal model for a stream population, SIAM J. Appl. Dyn. Syst., 11 (2012), 567-596. doi: 10.1137/100802189. Google Scholar D. Posny and J. Wang, Computing the basic reproductive numbers for epidemiological models in nonhomogeneous environments, Appl. Math. Comput., 242 (2014), 473-490. doi: 10.1016/j.amc.2014.05.079. Google Scholar H. R. Thieme, Spectral bound and reproduction number for infinite-dimensional population structure and time heterogeneity, SIAM J. Appl. Math., 70 (2009), 188-211. doi: 10.1137/080732870. Google Scholar P. van den Driessche and J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Math. Biosci., 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6. Google Scholar B.-G. Wang and X.-Q. Zhao, Basic reproduction ratios for almost periodic compartmental epidemic models, J. Dynam. Differential Equations, 25 (2013), 535-562. doi: 10.1007/s10884-013-9304-7. Google Scholar W. Wang and X.-Q. Zhao, Threshold dynamics for compartmental epidemic models in periodic environments, J. Dynam. Differential Equations, 20 (2008), 699-717. doi: 10.1007/s10884-008-9111-8. Google Scholar W. Wang and X.-Q. Zhao, Basic reproduction numbers for reaction-diffusion epidemic models, SIAM J. Appl. Dyn. Syst., 11 (2012), 1652-1673. doi: 10.1137/120872942. Google Scholar X. Yu and X.-Q. Zhao, A nonlocal spatial model for Lyme disease, J. Differential Equations, 261 (2016), 340-372. doi: 10.1016/j.jde.2016.03.014. Google Scholar Y. Zhang and X.-Q. 
Zhao, A reaction-diffusion Lyme disease model with seasonality, SIAM J. Appl. Math., 73 (2013), 2077-2099. doi: 10.1137/120875454. Google Scholar X.-Q. Zhao, Basic reproduction ratios for periodic compartmental models with time delay, J. Dynam. Differential Equations, 29 (2017), 67-82. doi: 10.1007/s10884-015-9425-2. Google Scholar
Figure 1. Comparison of results using two methods by ODEs
Figure 2. Comparison of results using two methods by Reaction-Diffusion systems
Figure 3. Comparison of results using two methods by DDEs
Figure 4. Comparison of results using two methods by Reaction-Diffusion systems with time-delay
Table 1. Mean values and relative errors under different partitions
Partition m | Mean numerical value | Relative error (%)
500 | 1.7599 | 0.5681
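The abstract above proposes a numerical method for $\mathcal{R}_0$ in the periodic functional-differential setting. As a much simpler, hypothetical illustration of the ODE baseline that it generalizes, where $\mathcal{R}_0$ is the spectral radius of the next-generation matrix $FV^{-1}$ (van den Driessche and Watmough, cited in the reference list), one can estimate the spectral radius by power iteration; the rate values below are made up for the example.

```python
def spectral_radius(matrix, iterations=100):
    """Power iteration for the spectral radius of a small nonnegative matrix,
    given as a list of row lists."""
    n = len(matrix)
    v = [1.0] * n
    estimate = 0.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        estimate = max(abs(x) for x in w)
        if estimate == 0.0:
            return 0.0  # nilpotent case: spectral radius is zero
        v = [x / estimate for x in w]
    return estimate

# Hypothetical SEIR-type rates: transmission beta, progression sigma, recovery gamma.
beta, sigma, gamma = 0.6, 0.25, 0.2
# Next-generation matrix K = F V^{-1} with F = [[0, beta], [0, 0]] and
# V = [[sigma, 0], [-sigma, gamma]] works out to [[beta/gamma, beta/gamma], [0, 0]],
# whose spectral radius is beta/gamma.
K = [[beta / gamma, beta / gamma], [0.0, 0.0]]
print(spectral_radius(K))  # approximately 3.0 (= beta/gamma)
```

This is only the time-autonomous special case; the paper's method replaces the matrix $FV^{-1}$ with an operator built from the periodic evolution family.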
Representation Theory. Published by the American Mathematical Society, Representation Theory (ERT) is devoted to research articles of the highest quality in representation theory. The 2020 MCQ for Representation Theory is 0.7.
Commutative quantum current operators, semi-infinite construction and functional models, by Jintai Ding and Boris Feigin. Represent. Theory 4 (2000), 330-341. We construct the commutative current operator $\bar x^+(z)$ inside $U_q(\hat {\mathfrak {sl}}(2))$. With this operator and the condition of quantum integrability on the quantum currents of $U_q(\hat {\mathfrak {sl}}(2))$, we derive the quantization of the semi-infinite construction of integrable modules of $\hat {\mathfrak {sl}}(2)$, which was previously obtained by means of the current operator $e(z)$ of $\hat {\mathfrak {sl}}(2)$. The quantization of the functional models for $\hat {\mathfrak {sl}}(2)$ is also given.
[DM] J. Ding and T. Miwa, Zeros and poles of quantum current operators and the condition of quantum integrability, q-alg/9608001, RIMS-1092. [DI] J. Ding and K. Iohara, Generalization and deformation of the quantum affine algebras, RIMS-1090, q-alg/9608002. V. G. Drinfel′d, Hopf algebras and the quantum Yang-Baxter equation, Dokl. Akad. Nauk SSSR 283 (1985), no. 5, 1060–1064 (Russian). MR 802128 V. G. Drinfel′d, Quantum groups, Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), Amer. Math. Soc., Providence, RI, 1987, pp. 798–820. MR 934283 V. G. Drinfel′d, A new realization of Yangians and of quantum affine algebras, Dokl. Akad. Nauk SSSR 296 (1987), no. 1, 13–17 (Russian); English transl., Soviet Math. Dokl. 36 (1988), no. 2, 212–216. MR 914215 [FS1] B. L. Feigin and A. V. Stoyanovsky, Quasi-particle models for the representations of Lie algebras and the geometry of the flag manifold, RIMS-942. A. V. Stoyanovskiĭ and B. L. Feĭgin, Functional models of the representations of current algebras, and semi-infinite Schubert cells, Funktsional. Anal. i Prilozhen. 28 (1994), no. 1, 68–90, 96 (Russian, with Russian summary); English transl., Funct. Anal. Appl. 28 (1994), no. 1, 55–72. MR 1275728, DOI 10.1007/BF01079010 A. V. Stoyanovskiĭ and B. L. Feĭgin, Realization of a modular functor in the space of differentials, and geometric approximation of the manifold of moduli of $G$-bundles, Funktsional. Anal. i Prilozhen. 28 (1994), no. 4, 42–65, 95 (Russian, with Russian summary); English transl., Funct. Anal. Appl. 28 (1994), no. 4, 257–275 (1995). MR 1318339, DOI 10.1007/BF01076110 James Lepowsky and Mirko Primc, Structure of the standard modules for the affine Lie algebra $A^{(1)}_1$, Contemporary Mathematics, vol. 46, American Mathematical Society, Providence, RI, 1985. MR 814303, DOI 10.1090/conm/046 G. Lusztig, Quantum deformations of certain simple modules over enveloping algebras, Adv. in Math. 70 (1988), no. 2, 237–249. MR 954661, DOI 10.1016/0001-8708(88)90056-4
MSC (2000): Primary 17B37. Jintai Ding, Department of Mathematical Sciences, University of Cincinnati, Cincinnati, Ohio 45221-0025. Email: [email protected] Boris Feigin, Landau Institute of Theoretical Physics, Moscow, Russia. Received by editor(s): April 17, 1998; in revised form: January 14, 2000. Published electronically: August 1, 2000. Journal: Represent. Theory 4 (2000), 330-341. DOI: https://doi.org/10.1090/S1088-4165-00-00047-9
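For orientation, the following is a sketch of the standard Drinfeld-current conventions for $U_q(\hat{\mathfrak{sl}}(2))$; normalizations vary by author and are not taken from the paper above. The point of the sketch is the contrast between ordinary currents, which commute only up to a $q$-dependent exchange factor, and a "commutative" current in the sense of the abstract, whose modes commute among themselves:

```latex
% Drinfeld currents of U_q(\hat{sl}_2), in one common convention:
x^{\pm}(z) \;=\; \sum_{k\in\mathbb{Z}} x^{\pm}_{k}\, z^{-k-1},
\qquad
(z - q^{\pm 2} w)\, x^{\pm}(z)\, x^{\pm}(w)
  \;=\; (q^{\pm 2} z - w)\, x^{\pm}(w)\, x^{\pm}(z).
% A commutative current, as constructed in the abstract, instead satisfies
\bar{x}^{+}(z)\, \bar{x}^{+}(w) \;=\; \bar{x}^{+}(w)\, \bar{x}^{+}(z),
% i.e. the exchange factor is absent and the modes \bar{x}^{+}_{k} commute.
```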
CommonCrawl
arXiv:2011.00637v3 [hep-th] (cross-listed: nlin.SI) Title: $\mathrm{T}\overline{\mathrm{T}}$-deformed 1d Bose gas Authors: Yunfeng Jiang (Submitted on 1 Nov 2020 (v1), last revised 28 Feb 2022 (this version, v3)) Abstract: The $\mathrm{T}\overline{\mathrm{T}}$ deformation was originally proposed as an irrelevant solvable deformation of 2d relativistic quantum field theories (QFTs). The same family of deformations can also be defined for integrable quantum spin chains, and was first studied in the context of integrability in AdS/CFT. In this paper, we construct such deformations for yet another type of model, describing a collection of particles moving in 1d and interacting in an integrable manner. The prototype of such models is the Lieb-Liniger model. This shows that such deformations can be defined for a very wide range of systems. We study the finite volume spectrum and thermodynamics of the $\mathrm{T}\overline{\mathrm{T}}$-deformed Lieb-Liniger model. We find that for one sign of the deformation parameter $(\lambda<0)$, the deformed spectrum becomes complex when the volume of the system is smaller than a certain critical value, signifying the breakdown of UV physics. For the other sign $(\lambda>0)$, there exists an upper bound for the temperature, similar to the Hagedorn behavior of $\mathrm{T}\overline{\mathrm{T}}$-deformed QFTs. Both behaviors can be attributed to the fact that the $\mathrm{T}\overline{\mathrm{T}}$ deformation changes the size of the particles. We show that for $\lambda>0$, the deformation increases the spacing between particles, which effectively increases the volume of the system. For $\lambda<0$, the $\mathrm{T}\overline{\mathrm{T}}$ deformation fattens point particles into finite-size hard rods. This is similar to the observation that the action of the $\mathrm{T}\overline{\mathrm{T}}$-deformed free boson is the Nambu-Goto action, which describes bosonic strings -- also extended objects of finite size.
Comments: A comment added Subjects: High Energy Physics - Theory (hep-th); Statistical Mechanics (cond-mat.stat-mech); Exactly Solvable and Integrable Systems (nlin.SI) Journal reference: SciPost Phys. 12, 191 (2022) DOI: 10.21468/SciPostPhys.12.6.191 Report number: CERN-TH-2020-183 From: Yunfeng Jiang [v1] Sun, 1 Nov 2020 22:38:08 GMT (344kb,D) [v2] Tue, 14 Dec 2021 01:54:17 GMT (350kb,D) [v3] Mon, 28 Feb 2022 02:32:41 GMT (350kb,D)
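The small-volume complexification described in the abstract can be mimicked with a toy computation that is not taken from the paper: for a $\mathrm{T}\overline{\mathrm{T}}$-deformed CFT ground state at zero momentum, the deformed energy obeys an inviscid-Burgers-type flow whose solution for Casimir initial data $E_0(R) = -\pi c/(6R)$ is an algebraic root that turns complex below a critical radius. Sign conventions for $\lambda$ differ across the literature; the sign below is an assumption chosen so that the complexification happens for positive `lam` in this convention, mirroring the paper's $\lambda<0$ branch.

```python
import cmath
import math

def cft_ground_energy(R, c=1.0):
    """Undeformed CFT Casimir (ground-state) energy on a circle of circumference R."""
    return -math.pi * c / (6.0 * R)

def deformed_ground_energy(R, lam, c=1.0):
    """Toy TT-bar-deformed ground-state energy at zero momentum.

    Solves the characteristic equation E = E0(R + lam*E) for Casimir initial
    data E0(x) = -a/x with a = pi*c/6, i.e. the quadratic lam*E^2 + R*E + a = 0.
    The sign convention for lam is an assumption, not the paper's.
    """
    a = math.pi * c / 6.0
    disc = R * R - 4.0 * lam * a
    # pick the branch that reduces to E0(R) in the lam -> 0 limit
    return (-R + cmath.sqrt(disc)) / (2.0 * lam)

def critical_radius(lam, c=1.0):
    """Volume below which the toy deformed energy turns complex (lam > 0 here)."""
    return 2.0 * math.sqrt(lam * math.pi * c / 6.0)

if __name__ == "__main__":
    lam = 1.0
    print("critical radius:", critical_radius(lam))
    print("R above critical:", deformed_ground_energy(2.0, lam))  # real energy
    print("R below critical:", deformed_ground_energy(1.0, lam))  # complex pair
```

Above the critical radius the square root is real and the energy smoothly deforms the Casimir value; below it, the root produces a complex-conjugate pair, which is the toy analogue of the breakdown of UV physics described in the abstract.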
CommonCrawl
Hartmut Pecher, Fakultät für Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, Gaußstr. 20, 42119 Wuppertal, Germany. Received December 2020; revised May 2021; published September 2021 (early access June 2021). The local well-posedness problem for the Maxwell-Klein-Gordon system in Coulomb gauge as well as Lorenz gauge is treated in two space dimensions for data with minimal regularity assumptions. In the classical case of data in $L^2$-based Sobolev spaces $H^s$ and $H^l$ for the Klein-Gordon field $\phi$ and the electromagnetic potential $A$, respectively, the minimal regularity assumptions are $s > \frac{1}{2}$ and $l > \frac{1}{4}$, which leaves a gap of $\frac{1}{2}$ and $\frac{1}{4}$ to the critical regularity with respect to scaling, $s_c = l_c = 0$. This gap can be reduced for data in Fourier-Lebesgue spaces $\widehat{H}^{s, r}$ and $\widehat{H}^{l, r}$ to $s> \frac{21}{16}$ and $l > \frac{9}{8}$ for $r$ close to $1$, whereas the critical exponents with respect to scaling fulfill $s_c \to 1$, $l_c \to 1$ as $r \to 1$. Here $\|f\|_{\widehat{H}^{s, r}} : = \| \langle \xi \rangle^s \tilde{f}\|_{L^{r'}_{\tau \xi}} \, , \, 1 < r \le 2 \, , \, \frac{1}{r}+\frac{1}{r'} = 1 \, . $ Thus the gap is reduced for $\phi$ as well as $A$ in both gauges. Keywords: Maxwell-Klein-Gordon, local well-posedness, Lorenz gauge, Coulomb gauge. Mathematics Subject Classification: 35Q40, 35L70. Citation: Hartmut Pecher. Improved well-posedness results for the Maxwell-Klein-Gordon system in 2D. Communications on Pure & Applied Analysis, 2021, 20 (9) : 2965-2989. doi: 10.3934/cpaa.2021091 P. d'Ancona, D. Foschi and S. Selberg, Product estimates for wave-Sobolev spaces in 2+1 and 1+1 dimensions, Contemp. Math., 526 (2010), 125-150. doi: 10.1090/conm/526/10379. P. d'Ancona, D. Foschi and S.
Selberg, Null structure and almost optimal local regularity for the Dirac-Klein-Gordon system, J. Eur. Math. Soc. (JEMS), 9 (2007), 877-898. doi: 10.4171/JEMS/100. S. Cuccagna, On the local existence for the Maxwell-Klein-Gordon system in $\mathbb{R}^{3+1}$, Commun. Partial Differ. Equ., 24 (1999), 851-867. doi: 10.1080/03605309908821449. M. Czubak and N. Pikula, Low regularity well-posedness for the 2D Maxwell-Klein-Gordon equation in the Coulomb gauge, Commun. Pure Appl. Anal., 13 (2014), 1669-1683. doi: 10.3934/cpaa.2014.13.1669. D. Foschi and S. Klainerman, Bilinear space-time estimates for homogeneous wave equations, Ann. Sci. Éc. Norm. Supér., 33 (2000), 211-274. doi: 10.1016/S0012-9593(00)00109-9. V. Grigoryan and A. Nahmod, Almost critical well-posedness for nonlinear wave equation with $Q_{\mu \nu}$ null forms in 2D, Math. Res. Lett., 21 (2014), 313-332. doi: 10.4310/MRL.2014.v21.n2.a9. V. Grigoryan and A. Tanguay, Improved well-posedness for the quadratic derivative nonlinear wave equation in 2D, J. Math. Anal. Appl., 475 (2019), 1578-1595. doi: 10.1016/j.jmaa.2019.03.032. A. Grünrock, An improved local well-posedness result for the modified KdV equation, Int. Math. Res. Not., 61 (2004), 3287-3308. doi: 10.1155/S1073792804140981. A. Grünrock, On the wave equation with quadratic nonlinearities in three space dimensions, J. Hyperbolic Differ. Equ., 8 (2011), 1-8. doi: 10.1142/S0219891611002305. A. Grünrock and L. Vega, Local well-posedness for the modified KdV equation in almost critical $\hat{H}^r_s$-spaces, Trans. Amer. Math. Soc., 361 (2009), 5681-5694. doi: 10.1090/S0002-9947-09-04611-X. M. Keel, T. Roy and T. Tao, Global well-posedness of the Maxwell-Klein-Gordon equation below the energy norm, Discrete Contin. Dyn. Syst., 30 (2011), 573-621. doi: 10.3934/dcds.2011.30.573. S. Klainerman and M.
Machedon, On the Maxwell-Klein-Gordon equation with finite energy, Duke Math. J., 74 (1994), 19-44. doi: 10.1215/S0012-7094-94-07402-4. S. Klainerman and S. Selberg, Bilinear estimates and applications to nonlinear wave equations, Commun. Contemp. Math., 4 (2002), 223-295. doi: 10.1142/S0219199702000634. M. Machedon and J. Sterbenz, Almost optimal local well-posedness for the (3+1)-dimensional Maxwell-Klein-Gordon equations, J. Amer. Math. Soc., 17 (2004), 297-359. doi: 10.1090/S0894-0347-03-00445-4. H. Pecher, Low regularity local well-posedness for the Maxwell-Klein-Gordon equations in Lorenz gauge, Adv. Differ. Equ., 19 (2014), 359-386. H. Pecher, Almost optimal local well-posedness for the Maxwell-Klein-Gordon system in Fourier-Lebesgue spaces, Commun. Pure Appl. Anal., 19 (2020), 3303-3321. doi: 10.3934/cpaa.2020146. S. Selberg, Almost optimal local well-posedness of the Maxwell-Klein-Gordon equations in 1+4 dimensions, Commun. Partial Differ. Equ., 27 (2002), 1183-1227. doi: 10.1081/PDE-120004899. S. Selberg and A. Tesfahun, Finite-energy global well-posedness of the Maxwell-Klein-Gordon system in Lorenz gauge, Commun. Partial Differ. Equ., 35 (2010), 1029-1057. doi: 10.1080/03605301003717100. T. Tao, Multilinear weighted convolutions of $L^2$-functions, and applications to non-linear dispersive equations, Amer. J. Math., 123 (2001), 839-908. Magdalena Czubak, Nina Pikula. Low regularity well-posedness for the 2D Maxwell-Klein-Gordon equation in the Coulomb gauge. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1669-1683. doi: 10.3934/cpaa.2014.13.1669 Hartmut Pecher. Almost optimal local well-posedness for the Maxwell-Klein-Gordon system with data in Fourier-Lebesgue spaces. Communications on Pure & Applied Analysis, 2020, 19 (6) : 3303-3321. doi: 10.3934/cpaa.2020146 Jianjun Yuan.
On the well-posedness of Maxwell-Chern-Simons-Higgs system in the Lorenz gauge. Discrete & Continuous Dynamical Systems, 2014, 34 (5) : 2389-2403. doi: 10.3934/dcds.2014.34.2389 Hartmut Pecher. Low regularity solutions for the (2+1)-dimensional Maxwell-Klein-Gordon equations in temporal gauge. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2203-2219. doi: 10.3934/cpaa.2016034 M. Keel, Tristan Roy, Terence Tao. Global well-posedness of the Maxwell-Klein-Gordon equation below the energy norm. Discrete & Continuous Dynamical Systems, 2011, 30 (3) : 573-621. doi: 10.3934/dcds.2011.30.573 Hartmut Pecher. Local solutions with infinite energy of the Maxwell-Chern-Simons-Higgs system in Lorenz gauge. Discrete & Continuous Dynamical Systems, 2016, 36 (4) : 2193-2204. doi: 10.3934/dcds.2016.36.2193 Hartmut Pecher. Local well-posedness for the Klein-Gordon-Zakharov system in 3D. Discrete & Continuous Dynamical Systems, 2021, 41 (4) : 1707-1736. doi: 10.3934/dcds.2020338 Shinya Kinoshita. Well-posedness for the Cauchy problem of the Klein-Gordon-Zakharov system in 2D. Discrete & Continuous Dynamical Systems, 2018, 38 (3) : 1479-1504. doi: 10.3934/dcds.2018061 Isao Kato. Well-posedness for the Cauchy problem of the Klein-Gordon-Zakharov system in four and more spatial dimensions. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2247-2280. doi: 10.3934/cpaa.2016036 Hartmut Pecher. Low regularity well-posedness for the 3D Klein-Gordon-Schrödinger system. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1081-1096. doi: 10.3934/cpaa.2012.11.1081 Radhia Ghanmi, Tarek Saanouni. Well-posedness issues for some critical coupled non-linear Klein-Gordon equations. Communications on Pure & Applied Analysis, 2019, 18 (2) : 603-623. doi: 10.3934/cpaa.2019030 Nikolaos Bournaveas, Timothy Candy, Shuji Machihara. A note on the Chern-Simons-Dirac equations in the Coulomb gauge. Discrete & Continuous Dynamical Systems, 2014, 34 (7) : 2693-2701.
doi: 10.3934/dcds.2014.34.2693 Jishan Fan, Yueling Jia. Local well-posedness of the full compressible Navier-Stokes-Maxwell system with vacuum. Kinetic & Related Models, 2018, 11 (1) : 97-106. doi: 10.3934/krm.2018005 E. Compaan, N. Tzirakis. Low-regularity global well-posedness for the Klein-Gordon-Schrödinger system on $ \mathbb{R}^+ $. Discrete & Continuous Dynamical Systems, 2019, 39 (7) : 3867-3895. doi: 10.3934/dcds.2019156 Jianjun Yuan. Global solutions of two coupled Maxwell systems in the temporal gauge. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1709-1719. doi: 10.3934/dcds.2016.36.1709 Boris Kolev. Local well-posedness of the EPDiff equation: A survey. Journal of Geometric Mechanics, 2017, 9 (2) : 167-189. doi: 10.3934/jgm.2017007 Piero D'Ancona, Mamoru Okamoto. Blowup and ill-posedness results for a Dirac equation without gauge invariance. Evolution Equations & Control Theory, 2016, 5 (2) : 225-234. doi: 10.3934/eect.2016002 Hartmut Pecher. Infinite energy solutions for the (3+1)-dimensional Yang-Mills equation in Lorenz gauge. Communications on Pure & Applied Analysis, 2019, 18 (2) : 663-688. doi: 10.3934/cpaa.2019033 Pietro d'Avenia, Lorenzo Pisani, Gaetano Siciliano. Klein-Gordon-Maxwell systems in a bounded domain. Discrete & Continuous Dynamical Systems, 2010, 26 (1) : 135-149. doi: 10.3934/dcds.2010.26.135 Pierre-Damien Thizy. Klein-Gordon-Maxwell equations in high dimensions. Communications on Pure & Applied Analysis, 2015, 14 (3) : 1097-1125. doi: 10.3934/cpaa.2015.14.1097
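As a rough numerical illustration of the $\widehat{H}^{s,r}$ norm defined in the abstract above (restricted here to a function of one variable; the discretization and normalization choices are mine, not the paper's), one can approximate $\|\langle\xi\rangle^s \tilde f\|_{L^{r'}}$ with an FFT:

```python
import numpy as np

def fourier_lebesgue_norm(f_vals, dx, s, r):
    """Discrete stand-in for ||<xi>^s f_tilde||_{L^{r'}} on a uniform grid.

    Here r lies in (1, 2] and r' is its conjugate exponent; the grid
    normalizations below are a choice made for this illustration.
    """
    rp = r / (r - 1.0)                        # conjugate exponent: 1/r + 1/r' = 1
    fhat = np.fft.fft(f_vals) * dx            # Riemann-sum approximation of f_tilde
    xi = 2.0 * np.pi * np.fft.fftfreq(f_vals.size, d=dx)
    weight = (1.0 + xi**2) ** (s / 2.0)       # Japanese bracket <xi>^s
    dxi = 2.0 * np.pi / (f_vals.size * dx)    # frequency-grid spacing
    return (np.sum((weight * np.abs(fhat)) ** rp) * dxi) ** (1.0 / rp)

x = np.linspace(-20.0, 20.0, 4096, endpoint=False)
f = np.exp(-x**2)                             # a Gaussian test function
print(fourier_lebesgue_norm(f, x[1] - x[0], s=1.0, r=2.0))
```

Raising $s$ penalizes high frequencies more strongly, so the norm grows with $s$; taking $r$ toward $1$ sends $r' \to \infty$, so the norm approaches a sup-type norm of $\langle\xi\rangle^s \tilde f$, which is the regime the abstract exploits.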
CommonCrawl
Integrable systems with BMS$_{3}$ Poisson structure and the dynamics of locally flat spacetimes (1711.02646) Oscar Fuentealba, Javier Matulich, Alfredo Pérez, Miguel Pino, Pablo Rodríguez, David Tempo, Ricardo Troncoso Nov. 7, 2017 hep-th, math-ph, math.MP, gr-qc, nlin.SI We construct a hierarchy of integrable systems whose Poisson structure corresponds to the BMS$_{3}$ algebra, and then discuss its description in terms of the Riemannian geometry of locally flat spacetimes in three dimensions. The analysis is performed in terms of two-dimensional gauge fields for $isl(2,R)$. Although the algebra is not semisimple, the formulation can be carried out à la Drinfeld-Sokolov because it admits a nondegenerate invariant bilinear metric. The hierarchy turns out to be bi-Hamiltonian, labeled by a nonnegative integer $k$, and defined through a suitable generalization of the Gelfand-Dikii polynomials. The symmetries of the hierarchy are explicitly found. For $k\geq 1$, the corresponding conserved charges span an infinite-dimensional Abelian algebra without central extensions, and they are in involution; while in the case of $k=0$, they generate the BMS$_{3}$ algebra. In the special case of $k=1$, by virtue of a suitable field redefinition and time scaling, the field equations are shown to be equivalent to a specific type of the Hirota-Satsuma coupled KdV systems. For $k\geq 1$, the hierarchy also includes the so-called perturbed KdV equations as a particular case. A wide class of analytic solutions is also explicitly constructed for a generic value of $k$. Remarkably, the dynamics can be fully geometrized so as to describe the evolution of spacelike surfaces embedded in locally flat spacetimes. Indeed, General Relativity in 3D can be endowed with a suitable set of boundary conditions, so that the Einstein equations precisely reduce to the ones of the aforementioned hierarchy.
The symmetries of the integrable systems then arise as diffeomorphisms that preserve the asymptotic form of the spacetime metric, and therefore, they become Noetherian. The infinite set of conserved charges is recovered from the corresponding surface integrals in the canonical approach. Asymptotic structure of $\mathcal{N}=2$ supergravity in 3D: extended super-BMS$_3$ and nonlinear energy bounds (1706.07542) Oscar Fuentealba, Javier Matulich, Ricardo Troncoso Sept. 16, 2017 hep-th, gr-qc The asymptotically flat structure of $\mathcal{N}=(2,0)$ supergravity in three spacetime dimensions is explored. The asymptotic symmetries are spanned by an extension of the super-BMS$_3$ algebra, with two independent $\hat{u}(1)$ currents of electric and magnetic type. These currents are associated to $U(1)$ fields being even and odd under parity, respectively. Remarkably, although the $U(1)$ fields do not generate a backreaction on the metric, they provide nontrivial Sugawara-like contributions to the BMS$_3$ generators, and hence to the energy and the angular momentum. The entropy of flat cosmological spacetimes with $U(1)$ fields then acquires a nontrivial dependence on the $\hat{u}(1)$ charges. If the spin structure is odd, the ground state corresponds to Minkowski spacetime, and although the anticommutator of the canonical supercharges is linear in the energy and in the electric-like $\hat{u}(1)$ charge, the energy becomes bounded from below by the energy of the ground state shifted by the square of the electric-like $\hat{u}(1)$ charge. If the spin structure is even, the same bound for the energy generically holds, unless the absolute value of the electric-like charge is less than minus the mass of Minkowski spacetime in vacuum, so that the energy has to be nonnegative. The explicit form of the Killing spinors is found for a wide class of configurations that fulfills our boundary conditions, and they exist precisely when the corresponding bounds are saturated. 
It is also shown that the spectra with periodic or antiperiodic boundary conditions for the fermionic fields are related by spectral flow, in a similar way as it occurs for the $\mathcal{N}=2$ super-Virasoro algebra. Indeed, our super-BMS$_3$ algebra can be recovered from the flat limit of the superconformal algebra with $\mathcal{N}=(2,2)$, truncating the fermionic generators of the right copy. Log corrections to entropy of three dimensional black holes with soft hair (1705.10605) Daniel Grumiller, Alfredo Perez, David Tempo, Ricardo Troncoso June 3, 2017 hep-th, gr-qc We calculate log corrections to the entropy of three-dimensional black holes with "soft hairy" boundary conditions. Their thermodynamics possesses some special features that preclude a naive direct evaluation of these corrections, so we follow two different approaches. The first one exploits that the BTZ black hole belongs to the spectrum of Brown-Henneaux as well as soft hairy boundary conditions, so that the respective log corrections are related through a suitable change of the thermodynamic ensemble. In the second approach the analogue of modular invariance is considered for dual theories with anisotropic scaling of Lifshitz type with dynamical exponent z at the boundary. On the gravity side such scalings arise for KdV-type boundary conditions, which provide a specific 1-parameter family of multi-trace deformations of the usual AdS3/CFT2 setup, with Brown-Henneaux corresponding to z=1 and soft hairy boundary conditions to the limiting case z=0. Both approaches agree in the case of BTZ black holes for any non-negative z. Finally, for soft hairy boundary conditions we show that not only the leading term, but also the log corrections to the entropy of black flowers endowed with affine u(1) soft hair charges exclusively depend on the zero modes and hence coincide with the ones for BTZ black holes. 
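Several abstracts in this listing invoke the KdV equation and its hierarchy (KdV-type boundary conditions, perturbed KdV, Hirota-Satsuma coupled systems). As a self-contained sanity check, not tied to any of the papers above, one can verify symbolically that the standard one-soliton profile satisfies $u_t + 6 u u_x + u_{xxx} = 0$:

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
c = sp.symbols("c", positive=True)  # soliton speed

# Standard one-soliton solution of u_t + 6*u*u_x + u_xxx = 0
# (amplitude c/2, width ~ 1/sqrt(c), travelling at speed c):
u = (c / 2) * sp.sech(sp.sqrt(c) / 2 * (x - c * t)) ** 2

# Plug the profile into the KdV equation; the residual vanishes identically.
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# Spot-check numerically at a few sample points instead of trusting simplify().
for xv, tv, cv in [(0.3, 0.1, 1.0), (-1.2, 0.5, 2.0), (2.0, -0.4, 0.7)]:
    val = residual.subs({x: xv, t: tv, c: cv}).evalf()
    print(f"residual at (x={xv}, t={tv}, c={cv}): {val}")
```

The cancellation rests on the identity $\operatorname{sech}^2\theta + \tanh^2\theta = 1$; the same check extends to the higher flows of the hierarchy with their own soliton profiles.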
Soft hairy horizons in three spacetime dimensions (1611.09783) Hamid Afshar, Daniel Grumiller, Wout Merbis, Alfredo Perez, David Tempo, Ricardo Troncoso Dec. 21, 2016 hep-th, gr-qc We discuss some aspects of soft hairy black holes and a new kind of "soft hairy cosmologies", including a detailed derivation of the metric formulation, results on flat space, and novel observations concerning the entropy. Remarkably, like in the case with negative cosmological constant, we find that the asymptotic symmetries for locally flat spacetimes with a horizon are governed by infinite copies of the Heisenberg algebra that generate soft hair descendants. It is also shown that the generators of the three-dimensional Bondi-Metzner-Sachs algebra arise from composite operators of the affine u(1) currents through a twisted Sugawara-like construction. We then discuss entropy macroscopically, thermodynamically and microscopically and discover that a microscopic formula derived recently for boundary conditions associated to the Korteweg-de Vries hierarchy fits perfectly our results for entropy and ground state energy. We conclude with a comparison to related approaches. Higher Spin Black Holes with Soft Hair (1607.05360) Daniel Grumiller, Alfredo Perez, Stefan Prohazka, David Tempo, Ricardo Troncoso July 19, 2016 hep-th, gr-qc We construct a new set of boundary conditions for higher spin gravity, inspired by a recent "soft Heisenberg hair"-proposal for General Relativity on three-dimensional Anti-de Sitter. The asymptotic symmetry algebra consists of a set of affine $\hat u(1)$ current algebras. Its associated canonical charges generate higher spin soft hair. 
We focus first on the spin-3 case and then extend some of our main results to spin-$N$, many of which resemble the spin-2 results: the generators of the asymptotic $W_3$ algebra naturally emerge from composite operators of the $\hat u(1)$ charges through a twisted Sugawara construction; our boundary conditions ensure regularity of the Euclidean solutions space independently of the values of the charges; solutions, which we call "higher spin black flowers", are stationary but not necessarily spherically symmetric. Finally, we derive the entropy of higher spin black flowers, and find that for the branch that is continuously connected to the BTZ black hole, it depends only on the affine purely gravitational zero modes. Using our map to $W$-algebra currents we recover well-known expressions for higher spin entropy. We also address higher spin black flowers in the metric formalism and achieve full consistency with previous results. Boundary conditions for General Relativity on AdS$_{3}$ and the KdV hierarchy (1605.04490) Alfredo Pérez, David Tempo, Ricardo Troncoso June 17, 2016 hep-th, math-ph, math.MP, gr-qc It is shown that General Relativity with negative cosmological constant in three spacetime dimensions admits a new family of boundary conditions being labeled by a nonnegative integer $k$. Gravitational excitations are then described by "boundary gravitons" that fulfill the equations of the $k$-th element of the KdV hierarchy. In particular, $k=0$ corresponds to the Brown-Henneaux boundary conditions so that excitations are described by chiral movers. In the case of $k=1$, the boundary gravitons fulfill the KdV equation and the asymptotic symmetry algebra turns out to be infinite-dimensional, abelian and devoid of central extensions. The latter feature also holds for the remaining cases that describe the hierarchy ($k>1$). 
Our boundary conditions then provide a gravitational dual of two noninteracting left and right KdV movers, and hence, boundary gravitons possess anisotropic Lifshitz scaling with dynamical exponent $z=2k+1$. Remarkably, although the spacetimes solving the field equations are locally AdS, they possess an anisotropic scaling induced by the choice of boundary conditions. As an application, the entropy of a rotating BTZ black hole is precisely recovered from a suitable generalization of the Cardy formula that is compatible with the anisotropic scaling of the chiral KdV movers at the boundary, in which the energy of AdS spacetime with our boundary conditions depends on $z$ and plays the role of the central charge. The extension of our boundary conditions to the case of higher spin gravity and its link with different classes of integrable systems is also briefly addressed. Soft Heisenberg hair on black holes in three dimensions (1603.04824) Hamid Afshar, Stephane Detournay, Daniel Grumiller, Wout Merbis, Alfredo Perez, David Tempo, Ricardo Troncoso March 15, 2016 hep-th, gr-qc Three-dimensional Einstein gravity with negative cosmological constant admits stationary black holes that are not necessarily spherically symmetric. We propose boundary conditions for the near horizon region of these black holes that lead to a surprisingly simple near horizon symmetry algebra consisting of two affine u(1) current algebras. The symmetry algebra is essentially equivalent to the Heisenberg algebra. The associated charges give a specific example of "soft hair" on the horizon, as defined by Hawking, Perry and Strominger. We show that soft hair does not contribute to the Bekenstein-Hawking entropy of Banados-Teitelboim-Zanelli black holes and "black flower" generalizations. From the near horizon perspective the conformal generators at asymptotic infinity appear as composite operators, which we interpret in the spirit of black hole complementarity.
Another remarkable feature of our boundary conditions is that they are singled out by requiring that the whole spectrum is compatible with regularity at the horizon, regardless of the value of the global charges like mass or angular momentum. Finally, we address black hole microstates and generalizations to cosmological horizons. Asymptotically flat black holes and gravitational waves in three-dimensional massive gravity (1512.09046) Cédric Troessaert, David Tempo, Ricardo Troncoso Different classes of exact solutions for the BHT massive gravity theory are constructed and analyzed. We focus on the special case of the purely quadratic Lagrangian, whose field equations are irreducibly of fourth order and are known to admit asymptotically locally flat black holes endowed with gravitational hair. The first class corresponds to a Kerr-Schild deformation of Minkowski spacetime along a covariantly constant null vector. As in the case of General Relativity, the field equations linearize so that the solution can be easily shown to be described by four arbitrary functions of a single null coordinate. These solutions can be regarded as a new sort of pp-waves. The second class is obtained from a deformation of the static asymptotically locally flat black hole that goes along the spacelike (angular) Killing vector. Remarkably, although the deformation is not of Kerr-Schild type, the field equations also linearize, and hence the generic solution can be readily integrated. It is neither static nor spherically symmetric, being described by two integration constants and two arbitrary functions of the angular coordinate. In the static case it describes "black flowers" whose event horizons break the spherical symmetry. The generic time-dependent solution appears to describe a graviton that moves away from a black flower. Although the asymptotic behaviour of these solutions at null infinity is relaxed with respect to the one for General Relativity, the asymptotic symmetries coincide.
However, the algebra of the conserved charges corresponds to BMS$_{3}$, but devoid of central extensions. The "dynamical black flowers" are shown to possess a finite energy. The surface integrals that define the global charges also turn out to be useful in the description of the thermodynamics of solutions with event horizons. Extended anti-de Sitter Hypergravity in $2+1$ Dimensions and Hypersymmetry Bounds (1512.08603) Marc Henneaux, Alfredo Pérez, David Tempo, Ricardo Troncoso In a recent paper (JHEP 1508 (2015) 021), we have investigated hypersymmetry bounds in the context of simple anti-de Sitter hypergravity in $2+1$ dimensions. We showed that these bounds involved nonlinearly the spin-$2$ and spin-$4$ charges, and were saturated by a class of extremal black holes, which are $\frac14$-hypersymmetric. We continue the analysis here by considering $(M,N)$-extended anti-de Sitter hypergravity models, based on the superalgebra $osp(M \vert 4) \oplus osp(N \vert 4)$. The asymptotic symmetry superalgebra is then the direct sum of two copies of a $W$-superalgebra that contains $so(M)$ (or $so(N)$) Kac-Moody currents of conformal weight $1$, fermionic generators of conformal weight $5/2$ and bosonic generators of conformal weight $4$ in addition to the Virasoro generators. The nonlinear hypersymmetry bounds on the conserved charges are derived and shown to be saturated by a class of extreme hypersymmetric black holes which we explicitly construct. Asymptotically locally flat spacetimes and dynamical black flowers in three dimensions (1512.05410) Glenn Barnich, Cédric Troessaert, David Tempo, Ricardo Troncoso Dec. 16, 2015 hep-th The theory of massive gravity proposed by Bergshoeff, Hohm and Townsend is considered in the special case of the pure irreducibly fourth order quadratic Lagrangian. It is shown that the asymptotically locally flat black holes of this theory can be consistently deformed to "black flowers" that are no longer spherically symmetric.
Moreover, we construct radiating spacetimes settling down to these black flowers in the far future. The generic case can be shown to fit within a relaxed set of asymptotic conditions as compared to the ones of general relativity at null infinity, while the asymptotic symmetries remain the same. Conserved charges as surface integrals at null infinity are constructed following a covariant approach, and their algebra represents BMS$_{3}$, but without central extensions. For solutions possessing an event horizon, we derive the first law of thermodynamics from these surface integrals. Asymptotic structure of the Einstein-Maxwell theory on AdS$_{3}$ (1512.01576) Alfredo Perez, Miguel Riquelme, David Tempo, Ricardo Troncoso Dec. 4, 2015 hep-th, gr-qc The asymptotic structure of AdS spacetimes in the context of General Relativity coupled to the Maxwell field in three spacetime dimensions is analyzed. Although the fall-off of the fields is relaxed with respect to that of Brown and Henneaux, the variation of the canonical generators associated to the asymptotic Killing vectors can be shown to be finite once required to span the Lie derivative of the fields. The corresponding surface integrals then acquire explicit contributions from the electromagnetic field, and become well-defined provided they fulfill suitable integrability conditions, implying that the leading terms of the asymptotic form of the electromagnetic field are functionally related. Consequently, for a generic choice of boundary conditions, the asymptotic symmetries are broken down to $\mathbb{R}\otimes U\left(1\right)\otimes U\left(1\right)$. Nonetheless, requiring compatibility of the boundary conditions with one of the asymptotic Virasoro symmetries singles out the set to be characterized by an arbitrary function of a single variable, whose precise form depends on the choice of the chiral copy.
Remarkably, requiring the asymptotic symmetries to contain the full conformal group selects a very special set of boundary conditions that is labeled by a unique constant parameter, so that the algebra of the canonical generators is given by the direct sum of two copies of the Virasoro algebra with the standard central extension and $U\left(1\right)$. This special set of boundary conditions makes the energy spectrum of electrically charged rotating black holes well-behaved. Asymptotically flat structure of hypergravity in three spacetime dimensions (1508.04663) Nov. 4, 2015 hep-th, gr-qc The asymptotic structure of three-dimensional hypergravity without cosmological constant is analyzed. In the case of gravity minimally coupled to a spin-$5/2$ field, a consistent set of boundary conditions is proposed, being wide enough so as to accommodate a generic choice of chemical potentials associated to the global charges. The algebra of the canonical generators of the asymptotic symmetries is given by a hypersymmetric nonlinear extension of BMS$_{3}$. It is shown that the asymptotic symmetry algebra can be recovered from a subset of a suitable limit of the direct sum of the W$_{\left(2,4\right)}$ algebra with its hypersymmetric extension. The presence of hypersymmetry generators allows one to construct bounds for the energy, which turn out to be nonlinear and saturate for spacetimes that admit globally-defined "Killing vector-spinors". The null orbifold or Minkowski spacetime can then be seen as the corresponding ground state in the case of fermions that fulfill periodic or anti-periodic boundary conditions, respectively. The hypergravity theory is also explicitly extended so as to admit parity-odd terms in the action. It is then shown that the asymptotic symmetry algebra includes an additional central charge, being proportional to the coupling of the Lorentz-Chern-Simons form.
The generalization of these results in the case of gravity minimally coupled to arbitrary half-integer spin fields is also carried out. The hypersymmetry bounds are found to be given by a suitable polynomial of degree $s+\frac{1}{2}$ in the energy, where $s$ is the spin of the fermionic generators. Extension of the Poincar\'e group with half-integer spin generators: hypergravity and beyond (1505.06173) An extension of the Poincar\'e group with half-integer spin generators is explicitly constructed. We start discussing the case of three spacetime dimensions, and as an application, it is shown that hypergravity can be formulated so as to incorporate this structure as its local gauge symmetry. Since the algebra admits a nontrivial Casimir operator, the theory can be described in terms of gauge fields associated to the extension of the Poincar\'e group with a Chern-Simons action. The algebra is also shown to admit an infinite-dimensional non-linear extension, that in the case of fermionic spin-$3/2$ generators, corresponds to a subset of a contraction of two copies of WB$_2$. Finally, we show how the Poincar\'e group can be extended with half-integer spin generators for $d\geq3$ dimensions. Conserved charges and black holes in the Einstein-Maxwell theory on AdS$_{3}$ reconsidered (1509.01750) Stationary circularly symmetric solutions of General Relativity with negative cosmological constant coupled to the Maxwell field are analyzed in three spacetime dimensions. Taking into account that the fall-off of the fields is slower than the standard one for a localized distribution of matter, it is shown that, by virtue of a suitable choice of the electromagnetic Lagrange multiplier, the action attains a bona fide extremum provided the asymptotic form of the electromagnetic field fulfills a nontrivial integrability condition. 
As a consequence, the mass and the angular momentum become automatically finite, without the need of any regularization procedure, and they generically acquire contributions from the electromagnetic field. Therefore, unlike the higher-dimensional case, it is found that the precise value of the mass and the angular momentum explicitly depends on the choice of boundary conditions. It can also be seen that requiring compatibility of the boundary conditions with the Lorentz and scaling symmetries of the class of stationary solutions, singles out a very special set of "holographic boundary conditions" that is described by a single parameter. Remarkably, in stark contrast with the somewhat pathological behaviour found in the standard case, for the holographic boundary conditions (i) the energy spectrum of an electrically charged (rotating) black hole is nonnegative, and (ii) for a fixed value of the mass, the electric charge is bounded from above. Super-BMS$_{3}$ invariant boundary theory from three-dimensional flat supergravity (1510.08824) Glenn Barnich, Laura Donnay, Javier Matulich, Ricardo Troncoso Oct. 29, 2015 hep-th The two-dimensional super-BMS$_{3}$ invariant theory dual to three-dimensional asymptotically flat $\mathcal{N}=1$ supergravity is constructed. It is described by a constrained or gauged chiral Wess-Zumino-Witten action based on the super-Poincar\'e algebra in the Hamiltonian, respectively the Lagrangian formulation, whose reduced phase space description corresponds to a supersymmetric extension of flat Liouville theory. Hypersymmetry bounds and three-dimensional higher-spin black holes (1506.01847) Marc Henneaux, Alfredo Perez, David Tempo, Ricardo Troncoso We investigate the hypersymmetry bounds on the higher spin black hole parameters that follow from the asymptotic symmetry superalgebra in higher-spin anti-de Sitter gravity in three spacetime dimensions. We consider anti-de Sitter hypergravity for which the analysis is most transparent. 
This is a $osp(1\vert 4) \oplus osp(1\vert 4)$ Chern-Simons theory which contains, besides a spin-$2$ field, a spin-$4$ field and a spin-$5/2$ field. The asymptotic symmetry superalgebra is then the direct sum of two-copies of the hypersymmetric extension $W_{(2,\frac52,4)}$ of $W_{(2,4)}$, which contains fermionic generators of conformal weight $5/2$ and bosonic generators of conformal weight $4$ in addition to the Virasoro generators. Following standard methods, we derive bounds on the conserved charges from the anticommutator of the hypersymmetry generators. The hypersymmetry bounds are nonlinear and are saturated by the hypersymmetric black holes, which turn out to possess $1/4$-hypersymmetry and to be "extreme", where extremality can be defined in terms of the entropy: extreme black holes are those that fulfill the extremality bounds beyond which the entropy ceases to be a real function of the black hole parameters. We also extend the analysis to other $sp(4)$-solitonic solutions which are maximally (hyper)symmetric. Higher spin extension of cosmological spacetimes in 3D: asymptotically flat behaviour with chemical potentials and thermodynamics (1412.1464) Javier Matulich, Alfredo Perez, David Tempo, Ricardo Troncoso A generalized set of asymptotic conditions for higher spin gravity without cosmological constant in three spacetime dimensions is constructed. They include the most general temporal components of the gauge fields that manifestly preserve the original asymptotic higher spin extension of the BMS$_{3}$ algebra, with the same central charge. By virtue of a suitable permissible gauge choice, it is shown that this set can be directly recovered as a limit of the boundary conditions that have been recently constructed in the case of negative cosmological constant, whose asymptotic symmetries are spanned by two copies of the centrally-extended W$_{3}$ algebra. 
Since the generalized asymptotic conditions allow chemical potentials conjugate to the higher spin charges to be incorporated, a higher spin extension of locally flat cosmological spacetimes becomes naturally included within the set. It is shown that their thermodynamic properties can be successfully obtained exclusively in terms of gauge fields and the topology of the Euclidean manifold, which is shown to be that of a solid torus, but with reversed orientation as compared with that of the black holes. It is also worth highlighting that regularity of the fields can be ensured through a procedure that does not require an explicit matrix representation of the entire gauge group. In a few words, we show that the temporal components of generalized dreibeins can be consistently gauged away, which partially fixes the chemical potentials, so that the remaining conditions can just be obtained by requiring the holonomy of the generalized spin connection along a thermal circle to be trivial. The extension of the generalized asymptotically flat behaviour to the case of spins $s\geq2$ is also discussed. Asymptotic symmetries and dynamics of three-dimensional flat supergravity (1407.4275) A consistent set of asymptotic conditions for the simplest supergravity theory without cosmological constant in three dimensions is proposed. The canonical generators associated to the asymptotic symmetries are shown to span a supersymmetric extension of the BMS$_3$ algebra with an appropriate central charge. The energy is manifestly bounded from below with the ground state given by the null orbifold or Minkowski spacetime for periodic, respectively antiperiodic boundary conditions on the gravitino. These results are related to the corresponding ones in AdS$_3$ supergravity by a suitable flat limit.
The analysis is generalized to the case of minimal flat supergravity with additional parity odd terms for which the Poisson algebra of canonical generators form a representation of the super-BMS$_3$ algebra with an additional central charge. Brief review on higher spin black holes (1402.1465) Alfredo Perez, David Tempo, Ricardo Troncoso May 12, 2014 hep-th, gr-qc We review some relevant results in the context of higher spin black holes in three-dimensional spacetimes, focusing on their asymptotic behaviour and thermodynamic properties. For simplicity, we mainly discuss the case of gravity nonminimally coupled to spin-3 fields, being nonperturbatively described by a Chern-Simons theory of two independent sl(3,R) gauge fields. Since the analysis is particularly transparent in the Hamiltonian formalism, we provide a concise discussion of their basic aspects in this context; and as a warming up exercise, we briefly analyze the asymptotic behaviour of pure gravity, as well as the BTZ black hole and its thermodynamics, exclusively in terms of gauge fields. The discussion is then extended to the case of black holes endowed with higher spin fields, briefly signaling the agreements and discrepancies found through different approaches. We conclude explaining how the puzzles become resolved once the fall off of the fields is precisely specified and extended to include chemical potentials, in a way that it is compatible with the asymptotic symmetries. Hence, the global charges become completely identified in an unambiguous way, so that different sets of asymptotic conditions turn out to contain inequivalent classes of black hole solutions being characterized by a different set of global charges. 
Generalized Black Holes in Three-dimensional Spacetime (1404.3305) Claudio Bunster, Marc Henneaux, Alfredo Perez, David Tempo, Ricardo Troncoso April 12, 2014 hep-th, gr-qc Three-dimensional spacetime with a negative cosmological constant has proven to be a remarkably fertile ground for the study of gravity and higher spin fields. The theory is topological and, since there are no propagating field degrees of freedom, the asymptotic symmetries become all the more crucial. For pure (2+1) gravity they consist of two copies of the Virasoro algebra. There exists a black hole which may be endowed with all the corresponding charges. The pure (2+1) gravity theory may be reformulated in terms of two Chern-Simons connections for sl(2,R). An immediate generalization containing gravity and a finite number of higher spin fields may be achieved by replacing sl(2,R) by sl(3,R) or, more generally, by sl(N,R). The asymptotic symmetries are then two copies of the so-called W_N algebra, which contains the Virasoro algebra as a subalgebra. The question then arises as to whether there exists a generalization of the standard pure gravity (2+1) black hole which would be endowed with all the W_N charges. The original pioneering proposal of a black hole along this line for N=3 turns out, as shown in this paper, to actually belong to the so called "diagonal embedding" of sl(2,R) in sl(3,R), and it is therefore endowed with charges of lower rather than higher spins. In contradistinction, we exhibit herein the most general black hole which belongs to the "principal embedding". It is endowed with higher spin charges, and possesses two copies of W_3 as its asymptotic symmetries. The most general diagonal embedding black hole is studied in detail as well, in a way in which its lower spin charges are clearly displayed. The extension to N>3 is also discussed. A general formula for the entropy of a generalized black hole is obtained in terms of the on-shell holonomies. 
Chemical potentials in three-dimensional higher spin anti-de Sitter gravity (1309.4362) Feb. 7, 2014 hep-th, gr-qc We indicate how to introduce chemical potentials for higher spin charges in higher spin anti-de Sitter gravity in a manner that manifestly preserves the original asymptotic W-symmetry. This is done by switching on a non-vanishing component of the connection along the temporal (thermal) circles. We first recall the procedure in the pure gravity case (no higher spin) where the only "chemical potentials" are the temperature and the chemical potential associated with the angular momentum. We then generalize to the higher spin case. We find that there is no tension with the W(N) or W(infinity) asymptotic algebra, which is obviously unchanged by the introduction of the chemical potentials. Our argument is non-perturbative. Asymptotically flat spacetimes in three-dimensional higher spin gravity (1307.5651) Hernan A. Gonzalez, Javier Matulich, Miguel Pino, Ricardo Troncoso Sept. 6, 2013 hep-th, gr-qc A consistent set of asymptotic conditions for higher spin gravity in three dimensions is proposed in the case of vanishing cosmological constant. The asymptotic symmetries are found to be spanned by a higher spin extension of the BMS3 algebra with an appropriate central extension. It is also shown that our results can be recovered from the ones recently found for asymptotically AdS3 spacetimes by virtue of a suitable gauge choice that allows to perform the vanishing cosmological constant limit. Higher spin black hole entropy in three dimensions (1301.0847) A generic formula for the entropy of three-dimensional black holes endowed with a spin-3 field is found, which depends on the horizon area A and its spin-3 analogue, given by the reparametrization invariant integral of the induced spin-3 field at the spacelike section of the horizon. 
From this result it can be shown that the absolute value of the spin-3 analogue of the area has to be bounded from above by A/3^(1/2). The entropy formula is constructed by requiring the first law of thermodynamics to be fulfilled in terms of the global charges obtained through the canonical formalism. For the static case, in the weak spin-3 field limit, our expression for the entropy reduces to the result found by Campoleoni, Fredenhagen, Pfenninger and Theisen, which has been recently obtained through a different approach. Higher spin gravity in 3D: black holes, global charges and thermodynamics (1207.2844) Global charges and thermodynamic properties of three-dimensional higher spin black holes that have been recently found in the literature are revisited. Since these solutions possess a relaxed asymptotically AdS behavior, following the canonical approach, it is shown that the global charges, and in particular the mass, acquire explicit nontrivial contributions given by nonlinear terms in the deviations with respect to the reference background. It is also found that there are cases for which the first law of thermodynamics is fulfilled in the canonical ensemble, i.e., without work terms associated to the presence of higher spin fields, and remarkably, the semiclassical higher spin black hole entropy is exactly reproduced from Cardy formula. Hairy black hole entropy and the role of solitons in three dimensions (1112.6198) Francisco Correa, Cristian Martinez, Ricardo Troncoso Feb. 13, 2012 hep-th, gr-qc Scalar fields minimally coupled to General Relativity in three dimensions are considered. For certain families of self-interaction potentials, new exact solutions describing solitons and hairy black holes are found. It is shown that they fit within a relaxed set of asymptotically AdS boundary conditions, whose asymptotic symmetry group coincides with the one for pure gravity and its canonical realization possesses the standard central extension. 
Solitons are devoid of integration constants and their (negative) mass, fixed and determined by nontrivial functions of the self-interaction couplings, is shown to be bounded from below by the mass of AdS spacetime. Remarkably, assuming that a soliton corresponds to the ground state of the sector of the theory for which the scalar field is switched on, the semiclassical entropy of the corresponding hairy black hole is exactly reproduced from the Cardy formula once the nonvanishing lowest eigenvalues of the Virasoro operators are taken into account, being precisely given by the ones associated to the soliton. This provides further evidence of the robustness of previous results, for which the ground state energy instead of the central charge appears to play the leading role in reproducing the hairy black hole entropy from a microscopic counting.
Is reducing speed the right mitigating action to limit harmful emissions of seagoing RoRo cargo carriers? Koos Frouws1 The Energy Efficiency Design Index (EEDI) is an index indicating the CO2 emission per transport effort, for example the emitted tons of CO2 per ton-mile, to be calculated for each new design. The required index for new designs will be gradually lowered in the coming years, resulting in either improved energy efficiencies or speed reductions. RoRo carriers are keystones in shore-based logistical systems and are as a result diverse in design speeds and main dimension ratios. This diversity could be threatened by the relative simplicity of the EEDI regulations. This article aims to estimate the influence of the EEDI approach on 30 existing RoRo cargo carriers. The attained EEDIs per design are determined. The costs per transport effort are also calculated, based on the private costs as well as on the social costs, both at the economically optimal speeds for a uniformly applied sailing profile. The social costs are based on all emissions, because the number of Environmental Special Areas is limited and the impact of speed reductions will not be limited to climate change. The speed reductions expected for these designs on the basis of the EEDI, together with the speed reductions required when the total social costs are taken into account, are used to estimate the effectiveness of the EEDI regulations. Among other things, it is concluded that the existing diversity in service speeds and main dimension ratios will be jeopardized by the EEDI regulations. Emissions from ships are mostly emitted in a global context, far from land. This, combined with the fact that a new regulation requires the approval of many countries, often results in a relatively slow process when developing and introducing emission-reducing measures for the maritime world.
Nevertheless, emission reduction has the attention of the IMO: at first the sulphur and NOx emission reductions were regulated in the SECA areas, and now there are the new regulations for CO2 emissions, regulated by means of the Energy Efficiency Design Index (EEDI). The fact that this has happened reflects the worldwide acceptance of the necessity of emission-reducing measures. The essential goal of the EEDI regulations for newbuildings is decreasing the CO2 emissions per unit of transport capacity. The latter can in principle be a ton-mile or a cubic-meter-mile, depending on the specific gravity of the cargo. The principal formula used to express the index (Germanischer Lloyd, 2013):
$$ EEDI_{attained}=\frac{\sum f_{j}\left(P_{ME(i)}\times C_{FME(i)}\times SFC_{ME(i)}\right)+P_{AE}\times C_{FAE}\times SFC_{AE}}{f_{i}\times f_{l}\times f_{w}\times f_{c}\times \mathrm{Capacity}\times V_{ref}} $$
with PME = 75 % of the MCR of the main engine, the power required to create Vref at trial condition. The factor fj is a ship-type or design-specific correction factor. The summation means that each main engine has to be taken into account separately. The parameters used in the equation are PME, 75 % of the installed power; CFME, the CO2 emissions in tons per ton of fuel, per type of fuel; and the SFC (specific fuel consumption in grams per kWh) of that specific engine. To this, the emissions from the auxiliary engines are added, determined by a statistically defined power consumption based on the installed main power.
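As an illustration of how the pieces of this formula combine, the attained EEDI of a single-main-engine ship can be sketched as below. All numerical inputs are hypothetical placeholders, not data for any real vessel, and the correction factors fj, fi, fl, fw and fc are simply set to 1:

```python
# Minimal sketch of the attained-EEDI formula for a single main engine.
# All inputs are illustrative assumptions; correction factors are set to 1.

def attained_eedi(p_me_kw, sfc_me, p_ae_kw, sfc_ae,
                  capacity_dwt, v_ref_kn,
                  cf_me=3.114, cf_ae=3.206, f_j=1.0, f_corr=1.0):
    """CO2 per unit transport work, in g CO2 / (t * nm).

    p_me_kw -- 75 % of the main-engine MCR (kW)
    sfc_me  -- specific fuel consumption of the main engine (g/kWh)
    cf_me   -- t CO2 per t fuel (about 3.114 for HFO, 3.206 for MGO)
    """
    numerator = f_j * p_me_kw * cf_me * sfc_me + p_ae_kw * cf_ae * sfc_ae
    return numerator / (f_corr * capacity_dwt * v_ref_kn)

# Hypothetical RoRo-like numbers: 9000 kW at 75 % MCR, 190 g/kWh,
# 450 kW auxiliary load at 210 g/kWh, 12000 t deadweight, 20 kn.
eedi = attained_eedi(9000, 190, 450, 210, 12000, 20.0)
print(round(eedi, 1))  # g CO2 per ton-mile
```

The structure makes the two levers visible at a glance: the index improves either through the numerator (less power, better SFC, cleaner fuel) or through the denominator (more capacity, or a lower reference speed).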
The rules describe corrections for the usage of PTOs (Power Take-Offs) on the main engines; these corrections are not reflected in the formula above. For RoRo vessels the factor fj is defined as follows:
$$ f_{j}=f_{jRoRo}=\frac{1}{Fn^{\alpha}\times\left(L_{pp}/B\right)^{\beta}\times\left(B_{m}/T_{s}\right)^{\gamma}\times\left(L_{pp}/\nabla^{1/3}\right)^{\delta}} $$
This function with the parameters shown above compensates for off-standard main dimension ratios and design speed variations, which are often the case with RoRo vessels due to the specific services and conditions in which these vessels operate. Corrections of this kind are not applied for bulk carriers. The capacity correction factors in the EEDI formula are:
fi : correction for structural deadweight-reducing design features (safety, corrosion, ice class and fatigue); the RoRo EEDI has no ice-class correction.
fl : correction factor for general cargo ships.
fw : correction factor for the decreased speed in sea conditions at Beaufort 6, used to calculate the EEDI weather; it has to be set to 1 for the normally attained EEDI calculation.
fc : deadweight correction factor for gas and chemical tankers.
The attained EEDI has to be less than the reference line expressed by the following formula:
$$ EEDI_{reference}=a\,b^{-c} $$
with a and c constants defined per ship type and b the design deadweight of the vessel. It is the intention to lower these reference lines in steps, reaching a total reduction of 30 % by 2025. It has to be noted that this approach is focused on the design speed. The design speed in this paper is the speed with a clean hull, without added resistance due to waves, and at approximately 75 % of the maximum continuous rating of the engine.
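The fj correction and the reference-line check can be sketched as follows. The exponents and the reference-line constants used here are placeholders in the spirit of the regulation; the binding values are those tabulated per ship type in the IMO MEPC resolutions:

```python
# Sketch of the RoRo f_j correction and the reference-line compliance
# check.  Exponents and reference-line constants are placeholders; the
# binding values are those tabulated in the IMO MEPC resolutions.

def f_j_roro(fn, lpp, b, t, displ_m3,
             alpha=2.5, beta=0.75, gamma=0.75, delta=1.0):
    """Compensates for off-standard speed and main-dimension ratios."""
    return 1.0 / (fn**alpha * (lpp / b)**beta * (b / t)**gamma
                  * (lpp / displ_m3**(1.0 / 3.0))**delta)

def reference_eedi(dwt, a=1400.0, c=0.5, reduction=0.0):
    """Reference line a * dwt**(-c), lowered by the phased reduction."""
    return a * dwt**(-c) * (1.0 - reduction)

def complies(attained, dwt, reduction):
    return attained <= reference_eedi(dwt, reduction=reduction)

# The same design can pass the initial line yet fail the 30 %-reduced one.
print(complies(18.0, 5000, 0.00))  # True with these placeholder constants
print(complies(18.0, 5000, 0.30))  # False: the tightened line is not met
```

The last two lines illustrate the central tension of the paper: a vessel unchanged between 2015 and 2025 moves from compliant to non-compliant purely because the reference line is lowered.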
The 25 % reduction is composed of two chosen factors: one to reduce the maintenance bill, often 10 %, and a sea margin, most often 15 %. These figures suggest that one can simply add them; the calculation is, however, more complex. Intermediate speeds and their corresponding fuel consumption are an area which can often be substantially improved, sometimes with large effects on the voyage costs. The EEDI seems to be a goal-based regulation, not specifying the means. However, a lack of real means will result in speed reductions, actually turning the regulation into a prescriptive one. Also, the relative simplicity of the correction factors for off-standard main dimensions and speed, compared to power prediction methods like that of Holtrop & Mennen (Holtrop & Mennen, 1984), still used today in the conceptual design phase, could result in undesirable speed reductions for some off-standard designs typical for RoRo carriers. The consequence would then be more ships and/or fewer products shipped. These questions will be addressed. Speed reductions based on the minimum social costs per ton-mile could differ substantially from the speed reductions required to bring some designs in line with the EEDI regulations. In other words, does this measure reflect the impact of the social costs? Are the proposed reductions in required power achievable? If not, speed reduction is the only answer. Is speed reduction defensible? For example, the required power for bulk carriers has dropped by 40 % in 30 years; of this 40 %, 20 % was achieved by an improvement of the engine performance and the other 20 % by hydrodynamic improvements (Frouws, 2012). The EEDI regulations require a drop in required power of 25 % in 10 years. This paper tries to evaluate these aspects from the perspective of society as well as that of the shipowner. The study focuses on RoRo cargo carriers, a ship type with an interesting variation in main dimension ratios and design speeds.
To be able to deal with all the research questions, the following information and calculated performance indicators are thought to be necessary: A description and interpretation of the emissions and their impacts on the environment, in terms of marginal external costs, in combination with emission aspects like fund and stock pollutants, but also the possibilities for their abatement. Determination of the attained EEDI for a set of 30 diverse existing RoRo cargo carriers, for which the information is well known and thoroughly gathered and composed; these attained EEDIs will be set out against the reference lines planned for the future. Secondly, the determination of the required speed in order to achieve the attained EEDI planned for 2025; in this way one is testing existing designs on their ability to fulfil the future regulations for new designs. Calculation of the costs per ton-mile and lane-meter-mile at the economic speed per vessel, the economic speed being the speed with the lowest costs per ton-mile and/or lane-meter-mile. The port time is set at 40 % of the total time. The calculations are executed with the NPV method and include all costs except loading and discharging costs. The voyage, running and capital costs are modelled as functions of relevant parameters of the vessel (Aalbers, 2000). The designs are actually regarded as newbuildings: first the current newbuilding value is determined, after which a net present value calculation of all the costs is made per vessel, assuming a lifetime of 30 years with 60 % own capital. These calculations were repeated, but then based on the social costs, which means an increase in the voyage costs; the latter depends strongly on the speed. Including the marginal external costs of all emissions is the only way to compare the consequences of emissions from several sources in a consistent way.
However, the law deals with these emissions in separate regulations, which has the disadvantage that the overall, more holistic picture could be lost. Setting the CO2 emission regulation in the total perspective of all emissions makes it easier to judge the quality of the EEDI approach. There are other aspects which have to be taken into account, for example the ease of dealing with a specific emission in combination with the long-term effects. This aspect is approached in a more qualitative way. The economic calculation model used determines the costs of transport expressed in euro per ton-mile, using 80 % of the deadweight on average and being in port 40 % of the time. Also the costs per lane-meter-mile are determined, based on an average filling rate of 60 % of the available lane-meters and again a port time of 40 % of the year. These are all the costs that are normally covered by a voyage charter. The existing vessels in the database are regarded as newbuildings in the calculations, despite their age. By this approach, and by excluding price effects, a fair design comparison becomes possible. In reality, varying second-hand prices and sailing profiles can change the real transport costs per ton-mile dramatically. The emission rates and marginal external costs applied in these calculations are shown in Table 1.
Table 1 Marginal external costs and emission rates as a function of the tons of fuel
The external cost rates are averaged values for European sea areas and are based on the Clean Air for Europe (CAFE) Program 2005. The external cost chosen for CO2 is the lowest estimate; known upper values are at least 80 euro, which shows the difficulty of estimating the effects of climate change. In order to make a fair economic comparison between the designs, it is important to determine those speeds at which the costs per ton-mile or lane-meter-mile are the lowest, which is defined here as the economic speed.
Otherwise the effects of the large design speed variations in the database would prevail. There is a balance between the capex and opex: while the opex per ton-mile increases steeply with increasing speed, the capex per ton-mile decreases. The economic optimum speed is determined, enabling us to compare the designs at their economically optimal operational performance. The considerations and calculations below are based on HFO as main engine fuel and MGO for the auxiliaries. These fuels, when burned in an engine, emit CO2, SO2, PM, HC, CO and NOx. The sulphur content of HFO is assumed to be 2.5 %; the PM emission is strongly coupled to the sulphur content. Table 2 shows the share of certain emission combinations in the external costs as a percentage of the (average) fuel costs. The fuel costs are based on figures from April 2015, the NOx emission rates on the Tier 2 maximum allowable emission (see Table 1).
Table 2 Average external costs as a function of the average fuel costs
The external costs of the SOx and NOx emissions are larger than the external costs of the CO2 emissions, partly due to the low estimate applied. The SOx and NOx emissions are dealt with in the regulations for Sulphur Emission Control Areas (SECA) and/or Emission Control Areas (ECA); the CO2 emissions are approached worldwide. The reason is that the acid rain problem (SOx and NOx) is more locally oriented than GHG emissions like CO2 and methane. It has to be kept in mind that the external cost figures used are based on estimations of the consequences of emissions at sea in the coastal areas of Europe. The majority of the costs included occur on shore, where the fund pollutants tend to change into stock pollutants.
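The mechanics behind Table 2 reduce to a weighted sum: tons of pollutant emitted per ton of fuel, multiplied by the marginal external cost per ton of pollutant. The rates below are illustrative placeholders in the spirit of the CAFE-based figures, not the exact entries of Table 1:

```python
# Illustrative external-cost calculation per ton of HFO burned.
# Both dictionaries contain placeholder values, not the exact Table 1 data.

emission_rate = {    # t pollutant per t fuel (2.5 % sulphur HFO, Tier 2 NOx)
    "CO2": 3.114,
    "SOx": 0.050,    # roughly 2 x the 0.025 sulphur mass fraction
    "NOx": 0.090,
    "PM":  0.007,
}
external_cost = {    # euro per t pollutant (low estimates)
    "CO2": 25.0,
    "SOx": 5600.0,
    "NOx": 4150.0,
    "PM":  28000.0,
}

cost_per_ton_fuel = sum(emission_rate[p] * external_cost[p]
                        for p in emission_rate)
co2_share = emission_rate["CO2"] * external_cost["CO2"] / cost_per_ton_fuel
print(round(cost_per_ton_fuel), round(co2_share, 2))
```

Even with placeholder numbers, the SOx and NOx contributions together dominate the CO2 contribution when a low CO2 cost estimate is used, which is the qualitative message of Table 2.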
Many people have forgotten that the acid rain problem of the 1970s was largely solved by a cap-and-trade approach, resulting for example in replacing sulphur-containing coal as a fuel and/or the application of sulphur scrubbers in shore-based power plants, in combination with SCR systems to abate the NOx emissions. In other words, the problem was reversible up to a certain level. The CO2 emissions, the cause of climate change, cannot be reduced easily: burning hydrocarbons means CO2 emissions. This tends to become a major problem. Emitting CO2 in huge amounts since 1800 has increased the content of CO2 in the atmosphere. The combination of an increasing world population, a decreasing usable agricultural area and water shortages due to the GHG effects is not a pleasant prospect, and tends to increase the marginal costs substantially in the future. The EEDI will lower the acceleration of the emissions but not decrease the flow rate. The study by the IMO (Bazari & Longva, 2011), page 24, clearly shows that the emission rate is not expected to decrease, because of the estimated hydrocarbon fuel required to serve the world population.
The "attained EEDI" for the dataset of existing vessels
The dataset used is based on multiple sources, for example (RINA 1990 to RINA 2011), several publications about these vessels and internet sources, and is believed to be as precise as possible for publicly available information sources. The combinations of design speed and required power are based on the given information on the corresponding percentage of the MCR of the main engine, often 90 %, and the sea margin applied in the design, mostly 15 %. In other words, the design speed in the database is reached with approximately 75 % of the MCR. The attained EEDI calculation requires exactly 75 %. Figure 1 shows the calculated attained EEDIs for the dataset based on the design speeds.
Calculated attained EEDIs
Six of the vessels in the figure were designed and built by the Flensburger Schiffbau-Gesellschaft (FSG).
Several publications about their design process were used to get the database correct, the publication about the 142-meter vessel with 4140 tons deadweight being one of them (Tobias Haack, 2009). This vessel is off standard in terms of its number of decks (4) in relation to its length; more decks require a certain breadth. It had to be short and wide because of the size of the berth in the port. On the other hand, the research during the design phase involved substantial CFD calculations in order to improve the fuel consumption. The attained EEDI for this off-standard design is not good: an identical design in 2025 should have an EEDI just below 16 instead of the attained EEDI of close to 23. This would mean a power reduction of 30 % or a speed reduction of 3.6 knots in order to comply. Its design speed was 21.6 knots, needed to keep up with the required schedule. A power reduction of 30 % is not realistic, taking into account the effort already made to reduce its fuel consumption. It seems that off-standard length-breadth and deadweight-volume ratios are not taken into account correctly by the attained EEDI calculations. It should be realised that one of the major reasons for this poor EEDI is the fact that the EEDI is based on the deadweight, while this vessel, in this trade, is oriented more towards selling lane-meters, which increases the "garage" volume while increasing its lightweight and decreasing its deadweight. The reference line of 2015 is achieved easily by the majority of the designs, indicating that the remarks made in the assessment of the EEDI (Bazari & Longva, 2011) are correct. They state that a 30 % reduction will be achievable, firstly because the starting reference line is above the average and secondly because there are enough possibilities to improve the designs. Looking at the performance of the FSG vessels, it seems that it is more the application of the available knowledge that can do the job than a "to be expected innovation".
However, the rate of improvement required by the 2025 reference line is not foreseeable without radical innovation. The EEDI, if well applied, will mainly force ship-owners to design vessels according to the state of the art. If that recipe no longer works, there is only one alternative: speed reduction. The question can also be considered from a different point of view: which design speed would be necessary to reach the 2025 EEDI level? The results of these calculations are summarised in Fig. 2. Required design speed to get EEDI approval in 2025 (existing vessels) The round markers reflect the original design speeds, the triangles the speeds required to comply with the 2025 EEDI reference line. There is one current vessel that does not require a speed reduction; that design is from FSG. The average required speed reduction is 2.74 knots from an original average speed of 20.49 knots, resulting in a new average speed of 17.75 knots. This may be positive from the environmental point of view, but it will affect the schedules. As soon as the latter aspect affects the competitiveness with, for example, trucks, it becomes questionable in terms of the overall environmental effect. The variance in design speed is mainly due to schedule requirements, for example daylight-only operations, or night-only operations for sleeping truck drivers. For this vessel type it might sometimes be wiser to exclude design-speed considerations from the attained EEDI. The graph also shows the large spread in design speeds, indicating the very different design goals resulting from the different businesses served. The economic calculations of the marginal private and social costs are summarised in Fig. 3. The cost items involved are based on the costs covered by voyage charters. It has to be realised that these calculations are executed at the economic speed, which differs between the private-cost and the social-cost case.
Consequently, the transport capacity per year differs as well. Marginal private and social costs per ton-mile and lane-meter-mile at economic speeds Figure 3 clearly indicates the substantial impact of the marginal external costs on top of the marginal private costs. Secondly, it has to be realised that these costs are achieved at the economic speeds. For a given ship type, the EEDI is based on one measure of "capacity", normally tons deadweight or volume; the latter is proportional to lane-meters in the case of RoRo carriers. Taking again the 142 m design discussed at Fig. 1 (Tobias Haack, 2009) (see vertical dotted line), some remarkable effects can be seen. Both the marginal private and the marginal social costs per ton-mile are relatively high, yet the marginal costs per lane-meter-mile are extremely low, even in comparison with larger vessels. In combination with Fig. 2, one can conclude that such a design in 2025 will be forced to increase its private costs by decreasing its design speed from 21 knots to 17 knots on the basis of the EEDI. The economic speed of this vessel based on marginal private costs is 19.5 knots; based on social costs it is a mere 13.3 knots (see Fig. 4). This line service is a connection between Ireland and England, and the schedule is dictated by logistical reasons. Economic speed at marginal private costs and social costs The corresponding economic speeds at private and social costs are shown in Fig. 4. The economic speeds are determined by iteration, searching for the speed with the lowest overall costs per ton-mile. The calculated economic speed based on the marginal social costs is on average 72 % of the economic speed based on the marginal private costs. This means that, from the societal point of view, a speed reduction of around 25 % is defensible. The average economic speed of the vessels in the database based on private costs is 19.08 knots; based on social costs it is 13.6 knots, a difference of 5.48 knots.
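The iteration described above, scanning candidate speeds for the lowest overall cost per ton-mile, can be sketched as follows. The cubic fuel-speed law, the voyage model and all cost figures are illustrative assumptions; the author's actual cost model is richer:

```python
def cost_per_ton_mile(v_kn, fuel_price, ext_cost_per_t_fuel, daily_fixed_cost,
                      fuel_t_per_day_design, v_design, cargo_t,
                      include_external=False):
    """Daily cost divided by daily transport work (ton-miles)."""
    fuel_per_day = fuel_t_per_day_design * (v_kn / v_design) ** 3  # cubic law
    cost = daily_fixed_cost + fuel_per_day * fuel_price
    if include_external:
        cost += fuel_per_day * ext_cost_per_t_fuel  # stylized external charge
    return cost / (24.0 * v_kn * cargo_t)

def economic_speed(include_external, **kw):
    """Grid search from 8 to 25 knots for the lowest cost per ton-mile."""
    speeds = [8.0 + 0.01 * i for i in range(1701)]
    return min(speeds, key=lambda v: cost_per_ton_mile(
        v, include_external=include_external, **kw))

params = dict(fuel_price=500.0, ext_cost_per_t_fuel=300.0,
              daily_fixed_cost=30000.0, fuel_t_per_day_design=50.0,
              v_design=20.0, cargo_t=4000.0)
v_private = economic_speed(False, **params)
v_social = economic_speed(True, **params)
print(v_private, v_social)  # the social optimum is the slower of the two
```

Charging external costs per ton of fuel acts exactly like a fuel-price increase, which is why internalising them pushes the optimum towards lower speeds, as in Fig. 4.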
The difference with the speed reduction resulting from the EEDI, 2.74 knots, is large. Because fuel consumption increases steeply with speed (roughly with its cube), the difference is substantial in terms of fuel consumption. Speeds based on the social costs would require roughly 30 % more vessels to deliver the same transport capacity. The societal perspective thus differs strongly from the private-cost perspective of the ship-owner. The EEDI approach helps, but as long as the external costs are not actually included in the cost equations, the right decisions in terms of design and operational speed, or the investments required to avoid emissions, will not be taken. The external costs are mainly based on the acidifying emissions, not on GHG-related emissions. The external costs chosen for the CO2 emissions are low, 13 % of the fuel costs, and consequently have a limited effect on the economic speed based on the social costs. However, it is extremely difficult to move away from hydrocarbons as a marine fuel because of the required energy density, so a substantial increase in the marginal external cost of CO2 emissions can be expected. Considering all these aspects, a straightforward comparison between GHG-related emissions (stock pollutants) and acidifying emissions (fund pollutants) based only on current marginal external costs is not credible. When on-board measures are taken to abate acidifying and health-threatening emissions, the speed reductions required to reduce the social costs can be limited; further research is required on this aspect. At this moment, however, there are no such requirements outside the SECA areas. A new calculation method that values the threat of the different emissions in the future, based on forecasts, could help to judge the situation.
This is especially true because the NPV method applied took into account neither the expected rise in the marginal costs nor the expected impact of these emissions after the ship is scrapped. This study seems to give good reasons to charge the external costs to the ship-owners. That could help to solve the problem of the acidifying emissions. However, it does little to decrease the CO2 emissions and, as a result, the CO2 content of the atmosphere, a situation that is hard to reverse and difficult to abate. One can reduce the emissions per ton deadweight, but the tons are still increasing. The EEDI reference lines as proposed for 2025 are most likely achievable. On the other hand, the suggested improvement of 30 % is misleading: the major gain will come from forcing designers to apply the state of the art in their designs, possibly in combination with speed reductions. Speed reduction can be the wrong choice when the competitiveness with other, more polluting transport modalities suffers. The EEDI system seems to punish "off-standard" designs in terms of main-dimension ratios and/or high design speeds. That can be counterproductive from the environmental point of view in specific logistical situations, especially when competing with other modalities, but also when one off-standard ship can replace two standard ships. This could be solved by allowing alternatives such as the method of the 'Environmental Impact Statement' in those cases where the overall environmental performance would improve. Dealing separately in the regulations with the different kinds of emissions reduces the potential growth of LNG as a fuel. This fuel is favourable with respect to sulphur, NOx, PM and CO2 emissions, yet its use is financially credited mainly in SECA areas. In other areas it will only enable higher speeds because of the lower attained EEDI, and these are in fact areas where higher speeds matter less because the schedule problems are smaller.
The focus of the EEDI regulations on the design speed of the vessel underestimates the possible reductions in fuel consumption at lower speeds. For example, the current practice of running a controllable-pitch propeller at constant rpm because of the PTO substantially lowers the propeller efficiency at lower speeds. Aalbers, A. (2000). Evaluation of Ship Design Alternatives. Proceedings of the 34th WEGEMT School, June. Delft, the Netherlands. Bazari, Z., & Longva, T. (2011). Assessment of IMO mandated energy efficiency measures for international shipping. IMO, London. Frouws, K. (2012). Benchmarking existing and new designs. Naples, NAV 2012. Germanischer Lloyd (2013). Guidelines for Determination of the Energy Efficiency Design Index. Germanischer Lloyd, Hamburg. Holtrop, J., & Mennen, G. (1984). An Approximate Power Prediction Method. International Shipbuilding Progress, Vol. 31. RINA (1990 to 2011). The Significant Ships 1990 to 2011. The Royal Institution of Naval Architects. Haack, T., Krüger, S., & Vorhölter, H. (2009). Optimisation of a fast monohull with CFD-methods. 10th International Conference on Fast Sea Transportation FAST 2009, Athens, Greece, National Technical University of Athens. My acknowledgement goes to the reviewers, whom I would like to thank for their valuable contribution. The author is a naval architect and is currently active as Assistant Professor in Ship Design and Shipping Management. After his graduation he has held positions in the port/shipping business in the gas and chemical trade, as manager of a 300 km liquid-gas pipeline system, as assistant superintendent in polyethylene granulate production, and as manager of a design department active in the design of municipal wastewater plants and pumping stations.
The author declares that he collected all the data and built all the calculation models behind the results presented in this paper, and that he wrote the paper without the help of co-authors, apart from a language check and the work of the reviewers. I declare that there are, as of February 2, 2016, neither financial nor non-financial competing interests as defined in the description of competing interests on the site of the Journal of Shipping and Trade in relation to this paper. Department of Maritime Technology and Transportation, Delft University of Technology, Delft, the Netherlands Koos Frouws Correspondence to Koos Frouws. Frouws, K. Is reducing speed the right mitigating action to limit harmful emissions of seagoing RoRo cargo carriers?. J. shipp. trd. 1, 9 (2016). https://doi.org/10.1186/s41072-016-0014-2 EEDI External costs Maritime economics Environmental challenges and solutions in shipping
Force is an agent capable of producing acceleration and/or deformation of a body. The Different Types of Forces The free body diagram, FBD, is a diagram representing the object of interest and the forces acting on it. To describe the mechanical behavior of an object it is necessary to know and understand the forces acting on it. The figure illustrates the FBD of a brick on the ground being pushed forwards. Typical forces that appear in mechanical problems can be understood as follows: Net force \((\vec{F}_{net})\) When several forces act simultaneously on a particle, they can be replaced by a single force that alone has the same effect as all of them together. In mathematical form: \begin{align} \vec{F}_{net} &= \sum_{i=1}^n \vec{F}_i \\ &= \vec{F}_1+ \vec{F}_2+\dots+\vec{F}_j+\dots+\vec{F}_n. \end{align} Weight Force or Gravitational Force \((\vec{W})\) It is the force with which the Earth attracts bodies towards its center. Near the surface of the Earth, the magnitude of this force is: $$W = mg,$$ where \(m\) is the mass of the body and \(g\) is the acceleration due to gravity. Elastic Force \((\vec{F}_e)\) It is the force that arises due to elastic deformation of bodies. (See section Elastic Forces ) Normal force \((\vec{N})\) It is a force that acts between two surfaces in contact. Its direction is always orthogonal to the contact surface. (See section Contact Force ) Friction Force \((\vec{F}_{fric})\) It is a force that acts between two surfaces in contact. Its direction is always parallel to the surfaces, opposing their displacement against each other. Tractive Force \((\vec{T})\) It is the force exerted by ropes or rods. External Force \((\vec{F}_{ext})\) The forces of external agents that are not part of the system of interest. In general, forces that do not fit the above definitions are called external forces, and they have different sources: motors, people, animals, etc.
Internal Forces \((\vec{F}_{int})\) They are forces between internal components that are part of the system of interest. Since they form action-reaction pairs, they do not contribute to the net force: they cancel each other out. The \(SI\) unit of force is the newton, \([F] = \mathrm{N}\).
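As a small numeric illustration of the vector sum defined above, take the brick from the FBD example: a forward push, kinetic friction, weight and the normal force (all values invented for illustration):

```python
g = 9.81  # m/s^2, acceleration due to gravity near the Earth's surface

def weight(m):
    """W = m g, magnitude of the gravitational force in newtons."""
    return m * g

def net_force(forces):
    """Component-wise sum of 2-D force vectors (Fx, Fy), in newtons."""
    return (sum(f[0] for f in forces), sum(f[1] for f in forces))

m = 2.0  # kg, the brick
forces = [
    (10.0, 0.0),        # external push, forwards
    (-4.0, 0.0),        # friction, opposing the motion
    (0.0, -weight(m)),  # weight, downwards
    (0.0, weight(m)),   # normal force, upwards (flat ground)
]
print(net_force(forces))  # → (6.0, 0.0): the brick accelerates forwards
```

Note how weight and the normal force cancel in the vertical direction, so the net force reduces to the horizontal push minus friction.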
The nod-like receptor, Nlrp12, plays an anti-inflammatory role in experimental autoimmune encephalomyelitis Marjan Gharagozloo1, Tara M. Mahvelati1, Emilie Imbeault1, Pavel Gris2, Echarki Zerif1, Diwakar Bobbala1, Subburaj Ilangumaran1, Abdelaziz Amrani1 and Denis Gris1Email author © Gharagozloo et al. 2015 Accepted: 19 October 2015 Multiple sclerosis (MS) is an organ-specific autoimmune disease resulting in demyelinating plaques throughout the central nervous system. In MS, the exact role of microglia remains unknown. On one hand, they can present antigens, skew T cell responses, and upregulate the expression of pro-inflammatory molecules. On the other hand, microglia may express anti-inflammatory molecules and inhibit inflammation. Microglia express a wide variety of immune receptors such as nod-like receptors (NLRs). NLRs are intracellular receptors capable of regulating both innate and adaptive immune responses. Among NLRs, Nlrp12 is largely expressed in cells of myeloid origin. It plays a role in immune inflammatory responses by negatively regulating the nuclear factor-kappa B (NF-κB) pathway. Thus, we hypothesize that Nlrp12 suppresses inflammation and ameliorates the course of MS. We used experimental autoimmune encephalomyelitis (EAE), a well-characterized mouse model of MS. EAE was induced in wild-type (WT) and Nlrp12 −/− mice with myelin oligodendrocyte glycoprotein (MOG):complete Freund's adjuvant (CFA). The spinal cords of healthy and immunized mice were extracted for immunofluorescence and pro-inflammatory gene analysis. Primary murine cortical microglia cell cultures of WT and Nlrp12 −/− mice were prepared from the cortices of 1-day-old pups. The cells were stimulated with lipopolysaccharide (LPS) and analyzed for the expression of pro-inflammatory genes as well as pro-inflammatory molecule secretion.
Over the course of 9 weeks, the Nlrp12 −/− mice demonstrated increased disease severity: they developed the disease earlier and reached significantly higher clinical scores than the WT mice. The spinal cords of immunized WT mice, relative to healthy WT mice, revealed a significant increase in Nlrp12 messenger ribonucleic acid (mRNA) expression at 1, 3, and 5 weeks post injection. A significant increase in the expression of the pro-inflammatory genes Ccr5, Cox2, and IL-1β was found in the spinal cords of the Nlrp12 −/− mice relative to the WT mice (P < 0.05). A significant increase in the level of gliosis was observed in the spinal cords of the Nlrp12 −/− mice compared to the WT mice after 9 weeks of disease (P < 0.05). Primary Nlrp12 −/− microglia cells demonstrated a significant increase in inducible nitric oxide synthase (iNOS) expression (P < 0.05) and secreted significantly (P < 0.05) more tumor necrosis factor alpha (TNFα), interleukin-6 (IL-6), and nitric oxide (NO). Nlrp12 plays a protective role by suppressing inflammation during the development of EAE. The absence of Nlrp12 results in an increased inflammatory response. Nlrp12 Experimental autoimmune encephalomyelitis Microglia Neuroinflammation Multiple sclerosis (MS) is among the most common neurodegenerative diseases, affecting an estimated 2.3 million individuals worldwide [1]. This organ-specific autoimmune disease is characterized by four different types of demyelinating plaques: types I and II are T cell mediated or T cell and antibody mediated, while types III and IV are mediated by oligodendrocyte death [2]. In all four cases, plaques are associated with activated macrophages, microglia, and astrocytes. Regardless of the type of plaque formation, inflammation plays a central role in MS pathophysiology [1, 3]. Microglia, the resident immune cells of the central nervous system (CNS), play a major role in maintaining CNS homeostasis.
They have been shown to be associated with developing plaques and are thought to contribute to the development of MS [2, 4], as well as to that of other chronic inflammatory neurodegenerative diseases such as Alzheimer's disease [5]. During MS, activated microglia can play the role of antigen-presenting cells (APCs) and, therefore, skew T cell responses towards a T helper cell 1 (Th1) pro-inflammatory phenotype [1, 2, 6]. In addition, once activated, microglia upregulate the expression of pro-inflammatory molecules including but not restricted to tumor necrosis factor alpha (TNFα), interleukin (IL)-1β, IL-6, macrophage inhibitory protein 1 alpha (MIP1α), and inducible nitric oxide synthase (iNOS), all of which have been shown to play a role in demyelination and neuronal damage [7]. There is a wide variety of immune receptors expressed by microglia that regulate their function. Pathogen-recognition receptors (PRRs) such as nod-like receptors (NLRs) are innate immune receptors and sensors of pathogen-associated molecular patterns (PAMPs) [8]. NLRs are a group of proteins that share a NACHT and leucine-rich repeat (LRR) domain but differ in their N-terminal effector domain. Upon recognition of their respective ligands, NLRs become activated, triggering multiple pro-inflammatory molecular pathways, such as the nuclear factor-kappa B (NF-κB) pathway. In addition, they are able to regulate both innate and adaptive immune responses and play a role in pathological processes [8]. The recently discovered Nlrp12 is a pyrin-containing intracellular NLR protein. It is largely expressed in cells of myeloid origin such as monocytes and dendritic cells (DCs). The expression of Nlrp12 has been shown to play an important role in immune inflammatory responses by negatively regulating the NF-κB pathway, and to play modulatory roles, such as in dendritic cell migration [9, 10]. The NF-κB pathway is one of the major pathways involved in the inflammatory response.
Typically, the activation of NF-κB following insults results in the transcription of pro-inflammatory cytokines such as TNFα, IL-1β, and IL-6; chemokines such as CCL5, CCL22, and MIP1α; and proteins such as iNOS and cyclooxygenase 2 (COX2) [11, 12]. This study aims to investigate the role of NLRs in neuroinflammation, particularly to uncover the role of Nlrp12 during experimental autoimmune encephalomyelitis (EAE) development. Our results show that Nlrp12 acts to downregulate inflammation during the development of EAE. This study may have significant implications for the development of potential novel therapies to treat MS and other neuro-inflammatory degenerative diseases. Nlrp12 knock-out (Nlrp12 −/− ) mice were kindly provided by Dr. Jenny P. Y. Ting (Chapel Hill, NC). All of the protocols and procedures were approved by the University of Sherbrooke Animal Facility and Use Committee. EAE was induced in 8–10-week-old C57BL/6 female mice using a previously established protocol by Miller et al. [13]. Briefly, a 1:1 emulsion of myelin oligodendrocyte glycoprotein (MOG35−55) (Genemed Synthesis Inc., San Antonio, TX) and complete Freund's adjuvant (CFA) (Sigma-Aldrich, St. Louis, MO) supplemented with 100 μg Mycobacterium tuberculosis H37 RA (Difco Laboratories, Detroit, MI) was prepared using a glass tuberculin syringe. The MOG:CFA emulsion (100 μL) was injected subcutaneously on each side of the midline on the lower back of each mouse, for a total of 200 μg MOG35–55 and 500 μg Mycobacterium. Pertussis toxin (200 ng) (List Biological Laboratories Inc., Campbell, CA) was injected intraperitoneally on the day of and 48 h following immunization. The mice were monitored every day for the development of disease.
Clinical scores were given by two independent observers, using the following scale: 0, no sign of disease; 1, limp tail or weakness in limbs; 2, limp tail and weakness in limbs; 3, partial limb paralysis; 4, complete limb paralysis. The immunized mice were anesthetized by intraperitoneal injection of Avertin® (2,2,2-tribromoethanol, approximately 240 mg/kg) (Sigma-Aldrich, St. Louis, MO) diluted in 0.9 % saline solution. The mice were then perfused with ice-cold phosphate-buffered saline (PBS) (Wisent, St. Bruno, QC), and the spinal cords were removed and either stored at −80 °C immediately for RNA extraction (thoracic region) or placed in 4 % paraformaldehyde (Sigma-Aldrich, St. Louis, MO) for immunofluorescence analysis (lumbar region). The spinal cord tissues were embedded in paraffin and cut into 5-μm sections. T cell proliferation assay T cell proliferation was performed using the 3H-thymidine incorporation assay. A single cell suspension was prepared from the draining lymph nodes (more precisely, from the inguinal and axillary lymph nodes) and spleen. CD4+ T cells were then purified using the EasySep Mouse CD4+ T Cell Isolation Kit (Stem cell, Vancouver, BC), seeded in a round-bottom 96-well culture plate (1 × 105 cells/well) and activated with plate-bound anti-CD3 (1 μg/mL) and anti-CD28 (2 μg/mL) antibodies for 3 days. During the last 18 h of culture, 1 μCi of methyl-[3H]-thymidine (NEN Life Sciences, Boston, MA) was added per well. The cells were harvested onto glass fiber filter mats, and the incorporated radioactivity was measured using a Top Count® microplate scintillation counter (PerkinElmer, Wellesley, MA). Intracellular IL-4 staining for flow cytometry The purified CD4+ T cells from the wild-type (WT) and Nlrp12 −/− mice were activated with plate-bound anti-CD3 (1 μg/mL) and anti-CD28 (2 μg/mL) antibodies for 3 days. Then, the cells were stimulated with phorbol 12-myristate 13-acetate (PMA; 50 ng/mL, Sigma Chemical Co., St.
Louis, MO) and ionomycin (1 μg/mL, Calbiochem Corp., La Jolla, CA) for 4 h at 37 °C and 5 % CO2 in the presence of Brefeldin A (1 μg/mL, eBioscience, San Diego, CA). After staining the cells with anti-CD4-FITC antibody (eBioscience), the cells were fixed and permeabilized using intracellular fixation and permeabilization buffer (eBioscience) and stained with anti-IL-4-PE antibody, as per the manufacturer's instructions. Sample analysis was performed with a FACSCalibur, and data analysis was done using FlowJo software (FlowJo, LLC, Ashland, OR). RNA extraction, cDNA synthesis, reverse transcription and real-time quantitative PCR RNA from the spinal cords and lymph nodes was extracted using TRIzol reagent (Life Technologies Inc., Burlington, ON). The tissues were homogenized with sterile beads (Qiagen, Limburg, Netherlands) at a speed of 20 Hz for 2 min. Chloroform (200 μL) (Fisher Scientific, Ottawa, ON) was added per 1 mL of TRIzol, and the samples were incubated at room temperature for 15 min, followed by centrifugation at 13,000 rpm for 15 min at 4 °C. Supernatants were collected in new tubes, and 500 μL isopropanol (Fisher Scientific, Ottawa, ON) was added to each tube and incubated for 10 min at −80 °C before spinning down at 13,000 rpm for 10 min at 4 °C. Pellets were washed with 75 % ethanol and re-suspended in 20 μL RNase-free sterile water (Wisent, St-Bruno, QC). cDNA was synthesized using Oligo(dT) primer (IDT, Coralville, IA), PCR Nucleotide Mix (GE Healthcare, Baie d'Urfe, QC), M-MuLV Reverse Transcriptase, M-MuLV Reverse Transcriptase Buffer (New England BioLabs, Whitby, ON), and RNasin Ribonuclease Inhibitor (Promega, Madison, WI). Reverse transcription PCR (RT-PCR) and quantitative reverse transcription PCR (RT-qPCR) were used to verify the expression of Nlrp12, Mip3α, Cox2, IL-1β, and Ccr5 using Brilliant III Ultra-Fast SYBR Green QPCR Master Mix (Agilent Technologies, Santa Clara, CA).
Primers (IDT, Coralville, IA) sequences were as follows: Nlrp12 F: 5′-CCT CTT TGA GCC AGA CGA AG-3′, Nlrp12 R: 5′-GCC CAG TCC AAC ATC ACT TT-3′, Mip3α F: 5′-CTC AGC CTA AGA GTC AAG AAG ATG-3′, Mip3α R: 5′-AAG TCC ACT GGG ACA CAA ATC-3′, Cox2 F: 5′-CCA GCA CTT CAC CCA TCA GTT-3′, Cox2 R: 5′-ACC CAG GTC CTC GCT TAT GA-3′, IL-1β F: 5′-CAT CCA GCT TCA AAT CTC GCA G-3′, IL-1β R: 5′CAC ACA CCA GCA GGT TAT CAT C-3′, Ccr5 F: 5′-CGA AAA CAC ATG GTC AAA CG-3′, Ccr5 R: 5′-GTT CTC CTG TGG ATC GGG TA-3′, 18S F: 5′-CGG CTA CCA CAT CCA AGG AA-3′, and 18S R: 5′-GCT GGA ATT ACC GCG GCT-3′. The samples were normalized to the internal control 18S rRNA, and relative expression was calculated using the ΔΔCT method [14]. Slides were de-paraffinized in xylene (EMD Millipore, Etobicoke, ON) and hydrated in 100, 95, and 70 % ethanol gradient. Antigen unmasking was performed at sub-boiling temperature for 10 min in 10 mM sodium citrate buffer pH 6.0 (Sigma-Aldrich, St. Louis, MO). Immunofluorescence was performed in Sequenza Slide Rack and Coverplate System (Ted Pella, Inc., Redding, CA). The slides were washed with 0.1 % Triton X-100 in PBS solution, blocked in 5 % fetal bovine serum (FBS) plus 0.1 % Triton X-100 in PBS for 1 h and incubated with primary antibody (1:1000) overnight at 4 °C. Secondary antibody (1:2000) incubation was done at room temperature for 2 h. The slides were mounted with DAPI Fluoromount-G (SouthernBiotech, Birmingham, AL), and photomicrograph pictures were taken with Retiga SRV Mono Cooled numerical camera attached to Zeiss Axioskop 2 Microscope. The pictures were stitched with Adobe Photoshop CS6, and stain density was quantified with Image-Pro Plus 6.0 (Media Cybernetics, Inc., Rockville, MD). Rabbit anti-glial fibrillary acidic protein (GFAP) antibody was purchased from Cedarlane (Burlington, ON). Rabbit anti-ionized calcium-binding adaptor molecule 1 (Iba1) antibody was purchased from Wako (Osaka, Japan). 
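The relative-expression values reported in the Results were obtained with the ΔΔCT method mentioned above: each CT is first normalized to the 18S reference gene, then to the control group, and the fold change is 2 to the power of −ΔΔCT. A minimal sketch of that calculation, with invented CT values:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCT method.

    ΔCT  = CT(target) - CT(reference gene), per sample
    ΔΔCT = ΔCT(sample) - ΔCT(control group)
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** (-(d_ct_sample - d_ct_ctrl))

# a ~2.8-cycle drop in Nlrp12 CT relative to 18S vs. healthy controls
print(round(fold_change(24.2, 10.0, 27.0, 10.0), 1))  # ≈ 7.0, i.e. sevenfold
```

A lower CT means the target amplifies earlier, hence more starting transcript; each cycle of difference corresponds to a factor of two.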
Alexa Fluor 488 AffiniPure Goat Anti-Rabbit IgG (H + L) was purchased from Jackson ImmunoResearch Laboratories Inc. (West Grove, PA). The percentage of microgliosis and astrogliosis in the spinal cord and gray matter was calculated as follows: $$ \mathrm{Percentage}\ \mathrm{of}\ \mathrm{gliosis}\ \left(\%\right) = \frac{\mathrm{Density}\ \mathrm{stain}\ }{\mathrm{Total}\ \mathrm{area}} \times \mathsf{100} $$ The percentage of microgliosis and astrogliosis in the white matter was calculated as follows: $$ \mathrm{Percentage}\ \mathrm{of}\ \mathrm{gliosis}\ \mathrm{in}\ \mathrm{the}\ \mathrm{white}\ \mathrm{matter}\ \left(\%\right) = \left(\frac{\mathrm{Density}\ \mathrm{stain}\ \mathrm{of}\ \mathrm{spinal}\ \mathrm{cord}-\mathrm{Density}\ \mathrm{stain}\ \mathrm{of}\ \mathrm{gray}\ \mathrm{matter}}{\mathrm{Total}\ \mathrm{area}\ \mathrm{of}\ \mathrm{spinal}\ \mathrm{cord}-\mathrm{total}\ \mathrm{area}\ \mathrm{of}\ \mathrm{gray}\ \mathrm{matter}}\right) \times \mathsf{100} $$ Primary cell culture Cortices from 1-day-old pups were extracted and placed onto a 100-mm petri dish using aseptic techniques. Cortices were sliced with a commercial razor blade, further broken up with a vigorous up-and-down motion in 10 mL of medium, and filtered through a 70-μm filter. The cells were then plated onto a 100-mm petri dish and placed in an incubator at 37 °C with 5 % CO2. Cell culture medium DMEM/F12 (Wisent, St. Bruno, QC) was supplemented with 10 % FBS (Invitrogen, Burlington, ON), 1 % penicillin-streptomycin solution (Wisent, St-Bruno, QC), 1 % L-glutamine solution (Wisent, St. Bruno, QC), 0.9 % sodium pyruvate solution (Wisent, St. Bruno, QC), 0.9 % MEM amino acid solution (Wisent, St. Bruno, QC), and 0.9 % amphotericin B solution (Wisent, St. Bruno, QC). The medium of the mixed glial culture was changed every 2 to 3 days.
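The two percentage formulas above translate directly into code; the "density stain" and area values below stand for thresholded pixel counts and are invented for illustration:

```python
def gliosis_pct(stain, total_area):
    """Percentage of gliosis over a region: density stain / total area x 100."""
    return 100.0 * stain / total_area

def gliosis_pct_white_matter(stain_cord, stain_gray, area_cord, area_gray):
    """White-matter gliosis: remove the gray-matter contribution first."""
    return 100.0 * (stain_cord - stain_gray) / (area_cord - area_gray)

# hypothetical pixel counts from a thresholded Iba1 image
print(gliosis_pct(12000, 80000))                            # 15.0 %
print(gliosis_pct_white_matter(12000, 3000, 80000, 30000))  # 18.0 %
```

The white-matter value is obtained by subtraction because the gray matter cannot easily be masked out directly in the stained sections.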
After 3 weeks, primary microglia cells were separated from astrocytes using EasySep CD11b positive selection kit following the manufacturer's instructions (Stem cell, Vancouver, BC). Immunoblotting Proteins were separated in 10 % polyacrylamide gels and transferred onto PVDF (Millipore, Etobicoke, ON) membranes. The membranes were blocked with PBS containing 10 % nonfat milk and 0.05 % Tween-20 (Sigma-Aldrich, St. Louis, MO). The membranes were washed in 1× Tris-buffered saline (TBS) plus 1 % Tween-20 for 15 min and incubated with primary antibody (1:1000) overnight at 4 °C and with secondary antibody (1:2000) for 2 h at room temperature. The membranes were revealed with GE HealthCare Life Sciences Amersham ECL Plus (Baie d'Urfe, QC) and viewed with Molecular Imager VersaDoc from BioRad, and protein bands were quantified using NIH ImageJ software. The antibodies used were as follows: β-actin (rabbit), iNOS (rabbit), and anti-rabbit IgG HRP-linked antibodies were purchased from Cell Signaling Technology (Beverly, MA). Cytokine measurement TNFα and IL-6 cytokines in the supernatant of microglia culture were measured using ELISA kits purchased from BioLegend (San Diego, CA). Cerebellum and lymph node samples were homogenized in 0.5 mL of ice-cold lysis buffer (Cell Signaling Technology, Beverly, MA) supplemented with protease inhibitors (Roche Diagnosis, Mannheim, Germany) by rapid agitation for 2 min in the presence of 3-mm stainless beads. The tissue lysate was centrifuged for 10 min at 13,000×g in a cold microfuge, and the supernatant was transferred to a new tube. The concentration of proteins in the lysate was determined by Bradford protein assay. The tissue levels of IL-4 were determined using a high sensitivity IL-4 ELISA Kit (eBioscience, San Diego, CA), and the concentration of IL-4 in serum samples was quantified using Mouse IL-4 DuoSet (R&D Systems), according to the manufacturer's instruction. 
All statistical analyses were conducted using GraphPad Prism 6 software. The results were expressed as mean ± SD. Statistical significance was determined using Kruskal-Wallis one-way ANOVA followed by Bonferroni (EAE clinical score), one-way ANOVA followed by Tukey-Kramer (Nlrp12 mRNA expression, iNOS expression in primary microglia, concentration of pro-inflammatory cytokines), two-way ANOVA followed by Tukey's (percentage of gliosis), or one-way ANOVA followed by Dunnett (pro-inflammatory mRNA expression) multiple comparison tests. IL-4 results were compared between the WT and Nlrp12 −/− mice using the Mann-Whitney U test. Statistical significance was accepted at P < 0.05. Nlrp12 mRNA expression reaches a peak at the third week post injection Following immunization with ovalbumin or MOG35–55 in CFA, the spinal cords were dissected from healthy and EAE mice and analyzed for the expression of Nlrp12 messenger ribonucleic acid (mRNA) (Fig. 1). Nlrp12 mRNA expression in the immunized mice was significantly increased relative to the healthy wild-type (WT) mice at week 1 (threefold increase), week 3 (sevenfold increase), and week 5 (fourfold increase). Additionally, the level of Nlrp12 mRNA expression was increased from the first week of EAE and reached its highest level in the third week. At 5 weeks post injection, although the expression of Nlrp12 was significantly higher in the diseased mice compared to the healthy mice, it was considerably lower than in the third week and resembled much more that of the first week of disease. As a control, ovalbumin was injected, and the spinal cords of the mice treated with ovalbumin were removed after the third week in order to keep consistency with the MOG-injected mice. Nlrp12 mRNA expression reaches a peak at third week post injection. Results indicate fold change in Nlrp12 mRNA expression of the diseased mice over the healthy mice. The mice injected with ovalbumin as control were sacrificed 3 weeks post injection.
Results are expressed as mean ± SD. Statistical significance was accepted at *P < 0.05. Statistical analysis was done using one-way ANOVA followed by Tukey-Kramer multiple comparison test. n = 5 Nlrp12 −/− mice exhibit exacerbated form of the disease compared to WT mice In order to investigate the role of Nlrp12 in MS, EAE was induced in 8–10-week-old C57BL/6 female mice. An emulsified mixture of MOG35–55 in CFA was subcutaneously injected in mice. The Nlrp12 −/− mice demonstrated clinical symptoms after approximately 5 days post injection whereas the WT mice developed the disease roughly after 9 days. In addition, while the WT mice were showing the first signs of disease, the Nlrp12 −/− mice already demonstrated indications of severe disease, reaching scores of 2, indicative of tail and back limb weaknesses (Fig. 2). Indeed, the Nlrp12 −/− mice were observed to reach higher clinical scores throughout the 9-week period. More precisely, they reached scores of 3–3.5, which indicates weakness in the tail, back, and front limbs compared to the WT mice that reached scores of 2–2.5. In both genotypes, the severity of the disease outcome was observed to peak around the third week post injection and remained relatively constant throughout the 9-week period. Nlrp12 −/− mice exhibit an exacerbated form of disease compared to WT mice. Animals were scored daily by two independent observers and scored based on the following scale: 0, no sign of disease; 1, limp tail or weakness in limbs; 2, limp tail and weakness in limbs; 3, partial limb paralysis; and 4, complete limb paralysis. Statistical significance was accepted at *P < 0.05. Statistical analysis was done by Kruskal-Wallis one-way ANOVA test followed by Bonferroni multiple comparison test. n = 7 Nlrp12 −/− mice demonstrate higher percentage of reactive gliosis after EAE In response to injury, glial cells become reactive, producing multiple pro-inflammatory proteins, as well as increasing in numbers. 
GFAP is an intermediate filament protein expressed and upregulated by astrocytes in response to CNS insults [15]. Moreover, in addition to the secretion of multiple pro-inflammatory proteins, the reactive microglial response can be measured by the extent of Iba1 upregulation [5]. The spinal cords of the healthy and immunized mice were extracted and stained for GFAP (Fig. 3) and Iba1 (Fig. 4). We observed no significant difference in the percent level of astrogliosis and microgliosis between the healthy WT and healthy Nlrp12 −/− mice. Additionally, we observed no difference in the percentage of either microgliosis or astrogliosis between the WT and Nlrp12 −/− mice after 3 weeks of EAE (Figs. 5a, 6a). However, after 9 weeks of disease, the Nlrp12 −/− mice demonstrated a significant increase in the level of astrogliosis (30 % compared to 15 % in WTs) in the white matter (WM) and an observable increase within the gray matter (GM) area of the spinal cord compared to the WT mice (Fig. 5b). The 10 % difference between the Nlrp12 −/− mice and the WT mice occurred within the WM. Similar results were obtained for the level of microgliosis, where the Nlrp12 −/− mice demonstrated an increased percentage of Iba1 staining compared to the WT mice (Fig. 6b). Indeed, the 20 % increase in microgliosis within the spinal cord of the Nlrp12 −/− mice compared to the WT mice was primarily within the WM.

Photomicrograph pictures of the spinal cords stained with GFAP. GFAP staining of the spinal cord, evaluating astrogliosis percentage following EAE induction. a WT mice, healthy. b Nlrp12 −/− mice, healthy. c WT mice, 3 weeks EAE. d Nlrp12 −/− mice, 3 weeks EAE. e WT mice, 9 weeks EAE. f Nlrp12 −/− mice, 9 weeks EAE. Scale bar is 500 μm

Photomicrograph pictures of the spinal cords stained with Iba1. Iba1 staining of the spinal cord, evaluating microgliosis percentage following EAE induction. a WT mice, healthy. b Nlrp12 −/− mice, healthy. c WT mice, 3 weeks EAE.
d Nlrp12 −/− mice, 3 weeks EAE. e WT mice, 9 weeks EAE. f Nlrp12 −/− mice, 9 weeks EAE. Scale bar is 500 μm

Percent level of astrogliosis following EAE. Percentage of astrogliosis is calculated as the intensity of GFAP staining over the total area of the spinal cord. a After 3 weeks EAE. b After 9 weeks EAE. Results are expressed as mean ± SEM. Statistical significance was accepted at *P < 0.05. Statistical analysis was done by two-way ANOVA followed by Tukey's multiple comparison test. Each spinal cord was quantified in duplicate and/or triplicate. n = 3–4

Percent level of microgliosis following EAE. Percentage of microgliosis is calculated as the intensity of Iba1 staining over the total area of the spinal cord. a After 3 weeks EAE. b After 9 weeks EAE. Results are expressed as mean ± SEM. Statistical significance was accepted at *P < 0.05. Statistical analysis was done by two-way ANOVA followed by Tukey's multiple comparison test. Each spinal cord was quantified in duplicate and/or triplicate. n = 3–5

Nlrp12 negatively regulates T cell proliferation

We observed higher proliferative responses in purified CD4+ T cells from the Nlrp12 −/− compared to the WT mice (Fig. 7a–c). Interestingly, while direct activation by anti-CD3/CD28 antibodies resulted in significantly higher proliferative responses in T cells from the Nlrp12 −/− compared to the WT mice (Fig. 7c), more physiological activation in the presence of splenocytes, although it tended to be higher in T cells from the Nlrp12 −/− mice, did not result in a statistically different proliferation compared to the WT mice (Fig. 7a, b).

The proliferation of activated T cells from the WT and Nlrp12 −/− mice in vitro. a, b CD4+ T cells were stained with CFSE and activated with plate-bound anti-CD3/CD28 antibodies for 3 days. The intensity of CFSE dye in the cells was analyzed by flow cytometry. No significant difference was observed.
c Purified CD4+ T cells from the WT and Nlrp12 −/− mice were stimulated with anti-CD3/CD28 for 72 h, and the incorporation of 3H-thymidine was measured during the final 18 h of cell culture. Statistical significance was accepted at *P < 0.05. Statistical analysis was done by Mann-Whitney U test, n = 3–6 per group

Nlrp12 deficiency did not affect IL-4 production by activated T cells

Differences in T cell proliferation prompted us to verify the levels of IL-4 in the Nlrp12 −/− mice after EAE induction. We chose to look at IL-4 in light of the recent publication by Lukens et al. [16], which observed that Nlrp12 inhibited Th2 responses. We investigated whether Nlrp12 deficiency might affect IL-4 production by T cells in EAE mice. As shown in Fig. 8a, no significant difference was detected between the Nlrp12 −/− and WT mice in the percentage of CD4+ IL-4+ T cells after 3 days of activation with anti-CD3/CD28 antibodies in vitro. Consistent with this finding, we did not observe any significant differences in the levels of IL-4 in lysates from the lymph nodes of the Nlrp12 −/− or WT EAE mice, either by RT-PCR (Fig. 8c) or by ELISA (Fig. 8e). Similarly, there was no statistical difference in IL-4 levels in serum (Fig. 8b) or cerebellum (Fig. 8d) between the Nlrp12 −/− and WT EAE mice.

IL-4 production by activated T cells from the WT or Nlrp12 −/− mice in vitro and in vivo. a CD4+ T cells were purified from the lymph nodes and spleens and stimulated with anti-CD3/CD28 antibodies. Intracellular production of IL-4 by activated CD4+ T cells was determined using flow cytometry. b The levels of IL-4 in serum samples from WT and Nlrp12 −/− EAE mice, measured by ELISA. c The level of IL-4 mRNA in lymph nodes from WT and Nlrp12 −/− EAE mice, quantified by real-time PCR. d, e The levels of IL-4 in tissue samples from the WT and Nlrp12 −/− EAE mice.
Cerebellum and lymph node tissues were collected from the mice 3 weeks after immunization with MOG:CFA. The tissues were homogenized in lysis buffer, and IL-4 levels were measured in tissue lysates by ELISA. Statistical analysis was done by Mann-Whitney U test, n = 3–6 per group. No significant difference was observed

Nlrp12 deficiency augments expression of pro-inflammatory molecules in the CNS after EAE

Looking into the mechanisms of increased inflammation in the Nlrp12 −/− mice, we analyzed mRNA expression of pro-inflammatory proteins in the spinal cords of the mice 3 weeks post immunization (Fig. 9). Compared to the WT mice, the Nlrp12 −/− mice demonstrated significantly higher levels of Cox2 (threefold increase), IL-1β (fourfold increase), and Ccr5 (tenfold increase) mRNA expression. Although a relative increase in the mRNA expression of Mip3α was observed, that difference was not significant. Thus, these results demonstrate that in the absence of Nlrp12, the inflammatory response is much more pronounced.

Nlrp12 deficiency augments expression of pro-inflammatory molecules in the CNS after EAE. Results indicate fold change in mRNA expression of pro-inflammatory proteins in the spinal cords of the Nlrp12 −/− mice relative to the WT mice. A significant increase in the expression of Cox2, IL-1β, and Ccr5 mRNAs, and no significant change in the expression of Mip3α, was observed. Results are expressed as mean ± SD. Statistical significance was accepted at *P < 0.05. Statistical analysis was done using one-way ANOVA followed by Dunnett's comparison test relative to control. n = 5

Nlrp12 −/− primary microglia express increased levels of reactive species and pro-inflammatory cytokines

The inflammatory response is an important feature of innate immunity in the regulation of homeostasis. Inflammation is an innate response that occurs upon encountering harmful agents; however, a shift towards an anti-inflammatory environment then occurs in order to re-establish the balance.
Microglia play a critical role in this process. Cortices from 1-day-old murine pups were removed, and after 3 weeks in culture, primary microglia were separated from astrocytes. Stimulation with the bacterial endotoxin lipopolysaccharide (LPS) revealed a significant (twofold) increase in the expression of inducible nitric oxide synthase (iNOS), the enzyme responsible for the production of nitric oxide (NO), in Nlrp12 −/− microglia compared to WT microglia (Fig. 10a, b). The supernatants following LPS stimulation were further analyzed by Griess reagent assay, and we observed significantly more (a 2.5-fold increase) nitrates secreted into the media by the microglia of the Nlrp12 −/− mice compared to the WT mice. We additionally observed a dose-response effect (Fig. 10c).

Expression of iNOS in primary microglia cells. Microglia cells (1 × 10⁵) from the Nlrp12 −/− mice and WT mice were stimulated with 1 μg/mL LPS for 12 h. a Western blot analysis. b Densitometric analysis of iNOS. c Concentration of nitrates using Griess reagent assay. Results are expressed as mean ± SD. Statistical significance was accepted at *P < 0.05. Statistical analysis was done using one-way ANOVA followed by Tukey-Kramer multiple comparison test. n = 5

To further characterize the microglial response, purified microglia from both genotypes were incubated with 500 ng/mL LPS for 12 h, and supernatants were analyzed for the presence of the pro-inflammatory cytokines TNFα and IL-6. At the basal level, we observed no differences between the WT and Nlrp12 −/− microglia. However, after treatment with LPS, the microglia from the Nlrp12 −/− mice secreted more than twofold higher concentrations of TNFα (Fig. 11a) and IL-6 (Fig. 11b) compared to the WT microglia, once again demonstrating that in the absence of Nlrp12, the cellular environment is more inflammatory.

TNF-α and IL-6 concentrations following treatment with LPS in primary microglia cells.
Microglia cells (1 × 10⁵) from the Nlrp12 −/− and WT mice were stimulated with 500 ng/mL LPS for 12 h. a ELISA for TNF-α concentration. b ELISA for IL-6 concentration. Results are expressed as mean ± SD. Statistical significance was accepted at *P < 0.05. Statistical analysis was done using one-way ANOVA followed by Tukey-Kramer multiple comparison test. n = 5

The process of inflammation is a fundamental response aimed at protecting the body from foreign and detrimental agents. Neuroinflammation, however, can become harmful if it is unregulated and prolonged. A continuous and persistent response will eventually lead to a chronic state of inflammation, a prominent feature of many neurodegenerative diseases, including MS. NLRP12 is of interest to the study of MS notably due to its restricted expression in cells of hematopoietic origin such as monocytes, dendritic cells, granulocytic cells, and, most recently, T cells [16], and due to its role in attenuating the inflammatory response by interfering with both branches of the NF-κB pathway [9, 17]. To investigate the implication of Nlrp12 in MS, EAE was induced in the WT and Nlrp12 −/− mice. Our results demonstrated that in mice lacking the Nlrp12 gene, EAE developed earlier than in the WT mice, and the Nlrp12 −/− mice showed increased severity throughout the course of the disease. Interestingly, after EAE induction, Nlrp12 mRNA expression was significantly increased in the WT mice compared to the healthy WTs. These results suggest that Nlrp12 plays an important role in maintaining the level of inflammation and ensuring that a hyper-inflammatory state does not occur. In fact, the expression profile of Nlrp12 over the course of the disease is suggestive of this regulatory role. Indeed, previous studies have shown that in response to stimuli such as live M. tuberculosis, TNFα, and IFNγ, a reduction in Nlrp12 expression accompanies an increase in the inflammatory response [18, 19].
Moreover, over-expression of Nlrp12 has previously been shown to attenuate the inflammatory response by negatively regulating the NF-κB pathways [17]. A study conducted by Shami et al. demonstrated that the expression of Nlrp12 is increased in response to nitric oxide [20]. NO is a reactive molecule produced by iNOS at sites of inflammation in MS, and it is involved in lesion development [21]. As T cell responses play a crucial role in the development of EAE [12], we evaluated the proliferative response of purified CD4 T cells after CD3/CD28 activation and saw significantly elevated proliferation of T cells from the Nlrp12 −/− mice. We then evaluated T cell proliferation using a recall response 10 days after EAE induction by stimulating purified CD4 T cells with MOG peptide in the presence of splenocytes. We observed a tendency of T cells from the Nlrp12 −/− mice toward a higher proliferation rate; however, these differences never reached statistical significance. These results are similar to those published by Lukens et al. [16], where the authors observed that pure anti-CD3/CD28 activation resulted in significantly higher proliferation in T cells from Nlrp12 −/− mice, while in the presence of splenocytes, differences between T cell proliferation of the different genotypes were greatly reduced. In this work, the authors propose that the cell-autonomous effect of Nlrp12 in T cells shifts T cell differentiation towards a Th2, IL-4-producing phenotype [16]. We measured the concentration of IL-4 in the serum, lymph nodes, and brain samples from the WT and Nlrp12 −/− mice at 3 weeks after EAE and did not find any differences in the expression of IL-4. These results are consistent with our observation that there were no differences in the percentage of IL-4-producing cells, and with the notion that results arising from complex cellular interactions at the tissue level in vivo can differ from those of pure anti-CD3/CD28 activation in vitro.
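Two-group comparisons such as these (T cell proliferation, IL-4 levels) were assessed with the Mann-Whitney U test. As a minimal illustration of the statistic itself, here is a pure-Python sketch of the U value for one group; the counts-per-minute readings are invented for illustration and are not data from this study, and no p-value or tie correction is computed:

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: number of (a, b) pairs with a > b,
    counting each tied pair as one half."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 3H-thymidine incorporation readings (cpm), illustration only
ko = [5200, 6100, 7400]  # hypothetical Nlrp12 −/− wells
wt = [3100, 3600, 5200]  # hypothetical WT wells
print(mann_whitney_u(ko, wt))  # → 8.5
```

A useful sanity check on the statistic: the two U values of a pair of groups always sum to the number of cross-group pairs (here 3 × 3 = 9).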
To date, no exact mechanism has been described that explains Nlrp12 activity in different cell types. Nlrp12 has been shown to inhibit the classical and alternative pathways of NF-κB in different cell types and under different stimulations; for an extensive review, see Tuncer et al. [9]. In light of these controversies, the different knockout strategies used to remove Nlrp12 may have introduced an uncontrolled variable that resulted in different phenotypes [22, 23]. Future studies should address these differences. In our studies, we observed that the Nlrp12 −/− mice demonstrated a more severe course of EAE according to the classical evaluation of clinical scores, while in the work by Lukens and co-workers, the authors noted the appearance of atypical EAE. These results are intriguing, as the overall effect of Nlrp12 on EAE pathology was similar to our observations. Furthermore, EAE is a well-characterized and the most widely used mouse model to study MS [13]. It exhibits the main features of MS pathology, such as inflammation, destruction of myelin, and reactive gliosis. Moreover, many of the current therapies for MS, such as Tysabri, were developed following EAE studies [24]. However, it is important to note that the evaluation of clinical scores is subjective. In our studies, we did not measure the degree of atypical EAE, as there is no quantifiable scale to evaluate this pathology. Observing the video clips published by Lukens et al. (supplemental materials), we can tell that the Nlrp12 −/− mouse was severely compromised and had an impaired righting reflex, which suggests severe weakness/paralysis of the hind limbs as well as paralysis of the trunk muscles. To further elucidate how Nlrp12 plays a protective role in the disease, the spinal cords of both the WT and Nlrp12 −/− mice were analyzed for the expression of genes implicated in EAE as well as in MS.
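Gene expression fold changes of this kind are conventionally derived from real-time PCR data with the comparative CT (2^−ΔΔCt) method of Schmittgen and Livak (cited in the reference list). A minimal sketch of that computation, using hypothetical Ct values (all numbers are invented for illustration):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """2^-ΔΔCt: expression of a target gene in a sample relative to a control,
    each first normalized to a reference (housekeeping) gene."""
    dct_sample = ct_target_sample - ct_ref_sample    # ΔCt in the sample (e.g. Nlrp12 −/−)
    dct_control = ct_target_control - ct_ref_control  # ΔCt in the control (e.g. WT)
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier in the sample,
# relative to the same reference gene, giving a 2^3 = 8-fold increase
print(fold_change(24.0, 18.0, 27.0, 18.0))  # → 8.0
```

Each one-cycle shift in ΔΔCt corresponds to a twofold change in relative expression, which is why the method reports results as fold changes over the control group.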
Our results demonstrated a significant increase in the mRNA expression of Cox-2, IL-1β, and Ccr5 genes in the Nlrp12 −/− mice compared to the WT mice, suggesting a protective role played by Nlrp12 in EAE at the level of pro-inflammatory gene expression. An increase in the expression of pro-inflammatory molecules in the Nlrp12-deficient phenotype has been demonstrated by multiple studies [22, 25]. Next, we demonstrated that Nlrp12 inhibits inflammation during EAE at the level of microglia. Using purified primary microglia from the WT and Nlrp12 −/− mice, we showed that Nlrp12 deficiency augments pro-inflammatory microglial phenotypes. Consistent with our in vivo observations, stimulation of microglia with LPS resulted in a significant increase of iNOS expression and of NO, TNFα, and IL-6 secretion from the Nlrp12 −/− microglia compared to the WT microglia. These results are consistent with the suppressive role of Nlrp12 in cells of myeloid origin [26]. A report by Lukens et al. also found an increased inflammatory response in the CNS tissue of Nlrp12 −/− mice compared to WT controls, although microglial responses per se were not examined. Furthermore, the notion of inhibitory NLRs is not new. Similar to our results, stimulation of primary Nlrx1 −/− microglia revealed a significant increase in the pro-inflammatory response, thus showing a suppressive role for Nlrx1 in microglial activation [27]. The roles of microglia and astrocytes are well defined in the pathology of MS. Previous studies on Nlrp3 have demonstrated that the absence of this receptor results in a better disease outcome and reduced gliosis following EAE [28]. The spinal cords of the Nlrp12 −/− and WT mice were stained with Iba1 and GFAP in order to assess the extent of microgliosis and astrogliosis, respectively.
Surprisingly, no differences in the percentage of gliosis were observed between the two genotypes at the third week; however, after 9 weeks, the Nlrp12 −/− mice demonstrated significantly increased gliosis compared to the WT mice. Additionally, in both genotypes, the majority of gliosis occurred within the white matter area of the spinal cord. Although a quantitative difference was not observed at the third week, our in vitro studies suggest qualitative changes in microglial activation. Indeed, upon LPS stimulation, microglia from the Nlrp12 −/− mice released significantly more pro-inflammatory mediators. Furthermore, the remarkable increase in Ccr5 mRNA expression observed in the Nlrp12 −/− mice suggests that Nlrp12 may play a crucial role in the influx of inflammatory infiltrates. CCR5 is a chemokine receptor that is expressed primarily by monocytes, macrophages, effector T cells, immature dendritic cells, and NK cells [29]. Moreover, previous studies in both animal models and MS patients have demonstrated the upregulation of CCR5 in inflammatory lesions [30–32]. Also, chronic over-expression of IL-1β has been shown to result in the disruption of the blood-brain barrier (BBB) and in the infiltration of leukocytes such as macrophages, DCs, and neutrophils [33, 34]. Thus, the increase of Ccr5 and IL-1β mRNA in the spinal cords of the Nlrp12 −/− compared to the WT mice during EAE supports the notion of an increased influx of inflammatory cells in these mice. In fact, the entry of pro-inflammatory leukocytes into the CNS is an early phenomenon capable of initiating events that result in BBB disruption and neuroinflammation [35]. Interestingly, previous studies have demonstrated a reduction in inflammatory infiltrates within the CNS in EAE-induced Nlrp3 −/− mice, where Nlrp3 was shown to play an inflammatory role by inducing immune cell migration, whereas our results suggest that Nlrp12 plays a protective role by restraining the inflammatory influx [36, 37].
Thus, future studies should focus on evaluating in detail the presence of inflammatory infiltrates in order to clarify the driving force responsible for the differences observed between the WT and Nlrp12 −/− mice. NLRs and their functions have mainly been studied in the context of host-pathogen interactions. Their role in mediating the inflammatory response is well recognized, while their role in other diseases is an emerging field. Recent reports suggest that NLRs may play a detrimental as well as a beneficial role in the progression of EAE. For example, Nod1, Nod2, and Nlrp3 augment inflammation and T cell responses that lead to increased EAE severity. On the other hand, the expression of Nlrp12 and Nlrx1 inhibits the expression of pro-inflammatory genes, suppressing inflammation and reducing the severity of EAE [27]. In many neurodegenerative diseases, the regulation of neuro-inflammatory responses is a key target for therapeutic interventions. Numerous studies have focused on the role and contribution of T- and B-lymphocytic responses in MS, and much of the pathophysiology of MS has gravitated around the adaptive branch of the immune system. The implication of the adaptive immune response is undeniable in this disorder, given that the primary cause of damage in the nervous system of MS patients is CNS inflammation, in which CD4+ autoreactive T cells react to myelin epitopes, enter the CNS, and cause the destruction of myelin [3]. At this stage, we are not excluding a role for Nlrp12 in T cell responses during EAE. However, it is vital to understand the underlying cause of the inflammatory process in MS. It is therefore important to focus on the innate immune response, since, in essence, inflammation is a response of innate immunity [38]. Thus, our finding that Nlrp12 plays a role in microglial activation during EAE may help identify the mechanism that regulates CNS-specific inflammation. Marjan Gharagozloo and Tara M.
Mahvelati are first co-authors.

APCs: antigen-presenting cells
BBB: blood-brain barrier
CCL22: CC chemokine ligand 22
CCL5: CC chemokine ligand 5
CCR5: CC chemokine receptor 5
CFA: complete Freund's adjuvant
CFSE: carboxyfluorescein diacetate succinimidyl ester
COX2: cyclooxygenase 2
DCs: dendritic cells
EAE: experimental autoimmune encephalomyelitis
GFAP: glial fibrillary acidic protein
GM: gray matter
Iba1: ionized calcium binding adaptor molecule 1
IFNγ: interferon gamma
IL-1β: interleukin-1-beta
IL-4: interleukin-4
iNOS: inducible nitric oxide synthase
LRR: leucine-rich repeat
MHC class II: major histocompatibility complex class II
MIP1α: macrophage inhibitory protein 1 alpha
MOG: myelin oligodendrocyte glycoprotein
mRNA: messenger ribonucleic acid
NACHT: NAIP (neuronal apoptosis inhibitor protein), C2TA (MHC class 2 transcription activator), HET-E (incompatibility locus protein from Podospora anserina), and TP1 (telomerase-associated protein)
NF-κB: nuclear factor-kappa B
NK: natural killer
NLR: nod-like receptors
OVA: ovalbumin
PAMP: pathogen-associated molecular patterns
PRR: pathogen-recognition receptors
Th2: T helper cell 2
TNFα: tumor necrosis factor alpha
WM: white matter
WT: wild type

We thank the Multiple Sclerosis Society of Canada and Fonds de recherche du Québec—Santé for financial support. MG, TM, and DG designed the experiments and wrote the manuscript. MG, TM, and EI performed the experiments. PG made the silica analysis. EZ, BD, SI, and AA helped with the complementary experiments in T cells. All authors read and approved the manuscript.

Program of Immunology, Department of Pediatrics, CR-CHUS, Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Quebec, Canada
Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada

Browning V, Joseph M, Sedrak M. Multiple sclerosis: a comprehensive review for the physician assistant. JAAPA. 2012;25:24–9.
Lucchinetti C, Bruck W, Parisi J, Scheithauer B, Rodriguez M, Lassmann H. Heterogeneity of multiple sclerosis lesions: implications for the pathogenesis of demyelination.
Ann Neurol. 2000;47:707–17.
Hemmer B, Kerschensteiner M, Korn T. Role of the innate and adaptive immune responses in the course of multiple sclerosis. Lancet Neurol. 2015;14:406–19.
Goverman J. Autoimmune T cell responses in the central nervous system. Nat Rev Immunol. 2009;9:393.
Ohsawa K, Imai Y, Sasaki Y, Kohsaka S. Microglia/macrophage-specific protein Iba1 binds to fimbrin and enhances its actin-bundling activity. J Neurochem. 2004;88:844–56.
Hernandez-Pedro NY, Espinosa-Ramirez G, de la Cruz VP, Pineda B, Sotelo J. Initial immunopathogenesis of multiple sclerosis: innate immune response. Clin Dev Immunol. 2013;2013:413465.
David S, Kroner A. Repertoire of microglial and macrophage responses after spinal cord injury. Nat Rev Neurosci. 2011;12:388–99.
Kumar H, Kawai T, Akira S. Pathogen recognition by the innate immune system. Int Rev Immunol. 2011;30:16–34.
Tuncer S, Fiorillo MT, Sorrentino R. The multifaceted nature of NLRP12. J Leukoc Biol. 2014;96:991–1000.
Kufer TA, Sansonetti PJ. NLR functions beyond pathogen recognition. Nat Immunol. 2011;12:121–8.
Lawrence T. The nuclear factor NF-kappaB pathway in inflammation. Cold Spring Harb Perspect Biol. 2009;1:a001651.
Gasparini C, Feldmann M. NF-kappaB as a target for modulating inflammatory responses. Curr Pharm Des. 2012;18:5735–45.
Miller SD, Karpus WJ, Davidson TS. Experimental autoimmune encephalomyelitis in the mouse. Curr Protoc Immunol. 2007;Chapter 15:Unit 15.1.
Schmittgen TD, Livak KJ. Analyzing real-time PCR data by the comparative CT method. Nat Protoc. 2008;3:1101–8.
Brahmachari S, Fung YK, Pahan K. Induction of glial fibrillary acidic protein expression in astrocytes by nitric oxide. J Neurosci. 2006;26:4930–9.
Lukens JR, Gurung P, Shaw PJ, Barr MJ, Zaki MH, Brown SA, et al. The NLRP12 sensor negatively regulates autoinflammatory disease by modulating interleukin-4 production in T cells. Immunity. 2015;42:654–64.
Lich JD, Williams KL, Moore CB, Arthur JC, Davis BK, Taxman DJ, et al. Monarch-1 suppresses non-canonical NF-kappaB activation and p52-dependent chemokine expression in monocytes. J Immunol. 2007;178:1256–60.
Williams KL, Lich JD, Duncan JA, Reed W, Rallabhandi P, Moore C, et al. The CATERPILLER protein Monarch-1 is an antagonist of Toll-like receptor-, tumor necrosis factor α-, and Mycobacterium tuberculosis-induced pro-inflammatory signals. J Biol Chem. 2005;280:39914–24.
Lich JD, Ting JP. Monarch-1/PYPAF7 and other CATERPILLER (CLR, NOD, NLR) proteins with negative regulatory functions. Microbes Infect. 2007;9:672–6.
Shami PJ, Kanai N, Wang LY, Vreeke TM, Parker CH. Identification and characterization of a novel gene that is upregulated in leukaemia cells by nitric oxide. Br J Haematol. 2001;112:138–47.
Smith KJ, Lassmann H. The role of nitric oxide in multiple sclerosis. Lancet Neurol. 2002;1:232–41.
Zaki MH, Vogel P, Malireddi RK, Body-Malapel M, Anand PK, Bertin J, et al. The NOD-like receptor NLRP12 attenuates colon inflammation and tumorigenesis. Cancer Cell. 2011;20:649–60.
Arthur JC, Lich JD, Wilson JE, Ye Z, Allen IC, Gris D, et al. NLRP12 controls dendritic and myeloid cell migration to affect contact hypersensitivity. J Immunol. 2010;185:4515–9.
Constantinescu CS, Farooqi N, O'Brien K, Gran B. Experimental autoimmune encephalomyelitis (EAE) as a model for multiple sclerosis (MS). Br J Pharmacol. 2011;164:1079–106.
Allen IC, Wilson JE, Schneider M, Lich JD, Roberts RA, Arthur JC, et al. NLRP12 suppresses colon inflammation and tumorigenesis through the negative regulation of noncanonical NF-kappaB signaling. Immunity. 2012;36:742–54.
Ye Z, Lich JD, Moore CB, Duncan JA, Williams KL, Ting JP. ATP binding by Monarch-1/NLRP12 is critical for its inhibitory function. Mol Cell Biol. 2008;28:1841–50.
Eitas TK, Chou WC, Wen H, Gris D, Robbins GR, Brickey J, et al. The nucleotide-binding leucine-rich repeat (NLR) family member NLRX1 mediates protection against experimental autoimmune encephalomyelitis and represses macrophage/microglia-induced inflammation. J Biol Chem. 2014;289:4173–9.
Jha S, Srivastava SY, Brickey WJ, Iocca H, Toews A, Morrison JP, et al. The inflammasome sensor, NLRP3, regulates CNS inflammation and demyelination via caspase-1 and interleukin-18. J Neurosci. 2010;30:15811–20.
Lira SA, Furtado GC. The biology of chemokines and their receptors. Immunol Res. 2012;54:111–20.
Jiang Y, Salafranca MN, Adhikari S, Xia Y, Feng L, Sonntag MK, et al. Chemokine receptor expression in cultured glia and rat experimental allergic encephalomyelitis. J Neuroimmunol. 1998;86:1–12.
Sorce S, Myburgh R, Krause KH. The chemokine receptor CCR5 in the central nervous system. Prog Neurobiol. 2011;93:297–311.
Zang YC, Samanta AK, Halder JB, Hong J, Tejada-Simon MV, Rivera VM, et al. Aberrant T cell migration toward RANTES and MIP-1 alpha in patients with multiple sclerosis. Overexpression of chemokine receptor CCR5. Brain. 2000;123(Pt 9):1874–82.
Holman DW, Klein RS, Ransohoff RM. The blood–brain barrier, chemokines and multiple sclerosis. Biochim Biophys Acta. 2011;1812:220–30.
Shaftel SS, Carlson TJ, Olschowka JA, Kyrkanides S, Matousek SB, O'Banion MK. Chronic interleukin-1beta expression in mouse brain leads to leukocyte infiltration and neutrophil-independent blood brain barrier permeability without overt neurodegeneration. J Neurosci. 2007;27:9301–9.
Larochelle C, Alvarez JI, Prat A. How do immune cells overcome the blood–brain barrier in multiple sclerosis? FEBS Lett. 2011;585:3770–80.
Gris D, Ye Z, Iocca HA, Wen H, Craven RR, Gris P, et al. NLRP3 plays a critical role in the development of experimental autoimmune encephalomyelitis by mediating Th1 and Th17 responses. J Immunol. 2010;185:974–81.
Inoue M, Williams KL, Gunn MD, Shinohara ML. NLRP3 inflammasome induces chemotactic immune cell migration to the CNS in experimental autoimmune encephalomyelitis. Proc Natl Acad Sci U S A. 2012;109:10480–5.
Newton K, Dixit VM. Signaling in innate immunity and inflammation. Cold Spring Harb Perspect Biol. 2012;4:a006049.
May 2019, 13(2): 221-233. doi: 10.3934/amc.2019015

Comparison analysis of Ding's RLWE-based key exchange protocol and NewHope variants

Xinwei Gao, Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, No.3 ShangYuanCun, Haidian District, Beijing 100044, China

* Corresponding author: Xinwei Gao

Received January 2018; Revised May 2018; Published February 2019

In this paper, we present a comparison study of three RLWE key exchange protocols: one from Ding et al. in 2012 (DING12) and two from Alkim et al. in 2016 (NewHope and NewHope-Simple). We compare and analyze the protocol construction, the notion of designing and realizing key exchange, the signal computation, the error reconciliation, and the cost of these three protocols. We show that NewHope and NewHope-Simple share a very similar notion with DING12, in the sense that the NewHope variants also send additional bits of small size (i.e., a signal) to assist error reconciliation, an idea first practically proposed in DING12. We believe that DING12 is the first work to present complete LWE- and RLWE-based key exchange constructions. The idea of sending additional information in order to realize error reconciliation and key exchange in NewHope and NewHope-Simple remains the same as in DING12, although the concrete approaches to computing the signal and reconciling errors differ.

Keywords: Post-quantum, key exchange, RLWE, error reconciliation, comparison, analysis.

Mathematics Subject Classification: Primary: 94A60, 11T71; Secondary: 14G50.

Citation: Xinwei Gao. Comparison analysis of Ding's RLWE-based key exchange protocol and NewHope variants. Advances in Mathematics of Communications, 2019, 13 (2): 221-233. doi: 10.3934/amc.2019015
https://en.wikipedia.org/w/index.php?title=24-cell&oldid=822760596

M. R. Albrecht, On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL, in Annual International Conference on the Theory and Applications of Cryptographic Techniques, Springer, 10211 (2017), 103-129.

M. R. Albrecht, F. Göpfert, F. Virdia and T. Wunderer, Revisiting the expected cost of solving uSVP and applications to LWE, in International Conference on the Theory and Application of Cryptology and Information Security, Springer, 10624 (2017), 297-322.

M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, Journal of Mathematical Cryptology, 9 (2015), 169-203. doi: 10.1515/jmc-2015-0016.

E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, NewHope without reconciliation, IACR Cryptology ePrint Archive, 2016 (2016), 1157.

E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange - a new hope, in USENIX Security Symposium, 2016, 327-343.

J. Bos, C. Costello, L. Ducas, I. Mironov, M. Naehrig, V. Nikolaenko, A. Raghunathan and D. Stebila, Frodo: Take off the ring! Practical, quantum-secure key exchange from LWE, in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016, 1006-1018. doi: 10.1145/2976749.2978425.

J. W. Bos, C. Costello, M. Naehrig and D. Stebila, Post-quantum key exchange for the TLS protocol from the ring learning with errors problem, in Security and Privacy (SP), 2015 IEEE Symposium on, IEEE, 2015, 553-570. doi: 10.1109/SP.2015.40.

W. Diffie and M. Hellman, New directions in cryptography, IEEE Transactions on Information Theory, 22 (1976), 644-654. doi: 10.1109/tit.1976.1055638.

J. Ding, S. Alsayigh, J. Lancrenon, S. RV and M. Snook, Provably secure password authenticated key exchange based on RLWE for the post-quantum world, in Cryptographers' Track at the RSA Conference, Springer, 10159 (2017), 183-204.

J. Ding, X. Xie and X. Lin, A simple provably secure key exchange scheme based on the learning with errors problem, IACR Cryptology ePrint Archive, 2012 (2012), 688.

M. S. Dousti and R. Jalili, FORSAKES: A forward-secure authenticated key exchange protocol based on symmetric key-evolving schemes, Advances in Mathematics of Communications, 9 (2015), 471-514. doi: 10.3934/amc.2015.9.471.

X. Gao, J. Ding, L. Li and J. Liu, Practical randomized RLWE-based key exchange against signal leakage attack, IEEE Transactions on Computers, 67 (2018), 1584-1593. doi: 10.1109/TC.2018.2808527.

X. Gao, J. Ding, L. Li, S. RV and J. Liu, Efficient implementation of password-based authenticated key exchange from RLWE and post-quantum TLS, International Journal of Network Security, 20 (2018), 923-930.

X. Gao, J. Ding, J. Liu and L. Li, Post-quantum secure remote password protocol from RLWE problem, in International Conference on Information Security and Cryptology, Springer, 10726 (2017), 99-116.

X. Gao, J. Ding, S. RV, L. Li and J. Liu, Comparison analysis and efficient implementation of reconciliation-based RLWE key exchange protocol, IACR Cryptology ePrint Archive, 2017 (2017), 1178.

X. Gao, L. Li, J. Ding, J. Liu, S. RV and Z. Liu, Fast discretized Gaussian sampling and post-quantum TLS ciphersuite, in International Conference on Information Security Practice and Experience, Springer, 2017, 551-565. doi: 10.1007/978-3-319-72359-4_33.

S. González, L. Huguet, C. Martínez and H. Villafañe, Discrete logarithm like problems and linear recurring sequences, Advances in Mathematics of Communications, 7 (2013), 187-195. doi: 10.3934/amc.2013.7.187.

V. Lyubashevsky, C. Peikert and O. Regev, On ideal lattices and learning with errors over rings, in Annual International Conference on the Theory and Applications of Cryptographic Techniques, Springer, 6110 (2010), 1-23. doi: 10.1007/978-3-642-13190-5_1.

G. Micheli, Cryptanalysis of a noncommutative key exchange protocol, Advances in Mathematics of Communications, 9 (2015), 247-253. doi: 10.3934/amc.2015.9.247.

C. Peikert, Public-key cryptosystems from the worst-case shortest vector problem, in Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, ACM, 2009, 333-342.

C. Peikert, Lattice cryptography for the internet, in International Workshop on Post-Quantum Cryptography, Springer, 8772 (2014), 197-219. doi: 10.1007/978-3-319-11659-4_12.

C. Peikert, A decade of lattice cryptography, Foundations and Trends in Theoretical Computer Science, 10 (2014), 283-424. doi: 10.1561/0400000074.

T. Pöppelmann and T. Güneysu, Towards practical lattice-based public-key encryption on reconfigurable hardware, in International Conference on Selected Areas in Cryptography, Springer, 2013, 68-85.

O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Journal of the ACM (JACM), 56 (2009), Art. 34, 40 pp. doi: 10.1145/1568318.1568324.

P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Review, 41 (1999), 303-332. doi: 10.1137/S0036144598347011.

J. Zhang, Z. Zhang, J. Ding, M. Snook and Ö. Dagdelen, Authenticated key exchange from ideal lattices, in EUROCRYPT (2), 9057 (2015), 719-751. doi: 10.1007/978-3-662-46803-6_24.

Figure 1. DING12 RLWE key exchange protocol
Figure 2. NewHope RLWE key exchange protocol
Figure 3. Illustrated figure of simplified NewHope error reconciliation ($ d = 2 $)
Figure 4. NewHope-Simple RLWE key exchange protocol
Figure 5.
Illustrated figure of NewHope-Simple and DING12 signal value computation

Table 1. Summary chart of DING12, BCNS15, NewHope, NewHope-Simple

| | DING12 | BCNS15 | NewHope | NewHope-Simple |
| --- | --- | --- | --- | --- |
| Signal computation | Divide $ \mathbb{Z}_q $ into 2 regions; different regions give different values. Signal value $ \in\{0, 1\} $. Function: Sig() | Divide $ \mathbb{Z}_q $ into 2 regions; different regions give different values. Function: $ \langle\cdot\rangle_{q, 2} $ | Difference vector between coefficient vector and center of closest Voronoi cell. Signal value $ \in\{0, 1, 2, 3\} $. Function: HelpRec() | Divide $ \mathbb{Z}_q $ into 8 regions. Signal value $ \in\{0, \cdots, 7\} $. Function: NHSCompress() |
| Error reconciliation | Add, then mod 2 on each coefficient; extract least significant bit. Function: Mod$ _2() $ | Multiply, round, then mod 2 on each coefficient; extract most significant bit. Functions: rec(), $ \lfloor\cdot\rceil_{q, 2} $ | Add signal vector onto coefficient vector; decide key bit according to location of the sum vector. Function: Rec() | Variant of RLWE decryption; recovers encapsulated key. Function: NHSDecode() |
| Degree $ n $ of $ R_q $ | 1024 | 1024 | 1024 | 1024 |
| Signal size | 1 bit per coefficient; $ 1\cdot n = 1024 $ bits | 1 bit per coefficient | | $ 3\cdot n = 3072 $ bits |
| Key size | $ n $ bits | $ n $ bits | 256 bits | 256 bits |
| Category | Diffie-Hellman-like | Diffie-Hellman-like | Diffie-Hellman-like | KEM (Encryption) |
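The DING12 column of Table 1 (a one-bit signal Sig() that splits $ \mathbb{Z}_q $ into two regions, then Mod$ _2 $ reconciliation) can be illustrated on a single $ \mathbb{Z}_q $ coefficient. The following is a toy sketch, not the paper's code: the modulus q, the function names, and the error size are our illustrative choices. Note that in DING12 the difference between the two parties' values is even, because the error terms are multiplied by 2.

```python
# Toy, single-coefficient sketch of DING12-style reconciliation.
# The signal bit says whether a value lies in the "inner" region of Z_q;
# Mod_2 shifts by (q-1)/2 when the signal is 1 and takes the parity of
# the centered representative, so two nearby values (with even
# difference) map to the same key bit.

q = 12289  # an odd, RLWE-friendly prime, used here only for illustration

def centered(v: int) -> int:
    """Representative of v mod q in [-(q-1)/2, (q-1)/2]."""
    v %= q
    return v - q if v > q // 2 else v

def sig(v: int) -> int:
    """Signal bit: 0 if v lies in the inner region [-q/4, q/4], else 1."""
    return 0 if -(q // 4) <= centered(v) <= q // 4 else 1

def mod2(v: int, w: int) -> int:
    """Reconciliation: shift by w*(q-1)/2, then parity of the centered rep."""
    return centered(v + w * ((q - 1) // 2)) % 2

# Alice and Bob hold close values whose difference is even.
k_bob = 7000
k_alice = (k_bob + 2 * 13) % q            # small even offset
w = sig(k_bob)                            # Bob publishes the one-bit signal
assert mod2(k_alice, w) == mod2(k_bob, w)  # both derive the same key bit
```

The interesting point, mirrored by the table, is that a single extra bit per coefficient suffices to undo the mod-q wraparound ambiguity as long as the two values are close enough.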
Journées SL2R à Reims 2018, from Thursday, 18 October 2018 (13:30) to Friday, 19 October 2018 (13:00)

13:30 Welcome. Room: Amphi 2

14:00 Conformally covariant bi-differential operators for differential forms - Khalid Koufany (Université de Lorraine - Nancy). Room: Amphi 2

The classical Rankin-Cohen brackets are bi-differential operators from $C^\infty(\mathbb R)\times C^\infty(\mathbb R)$ into $ C^\infty(\mathbb R)$. They are covariant for the diagonal action of ${\rm SL}(2,\mathbb R)$ through principal series representations. We construct generalizations of these operators, replacing $\mathbb R$ by $\mathbb R^n$, the group ${\rm SL}(2,\mathbb R)$ by the group ${\rm SO}_0(1,n+1)$ viewed as the conformal group of $\mathbb R^n$, and functions by differential forms.

15:00 Poisson transforms adapted to BGG-complexes - Christoph Harrach (University of Vienna, Austria). Room: Amphi 2

Let $G$ be a semisimple Lie group with finite centre, $K$ a maximal compact subgroup and $P$ a parabolic subgroup of $G$. We present a new construction of Poisson transforms between vector bundle valued differential forms on the homogeneous parabolic geometry $G/P$ and its corresponding Riemannian symmetric space $G/K$ which is tailored to the exterior calculus and can be fully described by invariant elements in finite dimensional representations of reductive Lie groups. Furthermore, we show how these transforms are compatible with several invariant differential operators, which induce a strong connection between Bernstein-Gelfand-Gelfand complexes on $G/P$ and twisted de Rham complexes on $G/K$.
Finally, we consider the special case of the real hyperbolic space and its conformal boundary and discuss Poisson transforms of differential forms with values in the bundle associated to the standard representation $\mathbb{R}^{n+1,1}$ of $G = SO(n+1,1)_0$.

16:00 Break. Room: Salle de Séminaire

16:30 K-theory of group C*-algebras and the BGG complex - Pierre Julg (Université d'Orléans). Room: Amphi 2

The Baum-Connes conjecture on the K-theory of group C*-algebras has been a difficult open problem since the beginning of the 1980s. In the last 30 years a programme has been developed to prove the Baum-Connes conjecture with coefficients for semi-simple Lie groups. The tools involved are: the flag manifolds, the BGG complex, and L2 cohomology of symmetric spaces.

17:30 A class of locally compact quantum groups arising from Kohn-Nirenberg quantization - Victor Gayral (Université de Reims Champagne-Ardenne). Room: Amphi 2

Locally compact quantum groups (LCQG) in the setting of von Neumann algebras (aka Kustermans-Vaes quantum groups) are believed to give the correct notion of symmetries of quantum spaces (in the setting of operator algebras). While this theory is fast growing, there are very few examples of (non-compact) LCQG. In this talk, I will explain how the good old Kohn-Nirenberg quantization allows one to construct a new class of LCQG (and also why the very good old Weyl quantization doesn't work here). This is joint work (in progress) with Pierre Bieliavsky, Lars Tuset and Sergiy Neshveyev.

20:00 Dinner (from 8:00 pm)

09:00 Does $"ax+b"$ stand for the solvable analogue of $SL_2(\mathbb{R})$ in deformation theory?
- Ali Baklouti (Université de Sfax, Tunisia). Room: Amphi 2

Let $G$ be a Lie group, $H$ a closed subgroup of $G$ and $\Gamma$ a discontinuous subgroup for the homogeneous space $\mathscr{X}=G/H$, which means that $\Gamma$ is a discrete subgroup of $G$ acting properly discontinuously and fixed point freely on $\mathscr{X}$. For any deformation of $\Gamma$, the deformed discrete subgroup may fail to act discontinuously on $\mathscr{X}$, except for the case when $H$ is compact. The subject of the talk is to emphasize this specific issue and to deal with some questions related to the geometry of the related parameter and deformation spaces, namely the local rigidity conjecture in the nilpotent setting. When $G$ is semi-simple, the analogue of the Selberg-Weil-Kobayashi rigidity theorem in the non-Riemannian setting is recorded, and the role of the group $SL_2(\mathbb{R})$ as a fake twin of the solvable $"ax+b"$ group is also discussed.

10:30 Reduction of symplectic symmetric spaces and étale affine representations - Yannick Voglaire (Université du Luxembourg). Room: Amphi 2

We introduce a notion of symplectic reduction for symplectic symmetric spaces as a means to the study of their structure theory. We show that any such space can be written as a direct product of a semisimple and a completely symplectically reducible one. Underlying symplectic reduction is a notion of so-called pre-Lie triple system. We will explain how these are related to étale affine representations of Lie triple systems, how any symplectic symmetric space and any Jordan triple system yield such a structure, and how they allow one to build new (symplectic) symmetric spaces from old ones.
11:30 Asymptotics of characters and associated cycles of Harish-Chandra modules - Salah Mehdi (Université de Lorraine, Metz). Room: Amphi 2

Abstract: We describe a translation principle for the Dirac index of virtual $({\mathfrak g},K)$-modules. To each coherent family of such modules we attach a polynomial, on the dual of the compact Cartan subalgebra, which expresses the dependence of the leading term in the Taylor expansion of the character of the modules. Finally we explain how this polynomial is related to the multiplicities of the associated cycle of certain Harish-Chandra modules. These results are joint work with P. Pandžić, D. Vogan and R. Zierau.
Foil Calculator

How to Use the Foil Calculator? What Is the FOIL Method? How Can We Use the FOIL Method in Multiplication of Binomial Expressions?

Foil Calculator is a free online tool that simplifies the given expression. Using STUDYQUERIES's foil calculator tool makes the calculation faster, and it displays the simplification in a fraction of a second.

To use the foil calculator, follow these steps:

Step 1: Put the expression into the input field
Step 2: Click "Calculate" to get the simplification
Step 3: The simplified expression will be displayed in the output field

FOIL (first, outer, inner, and last) is an efficient way of remembering how to multiply two binomials in a very organized manner. FOIL refers to the following acronym:

$$\color{red}{First}\Longrightarrow\pmb{\color{Blue}{F}}$$
$$\color{red}{Outer}\Longrightarrow\pmb{\color{Blue}{O}}$$
$$\color{red}{Inner}\Longrightarrow\pmb{\color{Blue}{I}}$$
$$\color{red}{Last}\Longrightarrow\pmb{\color{Blue}{L}}$$

To put this into perspective, suppose we want to multiply two binomials,

$$\left( {a + b} \right)\left( {c + d} \right)$$

The first means multiplying the terms that appear in the first position of each binomial.

$$\left( {\color{red}{a} + b} \right)\left( {\color{red}{c} + d} \right)=\color{red}{a.c}+\_$$

The outer means to multiply the terms that are located at the ends (outermost) of the two binomials when written side-by-side.

$$\left( {\color{red}{a} + b} \right)\left( {c + \color{red}{d}} \right)=a.c+\color{red}{a.d}+\_$$

The inner means to multiply the middle two terms of the binomials when they are side-by-side.

$$\left( {a + \color{red}{b}} \right)\left( {\color{red}{c} + d} \right)=a.c+a.d+\color{red}{b.c}+\_$$

The last means multiplying the terms in the last position of each binomial.
$$\left( {a + \color{red}{b}} \right)\left( {c + \color{red}{d}} \right)=a.c+a.d+b.c+\color{red}{b.d}$$

Taking the four (4) partial products from the first, outer, inner and last, we simply add them together to obtain the final answer.

\(\pmb{\color{red}{Multiply\ the\ binomials \left( {x + 5} \right)\left( {x – 3} \right)\ using\ the\ FOIL\ Method.}}\)

Multiply the pair of terms from each binomial in the first position.

\(\left( {\color{red}{x} + 5} \right)\left( {\color{red}{x} – 3} \right)=\color{red}{x^2}\)

When the two binomials are written side by side, multiply the outer terms.

\(\left( {\color{red}{x} + 5} \right)\left( {x \color{red}{-3}} \right)=x^2-\color{red}{3x}\)

When you write the two binomials side by side, multiply their inner terms.

\(\left( {x + \color{red}{5}} \right)\left( {\color{red}{x} – 3} \right)=x^2-3x+\color{red}{5x}\)

Multiply the pair of terms in each binomial from the last position.

\(\left( {x + \color{red}{5}} \right)\left( {x \color{red}{-3}} \right)=x^2-3x+5x-\color{red}{15}\)

Lastly, combine like terms to simplify: the middle terms with the variable x can be combined.

\(x^2\color{red}{-3x}+\color{red}{5x}-15=x^2+\color{red}{2x}-15\)

\(\pmb{\color{red}{Multiply\ the\ binomials \left( {3x – 7} \right)\left( {2x + 1} \right)\ using\ the\ FOIL\ Method.}}\)

If the first presentation of how to multiply binomials using FOIL did not make sense to you, let me show you another approach, so that you are exposed to a different way of working the same type of problem.
Multiply the first two terms.

\(\left( {\color{red}{3x} – 7} \right)\left( {\color{red}{2x} +1} \right)=\color{red}{6x^2}\)

Multiply the outer terms.

\(\left( {\color{red}{3x} – 7} \right)\left( {2x+ \color{red}{1}} \right)=6x^2+\color{red}{3x}\)

Multiply the inner terms.

\(\left( {3x \color{red}{-7}} \right)\left( {\color{red}{2x} +1} \right)=6x^2+3x \color{red}{-14x}\)

Multiply the last terms.

\(\left( {3x \color{red}{-7}} \right)\left( {2x+ \color{red}{1}} \right)=6x^2+3x-14x \color{red}{-7}\)

Using FOIL, we arrive at this polynomial, which can be simplified by combining similar terms. In order to get a single value, the two middle x-terms can be combined: take 3x and -14x together.

\(6x^2+\color{red}{3x}-\color{red}{14x}-7=6x^2\color{red}{-11x}-7\)

\(\pmb{\color{red}{Multiply\ the\ binomials \left( { -4x + 5} \right)\left( {x + 1} \right)\ using\ the\ FOIL\ Method.}}\)

Another way of doing this is to list the four partial products, and then add them together to get the answer.

Multiply the first terms: \((-4x)\times (x)=\color{red}{-4x^2}\)
Multiply the outer terms: \((-4x)\times(1)=\color{red}{-4x}\)
Multiply the inner terms: \((5)\times(x)=\color{red}{5x}\)
Multiply the last terms: \((5)\times(1)=\color{red}{5}\)

Get the sum of the partial products, and then combine similar terms.

\((-4x^2)+(-4x)+(5x)+(5) = -4x^2+x+5\)

\(\pmb{\color{red}{Multiply\ the\ binomials \left( { -7x – 3} \right)\left( { -2x + 8} \right)\ using\ the\ FOIL\ Method.}}\)

Multiply the first terms: \((-7x)\times (-2x)=\color{red}{14x^2}\)
Multiply the outer terms: \((-7x)\times(8)=\color{red}{-56x}\)
Multiply the inner terms: \((-3)\times(-2x)=\color{red}{+6x}\)
Multiply the last terms: \((-3)\times(8)=\color{red}{-24}\)

Finally, combine like terms to finish this off!
\(\left( { -7x – 3} \right)\left( { -2x + 8} \right)=(14x^2)+(-56x)+(6x)+(-24) = 14x^2-50x-24\)

\(\pmb{\color{red}{Multiply\ the\ binomials \left( { -x – 1} \right)\left( { -x + 1} \right)\ using\ the\ FOIL\ Method.}}\)

Multiply the first terms: \((-x)\times (-x)=\color{red}{x^2}\)
Multiply the outer terms: \((-x)\times(1)=\color{red}{-x}\)
Multiply the inner terms: \((-1)\times(-x)=\color{red}{+x}\)
Multiply the last terms: \((-1)\times(1)=\color{red}{-1}\)

\(\left( { -x – 1} \right)\left( { -x + 1} \right)=(x^2)+(-x)+(x)+(-1) = x^2-1\)

\(\pmb{\color{red}{Multiply\ the\ binomials \left( {6x + 5} \right)\left( {5x + 3} \right)\ using\ the\ FOIL\ Method.}}\)

Multiply the first terms: \((6x)\times (5x)=\color{red}{30x^2}\)
Multiply the outer terms: \((6x)\times(3)=\color{red}{18x}\)
Multiply the inner terms: \((5)\times(5x)=\color{red}{25x}\)
Multiply the last terms: \((5)\times(3)=\color{red}{15}\)

\(\left( {6x + 5} \right)\left( {5x + 3} \right)=(30x^2)+(18x)+(25x)+(15) = 30x^2+43x+15\)

How do you FOIL in math?

First – multiply the first terms. Outside – multiply the outside/outer terms. Inside – multiply the inside/inner terms. Last – multiply the last terms.

How do you FOIL terms in parentheses?

FOIL tells you to multiply the first terms in each of the parentheses, then multiply the two terms that are on the "outside" (furthest from each other), then the two terms that are on the "inside" (closest to each other), and then the last terms in each of the parentheses.

What does L stand for in the FOIL method?

The FOIL method is made up of four multiplication steps. Let's see what each letter in FOIL stands for, one at a time. The 'F' stands for first. The 'O' stands for outside. The 'I' stands for inside, and the 'L' stands for last.

How do you multiply fractions?

There are 3 simple steps to multiply fractions: multiply the top numbers (the numerators); multiply the bottom numbers (the denominators); simplify the fraction if needed.

What is factored form in algebra?
A factored form is a parenthesized algebraic expression. In effect, a factored form is a product of sums of products, or a sum of products of sums. Any logic function can be represented by a factored form, and any factored form is a representation of some logic function.

How do you distribute binomials?

Break the first binomial into its two terms. Distribute each term of the first binomial over the other terms. Multiply the terms. Simplify and combine any like terms.

Is FOIL the distributive property?

A shortcut called FOIL is sometimes used to find the product of two binomials. The FOIL method arises out of the distributive property: we are simply multiplying each term of the first binomial by each term of the second binomial, and then combining like terms.

Who made FOIL math?

Hidden behind this simple acronym lies a step-by-step guide to solving a seemingly difficult mathematical problem. Coined by William Betz in his 1929 textbook, Algebra for Today, the FOIL technique of multiplying two binomials is widely known by children and adults all around the world.

Is FOIL a mathematical concept?

In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials; hence the method may be referred to as the FOIL method.
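The FOIL steps described above are mechanical enough to automate. Below is a short illustrative sketch (not part of the STUDYQUERIES calculator; the function name and the coefficient-pair representation are our own choices) that multiplies two linear binomials and returns the coefficients of the quadratic product.

```python
def foil(b1, b2):
    """Multiply the binomials (a*x + b) and (c*x + d) using FOIL.

    b1 = (a, b), b2 = (c, d); returns the coefficients (A, B, C)
    of the product A*x**2 + B*x + C.
    """
    a, b = b1
    c, d = b2
    first = a * c          # x**2 term
    outer = a * d          # x term
    inner = b * c          # x term
    last = b * d           # constant term
    return (first, outer + inner, last)

# The worked example (3x - 7)(2x + 1) = 6x^2 - 11x - 7:
print(foil((3, -7), (2, 1)))   # -> (6, -11, -7)
```

The middle coefficient is the sum of the outer and inner products, which is exactly the "combine the like middle terms" step of the worked examples.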
Exact controllability of linear stochastic differential equations and related problems

Mathematical Control & Related Fields, June 2017, 7(2): 305-345. doi: 10.3934/mcrf.2017011

Yanqing Wang (School of Mathematics and Statistics, Southwest University, Chongqing 400715, China), Donghui Yang (School of Mathematics and Statistics, School of Information Science and Engineering, Central South University, Changsha 410075, China), Jiongmin Yong (Department of Mathematics, University of Central Florida, Orlando, FL 32816, USA), and Zhiyong Yu (School of Mathematics, Shandong University, Jinan 250100, China)

* Corresponding author: Zhiyong Yu

Received January 2017; Published April 2017

Fund Project: the National Natural Science Foundation of China (11471192, 11371375, 11526167), the Fundamental Research Funds for the Central Universities (SWU113038, XDJK2014C076), the Natural Science Foundation of Shandong Province (JQ201401), the Natural Science Foundation of CQCSTC (2015jcyjA00017), China Postdoctoral Science Foundation and Central South University Postdoctoral Science Foundation, and NSF Grant DMS-1406776.

A notion of $L^p$-exact controllability is introduced for linear controlled (forward) stochastic differential equations with random coefficients. Several sufficient conditions are established for this kind of exact controllability. Further, it is proved that the $L^p$-exact controllability, the validity of an observability inequality for the adjoint equation, the solvability of an optimization problem, and the solvability of an $L^p$-type norm optimal control problem are all equivalent.

Keywords: Controlled stochastic differential equation, $L^p$-exact controllability, observability inequality, norm optimal control problem.

Mathematics Subject Classification: Primary: 93B05, 93E20; Secondary: 60H10.
Citation: Yanqing Wang, Donghui Yang, Jiongmin Yong, Zhiyong Yu. Exact controllability of linear stochastic differential equations and related problems. Mathematical Control & Related Fields, 2017, 7 (2) : 305-345. doi: 10.3934/mcrf.2017011
Eigenseries solutions to optimal control problem and controllability problems on hyperbolic PDEs. Discrete & Continuous Dynamical Systems - B, 2010, 13 (2) : 305-325. doi: 10.3934/dcdsb.2010.13.305 Haiyang Wang, Zhen Wu. Time-inconsistent optimal control problem with random coefficients and stochastic equilibrium HJB equation. Mathematical Control & Related Fields, 2015, 5 (3) : 651-678. doi: 10.3934/mcrf.2015.5.651 Tatsien Li, Bopeng Rao, Zhiqiang Wang. Exact boundary controllability and observability for first order quasilinear hyperbolic systems with a kind of nonlocal boundary conditions. Discrete & Continuous Dynamical Systems - A, 2010, 28 (1) : 243-257. doi: 10.3934/dcds.2010.28.243 Diana Keller. Optimal control of a linear stochastic Schrödinger equation. Conference Publications, 2013, 2013 (special) : 437-446. doi: 10.3934/proc.2013.2013.437 Fulvia Confortola, Elisa Mastrogiacomo. Optimal control for stochastic heat equation with memory. Evolution Equations & Control Theory, 2014, 3 (1) : 35-58. doi: 10.3934/eect.2014.3.35 Manuel González-Burgos, Sergio Guerrero, Jean Pierre Puel. Local exact controllability to the trajectories of the Boussinesq system via a fictitious control on the divergence equation. Communications on Pure & Applied Analysis, 2009, 8 (1) : 311-333. doi: 10.3934/cpaa.2009.8.311 Jan-Hendrik Webert, Philip E. Gill, Sven-Joachim Kimmerle, Matthias Gerdts. A study of structure-exploiting SQP algorithms for an optimal control problem with coupled hyperbolic and ordinary differential equation constraints. Discrete & Continuous Dynamical Systems - S, 2018, 11 (6) : 1259-1282. doi: 10.3934/dcdss.2018071 HTML views (17) Yanqing Wang Donghui Yang Jiongmin Yong Zhiyong Yu
October 2018, 11(5): 825-843. doi: 10.3934/dcdss.2018051

A flame propagation model on a network with application to a blocking problem

Fabio Camilli 1, Elisabetta Carlini 2, and Claudio Marchi 3

1. Dip. di Scienze di Base e Applicate per l'Ingegneria, "Sapienza" Università di Roma, via Scarpa 16, 00161 Roma, Italy
2. Dipartimento di Matematica, "Sapienza" Università di Roma, p.le A. Moro 5, 00185 Roma, Italy
3. Dip. di Ingegneria dell'Informazione, Università di Padova, via Gradenigo 6/B, 35131 Padova, Italy

Received February 2017. Revised August 2017. Published June 2018.

We consider the Cauchy problem

$\left\{ \begin{array}{ll} \partial_t u + H(x,Du) = 0 & (x,t)\in \Gamma \times (0,T) \\ u(x,0) = u_0(x) & x\in \Gamma \end{array} \right.$

where $\Gamma$ is a network and $H$ is a positive homogeneous Hamiltonian which may change from edge to edge. In the first part of the paper, we prove that the Hopf-Lax type formula gives the (unique) viscosity solution of the problem. In the latter part of the paper we study a flame propagation model in a network and an optimal strategy to block a fire breaking up in some part of a pipeline; some numerical simulations are provided.

Keywords: Evolutive Hamilton-Jacobi equation, viscosity solution, network, Hopf-Lax formula, approximation.

Mathematics Subject Classification: Primary: 35D40; Secondary: 35R02, 35F21, 65M06, 49L25.

Citation: Fabio Camilli, Elisabetta Carlini, Claudio Marchi. A flame propagation model on a network with application to a blocking problem. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5): 825-843. doi: 10.3934/dcdss.2018051
Figure 1. Test1. Graph structure where $R_0$ is represented by the circle marker and the vertices by the rhombus markers (Top Left).
Color map of the time $u_h(x)$ at which a node $x$ gets burnt, computed by (29) (Top Right), and its 3D view (Bottom).
Figure 2. Test1. Time to reach a point $x$ from the operation center $x_0$ (circle marker) and set of the admissible nodes $V^h_{ad}$ (square marker). 2D view (Left) and 3D view (Right).
Figure 3. Test1. Optimal blocking strategy $\sigma ^h_{opt}$ (square marker), preserved network region (cross marker) and minimum burnt network region (continuous line) starting from $R_0$ (circle marker).
Figure 4. Test2. Graph structure where $R_0$ is represented by the circle markers and the vertices by the rhombus markers (Top Left). Color map of the time $u_h(x)$ at which a node $x$ gets burnt, computed by (29) (Top Right), and its 3D view (Bottom).
Figure 5. Test2. Time to reach a point $x$ from the operation center $x_0$ (circle marker) and set of the admissible nodes $V^h_{ad}$ (square markers). 2D view (Left) and 3D view (Right).
Figure 6. Test2. Optimal blocking strategy $\sigma ^h_{opt}$ (square markers), preserved network region (thin line) and minimum burnt network region (thick line) starting from $R_0$ (circle markers).
Figure 7. Test3. Graph structure where $R_0$ is represented by the circle markers and the vertices by the rhombus markers (Top Left). Color map of the time $u_h(x)$ at which a node $x$ gets burnt, computed by (29) (Top Right), and its 3D view (Bottom).
Figure 9. Test3. Optimal blocking strategy $\sigma ^h_{opt}$ (square marker), preserved network region (thin line) and minimum burnt network region (thick line) starting from $R_0$ (circle marker).
The Best Leaper in the Animal Kingdom Is the Puma

The best leaper in the animal kingdom is the puma, which can jump to a height of 12 ft (about 3.7 m) when leaving the ground at an angle of 45°. With what speed, in SI units, must the animal leave the ground to reach that height? A related exercise asks for an estimate of the maximum kinetic energy of the puma, and textbook variants of the problem use slightly different numbers, such as 11.5 ft at 44°, 12.4 ft at 43°, or 11.4 ft at 40°.

At the peak of the jump the vertical velocity component has vanished, so the height satisfies h = (v sin θ)² / (2g), giving v = √(2gh) / sin θ ≈ 12 m/s for h = 3.7 m and θ = 45°.
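Numerically, the required launch speed follows directly from h = (v sin θ)²/(2g); a short sketch (the function name is ours):

```python
import math

def launch_speed(height_m, angle_deg, g=9.8):
    """Launch speed needed to reach a peak height of height_m when
    leaving the ground at angle_deg: at the peak the vertical velocity
    is zero, so height = (v * sin(angle))**2 / (2 * g)."""
    return math.sqrt(2 * g * height_m) / math.sin(math.radians(angle_deg))

# The classic numbers: 12 ft is about 3.7 m, at a 45 degree takeoff.
print(round(launch_speed(3.7, 45), 1))  # 12.0 (m/s)
```

The textbook variants plug in straight away, e.g. `launch_speed(11.5 * 0.3048, 44)` for the 11.5 ft, 44° version.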
Comparison of basicity of o-phenanthroline and ammonia

I was trying to compare the basicity of o-phenanthroline and ammonia. To my understanding, the relevant factors in o-phenanthroline are:

- o-Phenanthroline has two nitrogen atoms at the 1,10 positions.
- It has a free lone pair on each nitrogen that is not delocalized by resonance.
- The nitrogens are $\mathrm{sp^2}$ hybridized, which is more electronegative than the $\mathrm{sp^3}$ lone pair on ammonia.

However, would there be hydrogen bonding in its conjugate acid? If so, then how do we compare its basicity with a base like ammonia?

organic-chemistry acid-base

A quote from here explains and gives a good answer to your question:

Amines are the most basic of the common organic functional groups, but are still fairly weak bases. Protonation occurs on the non-bonded electron pair exclusively. The basicity of amines is directly dependent on the "electron density" at the nitrogen atom. Both inductive and resonance effects can alter the basicity of a nitrogen atom. Hybridization on the $\ce{N}$ also affects basicity. An increase in $\mathrm{s}$ character on an atom increases the electronegativity of that atom, which favors acidity and therefore disfavors basicity. Hence $\mathrm{sp^3}$-hybridized nitrogen is more basic than either $\mathrm{sp^2}$- or $\mathrm{sp}$-hybridized nitrogen.

The availability of this non-bonding lone pair is a key factor in basicity. 1,10-Phenanthroline is a pyridine derivative: its nitrogen lone pairs occupy in-plane $\mathrm{sp^2}$ orbitals, and the greater $\mathrm{s}$ character of these orbitals holds the electrons closer to the nucleus, making them less available to an incoming proton than the $\mathrm{sp^3}$ lone pair of ammonia. Therefore, in aqueous solutions, the basicity of $\ce{NH3}$ ($\mathrm{p}K_\mathrm{a} = 9.3$) is greater than the basicity of 1,10-phenanthroline ($\mathrm{p}K_\mathrm{a} \approx 4.9$).
This is similar to $\mathrm{p}K_\mathrm{a}$ of pyridine, which is $5.2$ in water (comparison with piperidine is depicted in following scheme): Keep in mind that there are some other factors are effecting the basicity of amine as well. For example, the ring sizes of cyclic amines (Ref.1): The order is 5-membered $\ge$ 4-membered $\gt$ 6-membered $\gg$ 3-membered. Scott Searles, Milton Tamres, Frank Block, Lloyd A. Quarterman, "Hydrogen Bonding and Basicity of Cyclic Imines," J. Am. Chem. Soc. 1956, 78(19), 4917–4920 (https://doi.org/10.1021/ja01600a029). Mathew MahindaratneMathew Mahindaratne Not the answer you're looking for? Browse other questions tagged organic-chemistry acid-base or ask your own question. Protonation of Guanidine Comparing basicity of imidazole and 2-imidazoline Basicity of nitrogen heterocyclic compounds Basicity comparison of ammonia derivatives (and guanidine) Confused about identifying delocalized electron pairs in Isoniazid Why is aniline more basic than pyrrole? Comparing basic strengths between pyridine and 1,2-dihydropyrazine Comparing the basicity of heterocyclic amines
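The practical meaning of the $\mathrm{p}K_\mathrm{a}$ gap can be made concrete with the Henderson–Hasselbalch relation, $[\ce{BH+}]/[\ce{B}] = 10^{\mathrm{p}K_\mathrm{a} - \mathrm{pH}}$. A short sketch comparing the two bases at pH 7 (the function name is ours; the $\mathrm{p}K_\mathrm{a}$ values are those of the conjugate acids quoted above):

```python
def fraction_protonated(pka, ph):
    """Fraction of a base B present as its conjugate acid BH+ at a given
    pH, from Henderson-Hasselbalch: [BH+]/[B] = 10**(pKa - pH)."""
    ratio = 10 ** (pka - ph)
    return ratio / (1 + ratio)

# pKa of the conjugate acids: ammonium 9.3, phenanthrolinium ~4.9.
for name, pka in [("ammonia", 9.3), ("1,10-phenanthroline", 4.9)]:
    print(name, round(fraction_protonated(pka, ph=7.0), 3))
# Ammonia is ~99.5% protonated at pH 7; 1,10-phenanthroline less than 1%.
```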
Statistics and Computing, September 2017, Volume 27, Issue 5, pp 1413–1432

Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC

Aki Vehtari, Andrew Gelman, Jonah Gabry

First Online: 30 August 2016

Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values. LOO and WAIC have various advantages over simpler estimates of predictive error such as AIC and DIC but are less used in practice because they involve additional computational steps. Here we lay out fast and stable computations for LOO and WAIC that can be performed using existing simulation draws. We introduce an efficient computation of LOO using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights. Although WAIC is asymptotically equal to LOO, we demonstrate that PSIS-LOO is more robust in the finite case with weak priors or influential observations. As a byproduct of our calculations, we also obtain approximate standard errors for estimated predictive errors and for comparison of predictive errors between two models. We implement the computations in an R package called loo and demonstrate using models fit with the Bayesian inference package Stan.

Keywords: Bayesian computation; Leave-one-out cross-validation (LOO); K-fold cross-validation; Widely applicable information criterion (WAIC); Stan; Pareto smoothed importance sampling (PSIS)

An erratum to this article is available at http://dx.doi.org/10.1007/s11222-016-9709-3.

1 Introduction

After fitting a Bayesian model we often want to measure its predictive accuracy, for its own sake or for purposes of model comparison, selection, or averaging (Geisser and Eddy 1979; Hoeting et al. 1999; Vehtari and Lampinen 2002; Ando and Tsay 2010; Vehtari and Ojanen 2012).
Cross-validation and information criteria are two approaches to estimating out-of-sample predictive accuracy using within-sample fits (Akaike 1973; Stone 1977). In this article we consider computations using the log-likelihood evaluated at the usual posterior simulations of the parameters. Computation time for the predictive accuracy measures should be negligible compared to the cost of fitting the model and obtaining posterior draws in the first place. Exact cross-validation requires re-fitting the model with different training sets. Approximate leave-one-out cross-validation (LOO) can be computed easily using importance sampling (IS; Gelfand et al. 1992; Gelfand 1996) but the resulting estimate is noisy, as the variance of the importance weights can be large or even infinite (Peruggia 1997; Epifani et al. 2008). Here we propose to use Pareto smoothed importance sampling (PSIS), a new approach that provides a more accurate and reliable estimate by fitting a Pareto distribution to the upper tail of the distribution of the importance weights. PSIS allows us to compute LOO using importance weights that would otherwise be unstable. The widely applicable or Watanabe-Akaike information criterion (WAIC; Watanabe 2010) can be viewed as an improvement on the deviance information criterion (DIC) for Bayesian models. DIC has gained popularity in recent years, in part through its implementation in the graphical modeling package BUGS (Spiegelhalter et al. 2002; Spiegelhalter et al. 1994, 2003), but it is known to have some problems, which arise in part from not being fully Bayesian in that it is based on a point estimate (van der Linde 2005; Plummer 2008). For example, DIC can produce negative estimates of the effective number of parameters in a model and it is not defined for singular models. WAIC is fully Bayesian in that it uses the entire posterior distribution, and it is asymptotically equal to Bayesian cross-validation. 
Unlike DIC, WAIC is invariant to parametrization and also works for singular models. Although WAIC is asymptotically equal to LOO, we demonstrate that PSIS-LOO is more robust in the finite case with weak priors or influential observations. We provide diagnostics for both PSIS-LOO and WAIC which indicate when these approximations are likely to have large errors and when computationally more intensive methods such as K-fold cross-validation should be used instead. Fast and stable computation and diagnostics for PSIS-LOO allow safe use of this new method in routine statistical practice. As a byproduct of our calculations, we also obtain approximate standard errors for estimated predictive errors and for the comparison of predictive errors between two models. We implement the computations in a package for R (R Core Team 2016) called loo (Vehtari et al. 2016a, b) and demonstrate using models fit with the Bayesian inference package Stan (Stan Development Team 2016a, b). All the computations are fast compared to the typical time required to fit the model in the first place. Although the examples provided in this paper all use Stan, the loo package is independent of Stan and can be used with models estimated by other software packages or custom user-written algorithms.

2 Estimating out-of-sample pointwise predictive accuracy using posterior simulations

Consider data \(y_1,\ldots ,y_n\), modeled as independent given parameters \(\theta \); thus \(p(y|\theta )=\prod _{i=1}^n p(y_i | \theta )\). This formulation also encompasses latent variable models with \(p(y_i | f_i, \theta )\), where \(f_i\) are latent variables. Also suppose we have a prior distribution \(p(\theta )\), thus yielding a posterior distribution \(p(\theta |y)\) and a posterior predictive distribution \(p(\tilde{y}_i|y)=\int p(\tilde{y}_i|\theta )p(\theta |y)d\theta \).
To maintain comparability with the given dataset and to get easier interpretation of the differences in scale of effective number of parameters, we define a measure of predictive accuracy for the n data points taken one at a time: $$\begin{aligned} \hbox {elpd}= & {} \hbox {expected log pointwise predictive density} \nonumber \\&\quad \hbox {for a new dataset}\nonumber \\= & {} \sum _{i=1}^n \int p_t(\tilde{y}_i) \log p(\tilde{y}_i|y) d\tilde{y}_i, \end{aligned}$$ where \(p_t(\tilde{y}_i)\) is the distribution representing the true data-generating process for \(\tilde{y}_i\). The \(p_t(\tilde{y}_i)\)'s are unknown, and we will use cross-validation or WAIC to approximate (1). In a regression, these distributions are also implicitly conditioned on any predictors in the model. See Vehtari and Ojanen (2012) for other approaches to approximating \(p_t(\tilde{y}_i)\) and discussion of alternative prediction tasks. Instead of the log predictive density \(\log p(\tilde{y}_i|y)\), other utility (or cost) functions \(u(p(\tilde{y}|y),\tilde{y})\) could be used, such as classification error. Here we take the log score as the default for evaluating the predictive density (Geisser and Eddy 1979; Bernardo and Smith 1994; Gneiting and Raftery 2007). A helpful quantity in the analysis is $$\begin{aligned} \mathrm{lpd}= & {} \text{ log } \text{ pointwise } \text{ predictive } \text{ density } \nonumber \\= & {} \sum _{i=1}^n \log p(y_i|y)=\sum _{i=1}^n \log \int p(y_i|\theta )p(\theta |y)d\theta . \end{aligned}$$ The lpd of observed data y is an overestimate of the elpd for future data (1). 
To compute the lpd in practice, we can evaluate the expectation using draws from \(p_\mathrm{post}(\theta )\), the usual posterior simulations, which we label \(\theta ^s,s=1,\ldots ,S\): $$\begin{aligned} \widehat{\mathrm{lpd}}= & {} \text{ computed } \text{ log } \text{ pointwise } \text{ predictive } \text{ density } \nonumber \\= & {} \sum _{i=1}^n \log \left( \frac{1}{S}\sum _{s=1}^S p(y_i|\theta ^s)\right) . \end{aligned}$$

2.1 Leave-one-out cross-validation

The Bayesian LOO estimate of out-of-sample predictive fit is $$\begin{aligned} \mathrm{elpd}_\mathrm{loo}=\sum _{i=1}^n \log p(y_i|y_{-i}), \end{aligned}$$ where $$\begin{aligned} p(y_i|y_{-i})=\int p(y_i|\theta )p(\theta |y_{-i}) d\theta \end{aligned}$$ is the leave-one-out predictive density given the data without the ith data point.

2.1.1 Raw importance sampling

As noted by Gelfand et al. (1992), if the n points are conditionally independent in the data model we can then evaluate (5) with draws \(\theta ^s\) from the full posterior \(p(\theta |y)\) using importance ratios $$\begin{aligned} r_i^s=\frac{1}{p(y_i|\theta ^s)} \propto \frac{p(\theta ^s|y_{-i})}{p(\theta ^s|y)} \end{aligned}$$ to get the importance sampling leave-one-out (IS-LOO) predictive distribution, $$\begin{aligned} p(\tilde{y}_i|y_{-i})\approx \frac{\sum _{s=1}^S r_i^s p(\tilde{y}_i|\theta ^s)}{\sum _{s=1}^S r_i^s}. \end{aligned}$$ Evaluating this LOO log predictive density at the held-out data point \(y_i\), we get $$\begin{aligned} p(y_i|y_{-i})\approx \frac{1}{\frac{1}{S}\sum _{s=1}^S\frac{1}{p(y_i|\theta ^s)}}. \end{aligned}$$ However, the posterior \(p(\theta |y)\) is likely to have a smaller variance and thinner tails than the leave-one-out distributions \(p(\theta |y_{-i})\), and thus a direct use of (8) induces instability because the importance ratios can have high or infinite variance. For simple models the variance of the importance weights may be computed analytically.
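Both the computed lpd and the raw IS-LOO estimate (8) are simple functions of an \(S \times n\) matrix of pointwise log-likelihood values \(\log p(y_i|\theta ^s)\). A minimal numpy sketch (the array layout and function names are ours, not the loo package API), working on the log scale for numerical stability:

```python
import numpy as np

def _logsumexp(a, axis=0):
    # Numerically stable log(sum(exp(a))) along the given axis.
    m = np.max(a, axis=axis)
    return m + np.log(np.sum(np.exp(a - m), axis=axis))

def lpd_hat(log_lik):
    """Computed log pointwise predictive density: for each i, the log of
    the average over draws of p(y_i | theta^s), summed over i."""
    log_lik = np.asarray(log_lik)
    S = log_lik.shape[0]
    return float(np.sum(_logsumexp(log_lik, axis=0) - np.log(S)))

def is_loo_hat(log_lik):
    """Raw IS-LOO estimate of the LOO log predictive density, eq. (8):
    for each i, the log of the harmonic mean over draws of
    p(y_i | theta^s), i.e. log S - logsumexp(-log_lik), summed over i."""
    log_lik = np.asarray(log_lik)
    S = log_lik.shape[0]
    return float(np.sum(np.log(S) - _logsumexp(-log_lik, axis=0)))
```

Written this way, the only difference between the two estimates is the sign with which the log-likelihood enters the log-sum-exp; the harmonic-mean structure of (8) makes clear why a few draws with very small \(p(y_i|\theta ^s)\) can dominate the estimate.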
The necessary and sufficient conditions for the variance of the case-deletion importance sampling weights to be finite for a Bayesian linear model are given by Peruggia (1997). Epifani et al. (2008) extend the analytical results to generalized linear models and non-linear Michaelis-Menten models. However, these conditions cannot be computed analytically in general. Koopman et al. (2009) propose to use the maximum likelihood fit of the generalized Pareto distribution to the upper tail of the distribution of the importance ratios, and to use the fitted parameters to form a test for whether the variance of the importance ratios is finite. If the hypothesis test suggests the variance is infinite, then they abandon importance sampling. 2.1.2 Truncated importance sampling Ionides (2008) proposes a modification of importance sampling in which the raw importance ratios \(r^s\) are replaced by truncated weights $$\begin{aligned} w^s = \min (r^s,\sqrt{S}\bar{r}), \end{aligned}$$ where \(\bar{r}=\frac{1}{S}\sum _{s=1}^Sr^s\). Ionides (2008) proves that the variance of the truncated importance sampling weights is guaranteed to be finite, and provides theoretical and experimental results showing that truncation at the threshold \(\sqrt{S}\bar{r}\) gives an importance sampling estimate with a mean square error close to that of an estimate with a case-specific optimal truncation level. The downside of the truncation is that it introduces a bias, which can be large, as we demonstrate in our experiments. 2.1.3 Pareto smoothed importance sampling We can improve the LOO estimate using Pareto smoothed importance sampling (PSIS; Vehtari and Gelman 2015), which applies a smoothing procedure to the importance weights. We briefly review the motivation and steps of PSIS here, before moving on to focus on the goals of using and evaluating predictive information criteria. As noted above, the distribution of the importance weights used in LOO may have a long right tail.
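The truncation rule of Sect. 2.1.2 above is a one-liner given the raw ratios. A minimal sketch (our own illustration; `lik_i` is a length-S vector of \(p(y_i|\theta^s)\) values):

```python
import numpy as np

def truncated_weights(r):
    """Ionides (2008): replace each r^s by min(r^s, sqrt(S) * rbar)."""
    S = len(r)
    return np.minimum(r, np.sqrt(S) * np.mean(r))

def tis_loo_density(lik_i):
    """Truncated-IS estimate of p(y_i | y_{-i}) from per-draw likelihoods."""
    r = 1.0 / lik_i                 # raw importance ratios
    w = truncated_weights(r)
    return np.sum(w * lik_i) / np.sum(w)
```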
We use the empirical Bayes estimate of Zhang and Stephens (2009) to fit a generalized Pareto distribution to the tail (the 20% largest importance ratios). By examining the shape parameter k of the fitted Pareto distribution, we are able to obtain sample-based estimates of the existence of the moments (Koopman et al. 2009). This extends the diagnostic approach of Peruggia (1997) and Epifani et al. (2008) to be used routinely with IS-LOO for any model with a factorizing likelihood. Epifani et al. (2008) show that when estimating the leave-one-out predictive density, the central limit theorem holds if the distribution of the weights has finite variance. These results can be extended via the generalized central limit theorem for stable distributions. Thus, even if the variance of the importance weight distribution is infinite, if the mean exists then the accuracy of the estimate improves as additional posterior draws are obtained. When the tail of the weight distribution is long, a direct use of importance sampling is sensitive to one or a few of the largest values. By fitting a generalized Pareto distribution to the upper tail of the importance weights, we smooth these values. The procedure goes as follows: Fit the generalized Pareto distribution to the 20% largest importance ratios \(r_i^s\) as computed in (6). The computation is done separately for each held-out data point i. In simulation experiments with thousands and tens of thousands of draws, we have found that the fit is not sensitive to the specific cutoff value (for consistent estimation, the proportion of the samples above the cutoff should get smaller as the number of draws increases).
Stabilize the importance ratios by replacing the M largest ratios by the expected values of the order statistics of the fitted generalized Pareto distribution $$\begin{aligned} F^{-1}\left( \frac{z-1/2}{M}\right) , \quad z=1,\ldots ,M, \end{aligned}$$ where M is the number of simulation draws used to fit the Pareto (in this case, \(M=0.2\,S\)) and \(F^{-1}\) is the inverse-CDF of the generalized Pareto distribution. Label these new weights as \(\tilde{w}_i^s\) where, again, s indexes the simulation draws and i indexes the data points; thus, for each i there is a distinct vector of S weights. To guarantee finite variance of the estimate, truncate each vector of weights at \(S^{3/4}\bar{w}_i\), where \(\bar{w}_i\) is the average of the S smoothed weights corresponding to the distribution holding out data point i. Finally, label these truncated weights as \(w^s_i\). The above steps must be performed for each data point i. The result is a vector of weights \(w_i^s, s=1,\ldots ,S\), for each i, which in general should be better behaved than the raw importance ratios \(r_i^s\) from which they are constructed. The results can then be combined to compute the desired LOO estimates. The PSIS estimate of the LOO expected log pointwise predictive density is $$\begin{aligned} \widehat{\mathrm{elpd}}_\mathrm{psis\text{-}loo}=\sum _{i=1}^n \log \left( \frac{\sum _{s=1}^S w_i^s\, p(y_i|\theta ^s)}{\sum _{s=1}^S w_i^s}\right) . \end{aligned}$$ The estimated shape parameter \({\hat{k}}\) of the generalized Pareto distribution can be used to assess the reliability of the estimate: If \(k<\frac{1}{2}\), the variance of the raw importance ratios is finite, the central limit theorem holds, and the estimate converges quickly. If k is between \(\frac{1}{2}\) and 1, the variance of the raw importance ratios is infinite but the mean exists, the generalized central limit theorem for stable distributions holds, and the convergence of the estimate is slower. The variance of the PSIS estimate is finite but may be large. If \(k>1\), the variance and the mean of the raw ratios distribution do not exist. The variance of the PSIS estimate is finite but may be large.
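For intuition, the smoothing steps can be sketched in a few lines. This is an illustration only: in place of the empirical Bayes fit of Zhang and Stephens (2009) used by actual PSIS implementations, we substitute a crude method-of-moments Pareto fit (valid only when the shape is below 1/2 and nonzero), together with a small helper encoding the \(\hat{k}\) rule of thumb discussed in the text.

```python
import numpy as np

def gpd_fit_moments(x):
    """Method-of-moments fit of a generalized Pareto to exceedances x.
    Crude stand-in for the Zhang-Stephens fit; assumes 0 != k < 1/2."""
    m, v = np.mean(x), np.var(x)
    k = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    return k, sigma

def psis_smooth(r):
    """Pareto-smooth one length-S vector of raw importance ratios."""
    S = len(r)
    M = int(np.ceil(0.2 * S))                 # the 20% largest ratios
    order = np.argsort(r)
    w = np.asarray(r, dtype=float).copy()
    mu = w[order[S - M - 1]]                  # tail threshold
    k, sigma = gpd_fit_moments(w[order[-M:]] - mu)
    # replace the top M ratios by GPD inverse-CDF values at (z - 1/2)/M
    p = (np.arange(1, M + 1) - 0.5) / M
    w[order[-M:]] = mu + sigma * ((1.0 - p) ** (-k) - 1.0) / k
    # truncate at S^(3/4) * mean(w) to guarantee finite variance
    w = np.minimum(w, S ** 0.75 * np.mean(w))
    return w, k

def khat_message(khat):
    """Reliability rule of thumb (0.5 / 0.7 thresholds from the text)."""
    if khat < 0.5:
        return "good"
    if khat <= 0.7:
        return "ok in practice"
    return "warn: refit without i, use K-fold CV, or robustify the model"
```

Note that the moment fit cannot return \(k \ge 1/2\) even when the true tail is that heavy, which is precisely why the more robust Zhang-Stephens estimate is used in practice.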
If the estimated tail shape parameter \({\hat{k}}\) exceeds 0.5, the user should be warned, although in practice we have observed good performance for values of \({\hat{k}}\) up to 0.7. Even if the PSIS estimate has a finite variance, when \({\hat{k}}\) exceeds 0.7 the user should consider sampling directly from \(p(\theta |y_{-i})\) for the problematic i, using K-fold cross-validation (see Sect. 2.3), or using a more robust model. The additional computational cost of sampling directly from each \(p(\theta |y_{-i})\) is approximately the same as sampling from the full posterior, but this is recommended if the number of problematic data points is not too high. A more robust model may also help because importance sampling is less likely to work well if the marginal posterior \(p(\theta |y)\) and LOO posterior \(p(\theta |y_{-i})\) are very different. This is more likely to happen with a non-robust model and highly influential observations. A robust model may reduce the sensitivity to one or several highly influential observations, as we show in the examples in Sect. 4. 2.2 WAIC WAIC (Watanabe 2010) is an alternative approach to estimating the expected log pointwise predictive density and is defined as $$\begin{aligned} \widehat{\mathrm{elpd}}_\mathrm{waic}=\widehat{\mathrm{lpd}}-\widehat{p}_\mathrm{waic}, \end{aligned}$$ where \(\widehat{p}_\mathrm{waic}\) is the estimated effective number of parameters, computed based on the definition $$\begin{aligned} p_\mathrm{waic} =\sum _{i=1}^n \text{ var }_\mathrm{post} \left( \log p(y_i|\theta )\right) , \end{aligned}$$ which we can calculate using the posterior variance of the log predictive density for each data point \(y_i\), that is, \(V_{s=1}^S \log p(y_i|\theta ^s)\), where \(V_{s=1}^S\) represents the sample variance, \(V_{s=1}^S a_s = \frac{1}{S-1}\sum _{s=1}^S (a_s - \bar{a})^2\). Summing over all the data points \(y_i\) gives a simulation-estimated effective number of parameters, $$\begin{aligned} \widehat{p}_\mathrm{waic} = \sum _{i=1}^n V_{s=1}^S\left( \log p(y_i|\theta ^s)\right) .
\end{aligned}$$ For DIC, there is a similar variance-based computation of the number of parameters that is notoriously unreliable, but the WAIC version is more stable because it computes the variance separately for each data point and then takes the sum; the summing yields stability. The effective number of parameters \(\widehat{p}_\mathrm{waic}\) can be used as a measure of the complexity of the model, but it should not be overinterpreted, as the original goal is to estimate the difference between lpd and elpd. As shown by Gelman et al. (2014) and demonstrated also in Sect. 4, in the case of a weak prior, \(\widehat{p}_\mathrm{waic}\) can severely underestimate the difference between lpd and elpd. For \(\widehat{p}_\mathrm{waic}\) there is no theory similar to that for the moments of the importance sampling weight distribution, but based on our simulation experiments it seems that \(\widehat{p}_\mathrm{waic}\) is unreliable if any of the terms \(V_{s=1}^S \log p(y_i|\theta ^s)\) exceeds 0.4. The different behavior of LOO and WAIC seen in the experiments can be understood by comparing Taylor series approximations. By defining a generating function of functional cumulants, $$\begin{aligned} F(\alpha )=\sum _{i=1}^n\log E_\mathrm{post}(p(y_i|\theta )^\alpha ), \end{aligned}$$ and applying a Taylor expansion of \(F(\alpha )\) around 0 with \(\alpha =-1\), we obtain an expansion of \(\mathrm{lpd}_\mathrm{loo}\), $$\begin{aligned} \mathrm{lpd}_\mathrm{loo}=-F(-1)=F'(0)-\frac{1}{2}F''(0)+\frac{1}{6}F^{(3)}(0)+ \sum _{i=4}^\infty \frac{(-1)^{i+1}F^{(i)}(0)}{i!}. \end{aligned}$$ From the definition of \(F(\alpha )\) we get $$\begin{aligned} \mathrm{lpd}=F(1)=F'(0)+\frac{1}{2}F''(0)+\frac{1}{6}F^{(3)}(0)+ \sum _{i=4}^\infty \frac{F^{(i)}(0)}{i!}, \end{aligned}$$ and the expansion for WAIC is then $$\begin{aligned} \text {WAIC}&=F(1)-F''(0)\nonumber \\&=F'(0)-\frac{1}{2}F''(0)+\frac{1}{6}F^{(3)}(0)+ \sum _{i=4}^\infty \frac{F^{(i)}(0)}{i!}. \end{aligned}$$ The first three terms of the expansion of WAIC match the expansion of LOO, and the rest of the terms match the expansion of lpd.
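Given an (S, n) array of draws of \(\log p(y_i|\theta^s)\), the WAIC quantities defined above are straightforward to compute; a sketch (our illustration, not the loo package API):

```python
import numpy as np

def waic(log_lik):
    """Return (elpd_waic-hat, p_waic-hat) from an (S, n) array of
    log p(y_i | theta^s) draws."""
    m = log_lik.max(axis=0)
    # lpd-hat via a numerically stable log-mean-exp over draws
    lpd_hat = np.sum(m + np.log(np.mean(np.exp(log_lik - m), axis=0)))
    # p_waic: sample variance over draws of log p(y_i|theta^s), summed over i
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return lpd_hat - p_waic, p_waic
```

The per-point variances \(V_{s=1}^S \log p(y_i|\theta^s)\) can also be inspected directly; by the rule of thumb above, any value exceeding 0.4 signals that the WAIC estimate may be unreliable.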
Watanabe (2010) argues that, asymptotically, the latter terms have a negligible contribution and thus asymptotic equivalence with LOO is obtained. However, the error can be significant in the case of finite n and weak prior information, as shown by Gelman et al. (2014) and demonstrated also in Sect. 4. If the higher-order terms are not negligible, then WAIC is biased towards lpd. To reduce this bias it is possible to compute additional series terms, but computing higher moments using a finite posterior sample increases the variance of the estimate and, based on our experiments, it is more difficult to control the bias-variance tradeoff than in PSIS-LOO. WAIC's larger bias compared to LOO is also demonstrated by Vehtari et al. (2016a, b) in the case of Gaussian processes with distributional posterior approximations. In the experiments we also demonstrate that we can use truncated IS-LOO with heavy truncation to obtain a similar bias towards lpd and similar estimate variance as in WAIC. 2.3 K-fold cross-validation In this paper we focus on leave-one-out cross-validation and WAIC, but, for statistical and computational reasons, it can make sense to cross-validate using \(K \ll n\) hold-out sets. In some ways, K-fold cross-validation is simpler than leave-one-out cross-validation, but in other ways it is not. K-fold cross-validation requires refitting the model K times, which can be computationally expensive, whereas approximate LOO methods, such as PSIS-LOO, require only one evaluation of the model. If in PSIS-LOO \({\hat{k}}>0.7\) for a few i, we recommend sampling directly from each corresponding \(p(\theta |y_{-i})\), but if there are more than K problematic i, then we recommend checking the results using K-fold cross-validation. Vehtari and Lampinen (2002) demonstrate cases where IS-LOO fails (according to effective sample size estimates instead of the \({\hat{k}}\) diagnostic proposed here) for a large number of i and K-fold-CV produces more reliable results.
In Bayesian K-fold cross-validation, the data are partitioned into K subsets \(y_k\), for \(k=1,\ldots ,K\), and then the model is fit separately to each training set \(y_{(-k)}\), thus yielding a posterior distribution \(p_{\mathrm{post} (-k)}(\theta )=p(\theta |y_{(-k)})\). If the number of partitions is small (a typical value in the literature is \(K=10\)), it is not so costly to simply re-fit the model separately to each training set. To maintain consistency with LOO and WAIC, we define predictive accuracy for each data point, so that the log predictive density for \(y_i\), if it is in subset k, is $$\begin{aligned} \log p(y_i|y_{(-k)})=\log \int p(y_i|\theta )p(\theta |y_{(-k)})d\theta , \quad i \in k. \end{aligned}$$ Assuming the posterior distribution \(p(\theta |y_{(-k)})\) is summarized by S simulation draws \(\theta ^{k,s}\), we calculate its log predictive density as $$\begin{aligned} \widehat{\mathrm{elpd}}_i = \log \left( \frac{1}{S}\sum _{s=1}^S p(y_i|\theta ^{k,s})\right) \end{aligned}$$ using the simulations corresponding to the subset k that contains data point i. We then sum to get the estimate $$\begin{aligned} \widehat{\mathrm{elpd}}_\mathrm{xval}=\sum _{i=1}^n \widehat{\mathrm{elpd}}_i. \end{aligned}$$ There remains a bias, as each model is fit to only a fraction \((K-1)/K\) of the data. Methods for correcting this bias exist but are rarely used, as they can increase the variance, and if \(K \ge 10\) the size of the bias is typically small compared to the variance of the estimate (Vehtari and Lampinen 2002). In our experiments, exact LOO is the same as K-fold-CV with \(K=n\), and we also analyze the effect of this bias and bias correction in Sect. 4.2. For K-fold cross-validation, if the subjects are exchangeable, that is, the order does not contain information, then there is no need for random selection. If the order does contain information, e.g.
in survival studies the later patients have shorter follow-ups, then randomization is often useful. In most cases we recommend partitioning the data into subsets by randomly permuting the observations and then systematically dividing them into K subgroups. In some cases it may be useful to stratify to obtain better balance among groups. See Vehtari and Lampinen (2002), Arlot and Celisse (2010), and Vehtari and Ojanen (2012) for further discussion of these points. Because the data can be divided into K groups in many ways, the partitioning introduces additional variance into the estimates, which is also evident in our experiments. This variance can be reduced by repeating K-fold-CV several times with different permutations in the data division, but this will further increase the computational cost. 2.4 Data division The purpose of using LOO or WAIC is to estimate the accuracy of the predictive distribution \(p(\tilde{y}_i|y)\). Computation of PSIS-LOO and WAIC (and AIC and DIC) is based on computing the terms \(\log p(y_i|y)=\log \int p(y_i|\theta )p(\theta |y)d\theta \), assuming some agreed-upon division of the data y into individual data points \(y_i\). Although often \(y_i\) will denote a single scalar observation, in the case of hierarchical data it may denote a group of observations. For example, in cognitive or medical studies we may be interested in prediction for a new subject (or patient), and thus it is natural in cross-validation to consider an approach where \(y_i\) would denote all observations for a single subject and \(y_{-i}\) would denote the observations for all the other subjects.
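The randomized partitioning recommended above, and the grouped variant needed when \(y_i\) denotes all observations of one subject, can be sketched as follows (our illustration; the function names are hypothetical):

```python
import numpy as np

def kfold_folds(n, K, rng):
    """Randomly permute the n observations, then split into K near-equal folds."""
    return np.array_split(rng.permutation(n), K)

def grouped_kfold_folds(groups, K, rng):
    """Keep each group's observations together, assigning whole groups to folds
    (round-robin over a shuffled list of group labels)."""
    groups = np.asarray(groups)
    ids = rng.permutation(np.unique(groups))
    fold_of = {g: j % K for j, g in enumerate(ids)}
    return [np.flatnonzero([fold_of[g] == k for g in groups]) for k in range(K)]
```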
In theory, we can use PSIS-LOO and WAIC in this case, too, but as the number of observations per subject increases it is more likely that they will not work as well. The fact that importance sampling is difficult in higher dimensions is well known and is demonstrated for IS-LOO by Vehtari and Lampinen (2002) and for PSIS by Vehtari and Gelman (2015). The same problem can also be shown to hold for WAIC. If diagnostics warn about the reliability of PSIS-LOO (or WAIC), then K-fold cross-validation can be used by taking into account the hierarchical structure in the data when doing the data division as demonstrated, for example, by Vehtari and Lampinen (2002). 3 Implementation in Stan We have set up code to implement LOO, WAIC, and K-fold cross-validation in R and Stan so that users will have a quick and convenient way to assess and compare model fits. Implementation is not automatic, though, because of the need to compute the separate factors \(p(y_i|\theta )\) in the likelihood. Stan works with the joint density and in its usual computations does not "know" which parts come from the prior and which from the likelihood. Nor does Stan in general make use of any factorization of the likelihood into pieces corresponding to each data point. Thus, to compute these measures of predictive fit in Stan, the user needs to explicitly code the factors of the likelihood (actually, the terms of the log-likelihood) as a vector. We can then pull apart the separate terms and compute cross-validation and WAIC at the end, after all simulations have been collected. Sample code for carrying out this procedure using Stan and the loo R package is provided in Appendix. This code can be adapted to apply our procedure in other computing languages. Although the implementation is not automatic when writing custom Stan programs, we can create implementations that are automatic for users of our new rstanarm R package (Gabry and Goodrich 2016). 
rstanarm provides a high-level interface to Stan that enables the user to specify many of the most common applied Bayesian regression models using standard R modeling syntax (e.g. like that of glm). The models are then estimated using Stan's algorithms and the results are returned to the user in a form similar to the fitted model objects to which R users are accustomed. For the models implemented in rstanarm, we have preprogrammed many tasks, including computing and saving the pointwise predictive measures and importance ratios which we use to compute WAIC and PSIS-LOO. The loo method for rstanarm models requires no additional programming from the user after fitting a model, as we can compute all of the needed quantities internally from the contents of the fitted model object and then pass them to the functions in the loo package. Examples of using loo with rstanarm can be found in the rstanarm vignettes, and we also provide an example in Appendix 3 of this paper. We illustrate with six simple examples: two examples from our earlier research in computing the effective number of parameters in a hierarchical model, three examples that were used by Epifani et al. (2008) to illustrate the estimation of the variance of the weight distribution, and one example of a multilevel regression from our earlier applied research. For each example we used the Stan default of 4 chains run for 1000 warmup and 1000 post-warmup iterations, yielding a total of 4000 saved simulation draws. With Gibbs sampling or random-walk Metropolis, 4000 is not a large number of simulation draws. The algorithm used by Stan is Hamiltonian Monte Carlo with No-U-Turn-Sampling (Hoffman and Gelman 2014), which is much more efficient, and 1000 is already more than sufficient in many real-world settings. In these examples we followed standard practice and monitored convergence and effective sample sizes as recommended by Gelman et al. (2013). 
We performed 100 independent replications of all experiments to obtain estimates of variation. For the exact LOO results and convergence plots we ran longer chains to obtain a total of 100,000 draws (except for the radon example, which is much slower to run). 4.1 Example: Scaled 8 schools In a controlled study, independent randomized experiments were conducted in 8 different high schools to estimate the effect of special preparation for college admission tests (Table 1). Table 1 Estimated effect \(y_j\) and standard error of estimate \(\sigma _j\) for each school: A: 28 (15); B: 8 (10); C: \(-3\) (16); D: 7 (11); E: \(-1\) (9); F: 1 (11); G: 18 (10); H: 12 (18). Each row of the table gives an estimate and standard error from one of the schools. A hierarchical Bayesian model was fit to perform meta-analysis and use partial pooling to get more accurate estimates of the 8 effects. From Rubin (1981) Fig. 1 8 Schools example: (a) WAIC, Truncated Importance Sampling LOO, Importance Sampling LOO, Pareto Smoothed Importance Sampling LOO, and exact LOO (which in this case corresponds to eightfold-CV); (b) estimated effective number of parameters for each of these measures; (c) tail shape \(\hat{k}\) for the importance weights; and (d) the posterior variances of the log predictive densities, for scaled versions of the 8 schools data (the original observations y have been multiplied by a common factor). We consider scaling factors ranging from 0.1 (corresponding to near-zero variation of the underlying parameters among the schools) to 4 (implying that the true effects in the schools vary by much more than their standard errors of measurement). As the scaling increases, eventually the LOO approximations and WAIC fail to approximate exact LOO, as the leave-one-out posteriors are not close to the full posterior. When the estimated tail shape \(\hat{k}\) exceeds 1, the importance-weighted LOO approximations start to fail. When the posterior variances of the log predictive densities exceed 0.4, WAIC starts to fail.
PSIS-LOO performs the best among the approximations considered here. For our first example we take an analysis of an education experiment used by Gelman et al. (2014) to demonstrate the use of information criteria for hierarchical Bayesian models. The goal of the study was to measure the effects of a test preparation program conducted in eight different high schools in New Jersey. A separate randomized experiment was conducted in each school, and the administrators of each school implemented the program in their own way. Rubin (1981) performed a Bayesian meta-analysis, partially pooling the eight estimates toward a common mean. The model has the form \(y_i\sim \mathrm{N}(\theta _i,\sigma ^2_i)\) and \(\theta _i\sim \mathrm{N}(\mu ,\tau ^2)\), for \(i=1,\ldots ,n=8\), with a uniform prior distribution on \((\mu ,\tau )\). The measurements \(y_i\) and uncertainties \(\sigma _i\) are the estimates and standard errors from separate regressions performed for each school, as shown in Table 1. The test scores for the individual students are no longer available. This model has eight parameters, but they are constrained through their hierarchical distribution and are not estimated independently; thus we would anticipate the effective number of parameters to be some number between 1 and 8. To better illustrate the behavior of LOO and WAIC, we repeat the analysis, rescaling the data points y by a factor ranging from 0.1 to 4 while keeping the standard errors \(\sigma \) unchanged. With a small data scaling factor the hierarchical model nears complete pooling, and with a large data scaling factor the model approaches separate fits to the data for each school. Figure 1 shows \(\widehat{\mathrm{elpd}}\) for the various LOO approximation methods as a function of the scaling factor, based on 4000 simulation draws at each grid point. When the data scaling factor is small (here, less than 1.5), all measures largely agree.
As the data scaling factor increases and the model approaches no pooling, the population prior for \(\theta _i\) gets flat and \(p_\mathrm{waic}\approx \frac{p}{2}\). This is correct behavior, as discussed by Gelman et al. (2014). Fig. 2 Simulated 8 schools example: (a) Root mean square error of WAIC, Truncated Importance Sampling LOO, Importance Sampling LOO, Pareto Smoothed Importance Sampling LOO, and exact LOO, with the true predictive performance computed using independently simulated test data; the error for all the methods increases, but the RMSE of exact LOO has an upper limit. Eventually the LOO approximations and WAIC fail to recover exact LOO, as the leave-one-out posteriors are not close to the full posterior. When the estimated tail shape k exceeds 1, the importance-weighted LOO approximations start to fail. Among the approximations, IS-LOO has the smallest RMSE as it has the smallest bias, and as the tail shape k is mostly below 1, it does not fail badly. (b) Root mean square error of WAIC, bias-corrected Pareto Smoothed Importance Sampling LOO, and bias-corrected exact LOO, with the true predictive performance computed using independently simulated test data. The bias correction also reduces RMSE, having the clearest impact at smaller population distribution scales, but overall the reduction in RMSE is negligible. (c) Root mean square error of WAIC, Truncated Importance Sampling LOO with heavy truncation \((\root 4 \of {S}\bar{r})\), Pareto Smoothed Importance Sampling LOO, bias-corrected exact LOO, and shrunk exact LOO, with the true predictive performance computed using independently simulated test data. Truncated Importance Sampling LOO with heavy truncation matches WAIC accurately. Shrinking exact LOO towards the lpd of observed data reduces the RMSE for some scale values with a small increase in error for larger scale values. In the case of exact LOO, \(\widehat{\mathrm{lpd}} - \widehat{\mathrm{elpd}}_\mathrm{loo}\) can be larger than p.
As the prior for \(\theta _i\) approaches flatness, the log predictive density \(p_{\mathrm{post}(-i)}(y_i) \rightarrow -\infty \). At the same time, the full posterior becomes an inadequate approximation to \(p_{\mathrm{post}(-i)}\) and all approximations become poor approximations to the actual out-of-sample prediction error under the model. WAIC starts to fail when one of the posterior variances of the log predictive densities exceeds 0.4. LOO approximations work well even if the tail shape k of the generalized Pareto distribution is between \(\frac{1}{2}\) and 1, and the variance of the raw importance ratios is infinite. The error of LOO approximations increases with k, with a clearer difference between the methods when \(k>0.7\). 4.2 Example: Simulated 8 schools In the previous example, we used exact LOO as the gold standard. In this section, we generate simulated data from the same statistical model and compare predictive performance on independent test data. Even when the number of observations n is fixed, as the scale of the population distribution increases we observe the effect of weak prior information in hierarchical models discussed in the previous section and by Gelman et al. (2014). Comparing the error, bias and variance of the various approximations, we find that PSIS-LOO offers the best balance. For \(i=1,\ldots ,n=8\), we simulate \(\theta _{0,i}\sim \mathrm{N}(\mu _0,\tau ^2_0)\) and \(y_i\sim \mathrm{N}(\theta _{0,i},\sigma ^2_{0,i})\), where we set \(\sigma _{0,i}=10,\mu _0=0\), and \(\tau _0\in \{1,2,\ldots ,30\}\). The simulated data is similar to the real 8 schools data, for which the empirical estimate is \(\hat{\tau }\approx 10\). For each value of \(\tau _0\) we generate 100 training sets of size 8 and one test data set of size 1000. Posterior inference is based on 4000 draws for each constructed model. 
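The data-generating scheme for this experiment is simple to reproduce; a sketch under the stated settings (\(\mu_0=0\), \(\sigma_{0,i}=10\); the function name is ours):

```python
import numpy as np

def simulate_schools(n, tau0, mu0=0.0, sigma0=10.0, rng=None):
    """Draw one dataset: theta_0i ~ N(mu0, tau0^2), y_i ~ N(theta_0i, sigma0^2)."""
    rng = np.random.default_rng() if rng is None else rng
    theta0 = rng.normal(mu0, tau0, size=n)
    y = rng.normal(theta0, sigma0)
    return y, theta0
```

Repeating this for each \(\tau_0\) on the grid gives the 100 training sets per scale value used in the experiment.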
Figure 2a shows the root mean square error (RMSE) for the various LOO approximation methods as a function of \(\tau _0\), the scale of the population distribution. When \(\tau _0\) is large, all of the approximations eventually have ever-increasing RMSE, while exact LOO has an upper limit. For medium scales the approximations have smaller RMSE than exact LOO. As discussed later, this is explained by the difference in the variance of the estimates. For small scales WAIC has slightly smaller RMSE than the other methods (including exact LOO). Watanabe (2010) shows that WAIC gives an asymptotically unbiased estimate of the out-of-sample prediction error (this does not hold for hierarchical models with weak prior information, as shown by Gelman et al. 2014), whereas exact LOO is slightly biased because the LOO posteriors use only \(n-1\) observations. WAIC's different behavior can be understood through the truncated Taylor series correction to the lpd; that is, not using the entire series biases it towards lpd (see Sect. 2.2). The bias in LOO is negligible when n is large, but with small n it can be larger. Figure 2b shows RMSE for the bias-corrected LOO approximations using the first-order correction of Burman (1989). For small scales the error of the bias-corrected LOOs is smaller than that of WAIC. When the scale increases, the RMSEs are close to those of the non-corrected versions. Although the bias correction is easy to compute, the difference in accuracy is negligible for most applications. We shall discuss Fig. 2c in a moment, but first consider Fig. 3, which shows the RMSE of the approximation methods and the lpd of observed data decomposed into bias and standard deviation. All methods (except the lpd of observed data) have small biases and variances at small population distribution scales. Bias-corrected exact LOO has practically zero bias for all scale values but the highest variance. When the scale increases, the LOO approximations eventually fail and the bias increases.
As the approximations start to fail, there is a certain region where implicit shrinkage towards the lpd of observed data decelerates the increase in RMSE as the variance is reduced, even if the bias continues to grow. If the goal were to minimize the RMSE for smaller and medium scales, we could also shrink exact LOO and increase shrinkage in the approximations. Figure 2c shows the RMSE of the LOO approximations with two new choices. Truncated Importance Sampling LOO with very heavy truncation (to \(\root 4 \of {S}\bar{r}\)) closely matches the performance of WAIC. In experiments not included here, we also observed that adding more correct Taylor series terms to WAIC makes it behave similarly to Truncated Importance Sampling with less truncation (see the discussion of the Taylor series expansion in Sect. 2.2). Shrunk exact LOO (\(\alpha \cdot \mathrm{elpd}_\mathrm{loo} + (1-\alpha )\cdot \mathrm{lpd}\), with \(\alpha =0.85\) chosen by hand for illustrative purposes only) has a smaller RMSE for small and medium scale values as the variance is reduced, but the price is increased bias at larger scale values. If the goal is robust estimation of predictive performance, then exact LOO is the best general choice because the error is limited even in the case of weak priors. Of the approximations, PSIS-LOO offers the best balance as well as diagnostics for identifying when it is likely failing. Fig. 3 Simulated 8 schools example: (a) Absolute bias of WAIC, Pareto Smoothed Importance Sampling LOO, bias-corrected exact LOO, and the lpd (log predictive density) of observed data, with the true predictive performance computed using independently simulated test data; (b) standard deviation for each of these measures. All methods except the lpd of observed data have small biases and variances at small population distribution scales. When the scale increases, the bias of WAIC increases faster than the bias of the other methods (except the lpd of observed data).
Bias-corrected exact LOO has practically zero bias for all scale values. WAIC and Pareto Smoothed Importance Sampling LOO have lower variance than exact LOO, as they are shrunk towards the lpd of observed data, which has the smallest variance at all scales. 4.3 Example: Linear regression for stack loss data To check the performance of the proposed diagnostic, for our second example we analyze the stack loss data used by Peruggia (1997), which is known to have analytically proven infinite variance of one of the importance weight distributions. The data consist of \(n = 21\) daily observations on one outcome and three predictors pertaining to a plant for the oxidation of ammonia to nitric acid. The outcome y is an inverse measure of the efficiency of the plant, and the three predictors \(x_1, x_2\), and \(x_3\) measure the rate of operation, the temperature of the cooling water, and (a transformation of) the concentration of circulating acid. Fig. 4 Stack loss example with normal errors: Distributions of (a) tail shape estimates and (b) PSIS-LOO estimation errors compared to LOO, from 100 independent Stan runs. The pointwise calculation of the terms in PSIS-LOO reveals that much of the uncertainty comes from a single data point, and it could make sense to simply re-fit the model to the subset and compute LOO directly for that point. Peruggia (1997) shows that the importance weights for leave-one-out cross-validation for the data point \(y_{21}\) have infinite variance. Figure 4 shows the distribution of the estimated tail shapes \(\hat{k}\) and estimation errors compared to LOO in 100 independent Stan runs. The estimates of the tail shape \(\hat{k}\) for \(i=21\) suggest that the variance of the raw importance ratios is infinite; however, the generalized central limit theorem for stable distributions holds and we can still obtain an accurate estimate of the component of LOO for this data point using PSIS.
Stack loss example with normal errors: a Tail shape estimate and b LOO approximations for the difficult point, \(i=21\). When more draws are obtained, the estimates converge (slowly) following the generalized central limit theorem

Figure 5 shows that if we continue sampling, the estimates for both the tail shape \(\hat{k}\) and \(\widehat{\mathrm{elpd}}_i\) do converge (although slowly, as \(\hat{k}\) is close to 1). As the convergence is slow, it would be more efficient to sample directly from \(p(\theta ^s|y_{-i})\) for the problematic i.

Stack loss example with Student-t errors: Distributions of a tail shape estimates and b PSIS-LOO estimation errors compared to LOO, from 100 independent Stan runs. The computations are more stable than with normal errors (compare to Fig. 4)

Puromycin example: Distributions of a tail shape estimates and b PSIS-LOO estimation errors compared to LOO, from 100 independent Stan runs. In an applied example we would only perform these calculations once, but here we replicate 100 times to give a sense of the Monte Carlo error of our procedure

High estimates of the tail shape parameter \(\hat{k}\) indicate that the full posterior is not a good importance sampling approximation to the desired leave-one-out posterior, and thus the observation is surprising according to the model. It is natural to consider an alternative model. We tried replacing the normal observation model with a Student-t to make the model more robust to a possible outlier. Figure 6 shows the distribution of the estimated tail shapes \({\hat{k}}\) and estimation errors for PSIS-LOO compared to LOO in 100 independent Stan runs for the Student-t linear regression model. The estimated tail shapes and the errors in computing this component of LOO are smaller than with the Gaussian model.

4.4 Example: Nonlinear regression for Puromycin reaction data

As a nonlinear regression example, we use the Puromycin biochemical reaction data also analyzed by Epifani et al. (2008).
For a group of cells not treated with the drug Puromycin, there are \(n = 11\) measurements of the initial velocity of a reaction, \(V_i\), obtained when the concentration of the substrate was set at a given positive value, \(c_i\). The dependence of velocity on concentration is given by the Michaelis-Menten relation, \(V_i \sim \text{ N }(mc_i/(\kappa + c_i), \sigma ^2)\). Epifani et al. (2008) show that the raw importance ratios for observation \(i=1\) have infinite variance. Figure 7 shows the distribution of the estimated tail shapes \(\hat{k}\) and estimation errors compared to LOO in 100 independent Stan runs. The estimates of the tail shape \(\hat{k}\) for \(i=1\) suggest that the variance of the raw importance ratios is infinite. However, the generalized central limit theorem for stable distributions still holds and we can get an accurate estimate of the corresponding term in LOO. We could obtain more draws to reduce the Monte Carlo error, or again consider a more robust model.

4.5 Example: Logistic regression for leukemia survival

Our next example uses a logistic regression model to predict survival of leukemia patients past 50 weeks from diagnosis. These data were also analyzed by Epifani et al. (2008). Explanatory variables are the white blood cell count at diagnosis and whether "Auer rods and/or significant granulature of the leukemic cells in the bone marrow at diagnosis" were present.

Leukemia example: Distributions of a tail shape estimates and b PSIS-LOO estimation errors compared to LOO, from 100 independent Stan runs. The pointwise calculation of the terms in PSIS-LOO reveals that much of the uncertainty comes from a single data point, and it could make sense to simply re-fit the model to the subset and compute LOO directly for that point

Leukemia example: Distributions of a tail shape estimate and b LOO approximations for \(i=15\).
If we continue sampling, the tail shape estimate stays above 1 and \(\widehat{\mathrm{elpd}}_i\) will not converge

Leukemia example with log-transformed predictor: a Distributions of tail shape estimates for each data point and b PSIS-LOO estimation errors compared to LOO, from 100 independent Stan runs. Computations are more stable compared to the model fit on the original scale and displayed in Fig. 8

Radon example: a Tail shape estimates for each point's contribution to LOO, and b error in PSIS-LOO accuracy for each data point, all based on a single fit of the model in Stan

Epifani et al. (2008) show that the raw importance ratios for data point \(i=15\) have infinite variance. Figure 8 shows the distribution of the estimated tail shapes \(\hat{k}\) and estimation errors compared to LOO in 100 independent Stan runs. The estimates of the tail shape \(\hat{k}\) for \(i=15\) suggest that the mean and variance of the raw importance ratios do not exist, and thus the generalized central limit theorem does not hold. Figure 9 shows that if we continue sampling, the tail shape estimate stays above 1 and \(\widehat{\mathrm{elpd}}_i\) will not converge. Large estimates of the tail shape parameter indicate that the full posterior is not a good importance sampling approximation to the desired leave-one-out posterior, and thus the observation is surprising. The original model used the white blood cell count directly as a predictor; it would be natural to use its logarithm instead. Figure 10 shows the distribution of the estimated tail shapes \(\hat{k}\) and estimation errors compared to LOO in 100 independent Stan runs for this modified model. Both the tail shape values and the errors are now smaller.

4.6 Example: Multilevel regression for radon contamination

Gelman and Hill (2007) describe a study conducted by the United States Environmental Protection Agency designed to measure levels of the carcinogen radon in houses throughout the United States.
In high concentrations radon is known to cause lung cancer and is estimated to be responsible for several thousand deaths every year in the United States. Here we focus on the sample of 919 houses in the state of Minnesota, which are distributed (unevenly) throughout 85 counties. We fit the following multilevel linear model to the radon data $$\begin{aligned} y_i&\sim \mathrm{N}\left( \alpha _{j[i]} + \beta _{j[i]} x_i, \sigma ^2\right) , \quad i = 1, \ldots , 919 \\ \begin{pmatrix} \alpha _j \\ \beta _j \end{pmatrix}&\sim \mathrm{N} \left( \begin{pmatrix} \gamma _0^\alpha + \gamma _1^\alpha u_j \\ \gamma _0^\beta + \gamma _1^\beta u_j \end{pmatrix}, \begin{pmatrix} \sigma ^2_\alpha & \rho \sigma _\alpha \sigma _\beta \\ \rho \sigma _\alpha \sigma _\beta & \sigma ^2_\beta \end{pmatrix} \right) ,\\&j = 1, \ldots , 85, \end{aligned}$$ where \(y_i\) is the logarithm of the radon measurement in the ith house, \(x_i = 0\) for a measurement made in the basement and \(x_i = 1\) for one made on the first floor (it is known that radon enters more easily when a house is built into the ground), and the county-level predictor \(u_j\) is the logarithm of the soil uranium level in the county. The residual standard deviation \(\sigma \) and all hyperparameters are given weakly informative priors. Code for fitting this model is provided in Appendix 3. The sample size in this example \((n=919)\) is not huge but is large enough that it is important to have a computational method for LOO that is fast for each data point. Although the MCMC for the full posterior inference (using four parallel chains) finished in only 93 s, the computations for exact brute-force LOO require fitting the model 919 times and took more than 20 h to complete (MacBook Pro, 2.6 GHz Intel Core i7). With the same hardware the PSIS-LOO computations took less than 5 s. Figure 11 shows the results for the radon example; indeed, the estimated shape parameters \(\hat{k}\) are small and all of the tested methods are accurate.
For two observations the estimate of \(\hat{k}\) is slightly higher than the preferred threshold of 0.7, but we can easily compute the elpd contributions for these points directly and then combine with the PSIS-LOO estimates for the remaining observations.4 This is the procedure we refer to as PSIS-LOO+ in Sect. 4.7 below.

Table 2: Root mean square error for different computations of LOO as determined from a simulation study, in each case based on running Stan to obtain 4000 posterior draws and repeating 100 times. Columns: Stacks-N, Stacks-t, Puromycin, Leukemia-log. Rows: PSIS-LOO, IS-LOO, TIS-LOO, WAIC, PSIS-LOO+, 10-fold-CV, \(10\times 10\)-fold-CV. Methods compared are Pareto smoothed importance sampling (PSIS), PSIS with direct sampling if \({\hat{k}}_i>0.7\) (PSIS-LOO+), raw importance sampling (IS), truncated importance sampling (TIS), WAIC, 10-fold-CV, and 10 times repeated 10-fold-CV for the different examples considered in Sects. 4.1–4.6: the hierarchical model for the 8 schools, the stack loss regression (with normal and t models), nonlinear regression for Puromycin, logistic regression for leukemia (on the original and log scale), and hierarchical linear regression for radon. See text for explanations. PSIS-LOO and PSIS-LOO+ give the smallest error in all examples except the 8 schools, where they give the second smallest error. In each case, we compared the estimates to the correct value of LOO computed by the brute-force procedure of fitting the model separately to each of the n possible training sets for each example.

Table 3: Partial replication of Table 2 using 16,000 posterior draws in each case. Monte Carlo errors are slightly lower. The errors for WAIC do not simply scale with \(1/\sqrt{S}\) because most of its error comes from bias, not variance.

4.7 Summary of examples

Table 2 compares the performance of Pareto smoothed importance sampling, raw importance sampling, truncated importance sampling, and WAIC for estimating expected out-of-sample prediction accuracy for each of the examples in Sects. 4.1–4.6.
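Truncated importance sampling, one of the methods compared here, is straightforward to sketch. The following Python fragment is our own illustration (not the code used for the experiments); it truncates each raw ratio at \(\bar{r}\sqrt{S}\), following Ionides (2008), with the exponent exposed so that the heavier \(\sqrt[4]{S}\,\bar{r}\) truncation mentioned in Sect. 4.1 corresponds to `truncate_power=0.25`.

```python
import numpy as np

def tis_elpd_i(log_lik_i, truncate_power=0.5):
    """Truncated-IS estimate of one elpd_i contribution.

    log_lik_i: array of S posterior draws of log p(y_i | theta^s).
    Raw importance ratios are r^s = 1 / p(y_i | theta^s); each ratio is
    truncated at r_bar * S**truncate_power, where r_bar is the mean of
    the raw ratios (power 1/2 gives Ionides' TIS, 1/4 the heavy
    truncation discussed in the text). All work is done on the log
    scale for numerical stability.
    """
    log_lik_i = np.asarray(log_lik_i, dtype=float)
    S = len(log_lik_i)
    log_r = -log_lik_i                                   # log raw ratios
    log_r_bar = np.logaddexp.reduce(log_r) - np.log(S)   # log of mean ratio
    log_w = np.minimum(log_r, log_r_bar + truncate_power * np.log(S))
    # Self-normalized estimate: elpd_i = log( sum_s w^s p(y_i|theta^s) / sum_s w^s )
    return (np.logaddexp.reduce(log_w + log_lik_i)
            - np.logaddexp.reduce(log_w))
```

When the weights are well behaved the truncation rarely binds and the estimate matches plain self-normalized importance sampling; with heavy-tailed weights the truncation reduces variance at the cost of the bias examined in the text.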
Models were fit in Stan to obtain 4000 simulation draws. In each case, the distributions come from 100 independent simulations of the entire fitting process, and the root mean squared error is evaluated by comparing to exact LOO, which was computed by separately fitting the model to each leave-one-out dataset for each example. The last three lines of Table 2 additionally show the performance of PSIS-LOO combined with direct sampling for the problematic i with \({\hat{k}}>0.7\) (PSIS-LOO+), 10-fold-CV, and 10 times repeated 10-fold-CV.5 For the Stacks-N, Puromycin, and Leukemia examples, there was one i with \({\hat{k}}>0.7\), and thus the improvement has the same computational cost as one additional full posterior inference. 10-fold-CV has higher RMSE than the LOO approximations except in the Leukemia case. The higher RMSE of 10-fold-CV is due to the additional variance from the data division. Repeated 10-fold-CV has smaller RMSE than basic 10-fold-CV, but then the cost of computation is already 100 times that of the original full posterior inference. These results show that K-fold-CV is needed only if the LOO approximations fail badly (see also the results in Vehtari and Lampinen 2002). As measured by root mean squared error, PSIS consistently performs well. In general, when IS-LOO has problems it is because of the high variance of the raw importance weights, while TIS-LOO and WAIC have problems because of bias. Table 3 shows a replication using 16,000 Stan draws for each example. The results are similar, and PSIS-LOO improves the most given the additional draws.

5 Standard errors and model comparison

We next consider some approaches for assessing the uncertainty of cross-validation and WAIC estimates of prediction error. We present these methods in a separate section rather than in our main development because, as discussed below, the diagnostics can be difficult to interpret when the sample size is small.
5.1 Standard errors

The computed estimates \(\widehat{\mathrm{elpd}}_\mathrm{loo}\) and \(\widehat{\mathrm{elpd}}_\mathrm{waic}\) are each defined as the sum of n independent components, so it is trivial to compute their standard errors by computing the standard deviation of the n components and multiplying by \(\sqrt{n}\). For example, define \(\widehat{\mathrm{elpd}}_{\mathrm{loo},i}\) to be the ith component of (4), so that \(\widehat{\mathrm{elpd}}_\mathrm{loo}=\sum _{i=1}^n \widehat{\mathrm{elpd}}_{\mathrm{loo},i}\) is the sum of these n independent terms. Then $$\begin{aligned} \text{ se }\,(\widehat{\mathrm{elpd}}_\mathrm{loo}) =\sqrt{n\,V_{i=1}^n \widehat{\mathrm{elpd}}_{\mathrm{loo},i}}, \end{aligned}$$ and similarly for WAIC and K-fold cross-validation. The effective numbers of parameters, \(\widehat{p}_\mathrm{loo}\) and \(\widehat{p}_\mathrm{waic}\), are also sums of independent terms so we can compute their standard errors in the same way. These standard errors come from considering the n data points as a sample from a larger population or, equivalently, as independent realizations of an error model. One can also compute Monte Carlo standard errors arising from the finite number of simulation draws using the formula from Gelman et al. (2013), which uses both between- and within-chain information and is implemented in Stan. In practice we expect Monte Carlo standard errors not to be so interesting because we would hope to have enough simulations that the computations are stable, but it could make sense to look at them just to check that they are low enough to be negligible compared to sampling error (which scales like \(1/n\) rather than \(1/S\)). The standard error (23) and the corresponding formula for \(\text{ se }\,(\widehat{\mathrm{elpd}}_\mathrm{waic})\) have two difficulties when the sample size is low. First, the n terms are not strictly independent because they are all computed from the same set of posterior simulations \(\theta ^s\). This is a generic issue when evaluating the standard error of any cross-validated estimate. Second, the terms in any of these expressions can come from highly skewed distributions, so the second moment might not give a good summary of uncertainty.
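A minimal Python sketch of this standard error computation (an illustration of the formula, not the loo package's implementation):

```python
import numpy as np

def elpd_and_se(pointwise_elpd):
    """elpd estimate and its standard error from n pointwise contributions.

    The estimate is the sum of the n terms; its standard error is
    sqrt(n * V), where V is the sample variance of the terms (equivalently,
    the standard deviation of the terms multiplied by sqrt(n)).
    """
    x = np.asarray(pointwise_elpd, dtype=float)
    n = len(x)
    return x.sum(), np.sqrt(n * x.var(ddof=1))
```

The same function applies unchanged to the pointwise contributions of WAIC, K-fold cross-validation, or the effective number of parameters.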
Both of these problems should subside as n becomes large. For small n, one could instead compute nonparametric error estimates using a Bayesian bootstrap on the computed log-likelihood values corresponding to the n data points (Vehtari and Lampinen 2002).

5.2 Model comparison

When comparing two fitted models, we can estimate the difference in their expected predictive accuracy by the difference in \(\widehat{\mathrm{elpd}}_\mathrm{loo}\) or \(\widehat{\mathrm{elpd}}_\mathrm{waic}\) (multiplied by \(-2\), if desired, to be on the deviance scale). To compute the standard error of this difference we can use a paired estimate to take advantage of the fact that the same set of n data points is being used to fit both models. For example, suppose we are comparing models A and B, with corresponding fit measures \(\widehat{\mathrm{elpd}}_\mathrm{loo}^A=\sum _{i=1}^n \widehat{\mathrm{elpd}}_{\mathrm{loo},i}^A\) and \(\widehat{\mathrm{elpd}}_\mathrm{loo}^B=\sum _{i=1}^n \widehat{\mathrm{elpd}}_{\mathrm{loo},i}^B\). The standard error of their difference is simply $$\begin{aligned} \text{ se }\,(\widehat{\mathrm{elpd}}_\mathrm{loo}^A- \widehat{\mathrm{elpd}}_\mathrm{loo}^B) =\sqrt{n\,V_{i=1}^n (\widehat{\mathrm{elpd}}_{\mathrm{loo},i}^A- \widehat{\mathrm{elpd}}_{\mathrm{loo},i}^B)}, \end{aligned}$$ and similarly for WAIC and K-fold cross-validation. Alternatively, the nonparametric Bayesian bootstrap approach can be used (Vehtari and Lampinen 2002). As before, these calculations should be most useful when n is large, because then non-normality of the distribution is not such an issue when estimating the uncertainty of these sums.
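The paired computation can be sketched in the same way (illustrative Python only; in the loo package the compare() function reports the analogous quantities):

```python
import numpy as np

def elpd_diff_and_se(pointwise_a, pointwise_b):
    """Difference in elpd between models A and B fit to the same n points,
    with its paired standard error sqrt(n * V_i(diff_i)).

    Pairing means taking the variance of the pointwise differences rather
    than combining the two separate standard errors, which exploits the
    correlation between the models' contributions at each data point.
    """
    d = (np.asarray(pointwise_a, dtype=float)
         - np.asarray(pointwise_b, dtype=float))
    return d.sum(), np.sqrt(len(d) * d.var(ddof=1))
```

Because the pointwise contributions of two models are typically positively correlated, the paired standard error is usually smaller than the one obtained by treating the two elpd estimates as independent.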
In any case, we suspect that these standard error formulas, for all their flaws, should give a better sense of uncertainty than what is obtained using the current standard approach of comparing differences of deviances to a \(\chi ^2\) distribution, a practice that is derived for Gaussian linear models or asymptotically and, in any case, applies only to nested models. Further research needs to be done to evaluate the performance in model comparison of (24) and the corresponding standard error formula for LOO. Cross-validation and WAIC should not be used to select a single model from among a large number of models, due to the selection-induced bias demonstrated, for example, by Piironen and Vehtari (2016). We demonstrate the practical use of LOO in model comparison using the radon example from Sect. 4.6. Model A is the multilevel linear model discussed in Sect. 4.6 and Model B is the same model but without the county-level uranium predictor. That is, at the county level Model B has $$\begin{aligned} \begin{pmatrix} \alpha _j \\ \beta _j \end{pmatrix} \sim \mathrm{N} \left( \begin{pmatrix} \mu _\alpha \\ \mu _\beta \end{pmatrix}, \begin{pmatrix} \sigma ^2_\alpha & \rho \sigma _\alpha \sigma _\beta \\ \rho \sigma _\alpha \sigma _\beta & \sigma ^2_\beta \end{pmatrix} \right) , \quad j = 1, \ldots , 85. \end{aligned}$$ Comparing the models on PSIS-LOO reveals an estimated difference in elpd of 10.2 (with a standard error of 5.1) in favor of Model A.

5.3 Model comparison using pointwise prediction errors

We can also compare models in their leave-one-out errors, point by point. We illustrate with an analysis of a survey of residents from a small area in Bangladesh that was affected by arsenic in drinking water.
Respondents with elevated arsenic levels in their wells were asked if they were interested in getting water from a neighbor's well, and a series of models were fit to predict this binary response given various information about the households (Gelman and Hill 2007). Here we start with a logistic regression for the well-switching response given two predictors: the arsenic level of the water in the resident's home, and the distance of the house from the nearest safe well. We compare this to an alternative logistic regression with the arsenic predictor on the logarithmic scale. The two models have the same number of parameters but give different predictions. Arsenic example, comparing two models in terms of their pointwise contributions to LOO: a comparing contributions of LOO directly; b plotting the difference in LOO as a function of a key predictor (the existing arsenic level). To aid insight, we have colored the data according to the (binary) output, with red corresponding to \(y=1\) and blue representing \(y=0\). For any given data point, one model will fit better than another, but for this example the graphs reveal that the difference in LOO between the models arises from the linear model's poor predictions for 10–15 non-switchers with high arsenic levels Figure 12 shows the pointwise results for the arsenic example. The scattered blue dots on the left side of Fig. 12a and on the lower right of Fig. 12b correspond to data points which Model A fits particularly poorly—that is, large negative contributions to the expected log predictive density. We can also sum these n terms to yield an estimated difference in \(\mathrm{elpd}_\mathrm{loo}\) of 16.4 with a standard error of 4.4. This standard error derives from the finite sample size and is scaled by the variation in the differences displayed in Fig. 12; it is not a Monte Carlo error and does not decline to 0 as the number of Stan simulation draws increases. 
6 Discussion
This paper has focused on the practicalities of implementing LOO, WAIC, and K-fold cross-validation within a Bayesian simulation environment, in particular the coding of the log-likelihood in the model, the computations of the information measures, and the stabilization of weights to enable an approximation of LOO without requiring refitting the model. Some difficulties persist, however. As discussed above, any predictive accuracy measure involves two definitions: (1) the choice of what part of the model to label as "the likelihood", which is directly connected to which potential replications are being considered for out-of-sample prediction; and (2) the factorization of the likelihood into "data points", which is reflected in the later calculations of expected log predictive density. Some choices of replication can seem natural for a particular dataset but less so in other comparable settings. For example, the 8 schools data are available only at the school level and so it seems natural to treat the school-level estimates as data. But if the original data had been available, we would surely have defined the likelihood based on the individual students' test scores. It is an awkward feature of predictive error measures that they might be determined based on computational convenience or data availability rather than fundamental features of the problem. To put it another way, we are assessing the fit of the model to the particular data at hand. Finally, these methods all have limitations. The concern with WAIC is that formula (12) is an asymptotic expression for the bias of lpd for estimating out-of-sample prediction error and is only an approximation for finite samples. 
Cross-validation (whether calculated directly by re-fitting the model to several different data subsets, or approximated using importance sampling as we did for LOO) has a different problem in that it relies on inference from a smaller subset of the data being close to inference from the full dataset, an assumption that is typically but not always true. For example, as we demonstrated in Sect. 4.1, in a hierarchical model with only one data point per group, PSIS-LOO and WAIC can dramatically understate prediction accuracy. Another setting where LOO (and cross-validation more generally) can fail is in models with weak priors and sparse data. For example, consider logistic regression with flat priors on the coefficients and data that happen to be so close to separation that the removal of a single data point can induce separation and thus infinite parameter estimates. In this case the LOO estimate of average prediction accuracy will be zero (that is, \(\widehat{\mathrm{elpd}}_\mathrm{is-loo}\) will be \(-\infty \)) if it is calculated to full precision, even though predictions of future data from the actual fitted model will have bounded loss. Such problems should not arise asymptotically with a fixed model and increasing sample size but can occur with actual finite data, especially in settings where models are increasing in complexity and are insufficiently constrained. That said, quick estimates of out-of-sample prediction error can be valuable for summarizing and comparing models, as can be seen from the popularity of AIC and DIC. For Bayesian models, we prefer PSIS-LOO and K-fold cross-validation to those approximations which are based on point estimation. The loo R package is available from CRAN and https://github.com/stan-dev/loo. The corresponding code for Matlab, Octave, and Python is available at https://github.com/avehtari/PSIS.

Footnotes

In Gelman et al. (2013), the variance-based \(p_\mathrm{waic}\) defined here is called \(p_{\mathrm{waic}\, 2}\). There is also a mean-based formula, \(p_{\mathrm{waic}\, 1}\), which we do not use here.

Smoothed density estimates were made using a logistic Gaussian process (Vehtari and Riihimäki 2014).

As expected, the two slightly high estimates for \(\hat{k}\) correspond to particularly influential observations, in this case houses with extremely low radon measurements.

10-fold-CV results were not computed for data sets with \(n\le 11\), and 10 times repeated 10-fold-CV was not feasible for the radon example due to the computation time required.

The code in the generated quantities block is written using the new syntax introduced in Stan version 2.10.0.

For models fit to large datasets it can be infeasible to store the entire log-likelihood matrix in memory. A function for computing the log-likelihood from the data and posterior draws of the relevant parameters may be specified instead of the log-likelihood matrix (the necessary data and draws are supplied as an additional argument), and columns of the log-likelihood matrix are then computed as needed. This requires less memory than storing the entire log-likelihood matrix and allows loo to be used with much larger datasets.

In statistics there is a tradition of looking at deviance, while in computer science the log score is more popular, so we return both.

The extract_log_lik() function used in the example is a convenience function for extracting the log-likelihood matrix from a fitted Stan model, provided that the user has computed and stored the pointwise log-likelihood in their Stan program (see, for example, the generated quantities block in Appendix 1). The argument parameter_name (defaulting to "log_lik") can also be supplied to indicate which parameter or generated quantity corresponds to the log-likelihood.

Acknowledgments

We thank Bob Carpenter, Avraham Adler, Joona Karjalainen, Sean Raleigh, Sumio Watanabe, and Ben Lambert for helpful comments, Juho Piironen for R help, Tuomas Sivula for the Python port, and the U.S.
National Science Foundation, Institute of Education Sciences, and Office of Naval Research for partial support of this research.

Appendix: Implementation in Stan and R

Appendix 1: Stan code for computing and storing the pointwise log-likelihood

We illustrate how to write Stan code that computes and stores the pointwise log-likelihood using the arsenic example from Sect. 5.3. We save the program in the file logistic.stan: We have defined the log-likelihood as a vector log_lik in the generated quantities block so that the individual terms will be saved by Stan.6 It would seem desirable to compute the terms of the log-likelihood directly without requiring the repetition of code, perhaps by flagging the appropriate lines in the model or by identifying the log-likelihood as those lines in the model that are defined relative to the data. But there are so many ways of writing any model in Stan (anything goes as long as it produces the correct log posterior density, up to an arbitrary constant) that we cannot see any general way at this time to compute LOO and WAIC without repeating the likelihood part of the code. The good news is that the additional computations are relatively cheap: sitting as they do in the generated quantities block (rather than in the transformed parameters and model blocks), the expressions for the terms of the log posterior need only be computed once per saved iteration rather than once per HMC leapfrog step, and no gradient calculations are required.

Appendix 2: The loo R package for LOO and WAIC

The loo R package provides the functions loo() and waic() for efficiently computing PSIS-LOO and WAIC for fitted Bayesian models using the methods described in this paper.
These functions take as their argument an \(S \times n\) log-likelihood matrix, where S is the size of the posterior sample (the number of retained draws) and n is the number of data points.7 The required means and variances across simulations are calculated and then used to compute the effective number of parameters and LOO or WAIC. The loo() function returns \(\widehat{\mathrm{elpd}}_\mathrm{loo}\), \(\widehat{p}_\mathrm{loo}\), \(\mathrm{looic} =-2\, \widehat{\mathrm{elpd}}_\mathrm{loo}\) (to provide the output on the conventional scale of "deviance" or AIC),8 the pointwise contributions of each of these measures, and standard errors. The waic() function computes the analogous quantities for WAIC. Also returned by the loo() function is the estimated shape parameter \({\hat{k}}\) for the generalized Pareto fit to the importance ratios for each leave-one-out distribution. These computations could also be implemented directly in Stan C++, perhaps following the rule that the calculations are performed if there is a variable named log_lik. The loo R package, however, is more general and does not require that a model be fit using Stan, as long as an appropriate log-likelihood matrix is supplied.

Using the loo package

Below, we provide R code for preparing and running the logistic regression for the arsenic example in Stan. After fitting the model we then use the loo package to compute LOO and WAIC.9 The printed output shows \(\widehat{\mathrm{elpd}}_\mathrm{loo}\), \(\widehat{p}_\mathrm{loo}\), \(\mathrm{looic}\), and their standard errors: By default, the estimates of the shape parameter \(\hat{k}\) of the generalized Pareto distribution are also checked and a message is displayed informing the user if any \(\hat{k}\) are problematic (see the end of Sect. 2.1). In the example above the message tells us that all of the estimates for \(\hat{k}\) are fine.
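The core computation on the \(S \times n\) log-likelihood matrix is compact. The following is an illustrative Python rendering of the WAIC quantities (a sketch of the formulas, not the loo package itself; the function name is ours):

```python
import numpy as np
from scipy.special import logsumexp

def waic_from_log_lik(log_lik):
    """WAIC from an S x n matrix of pointwise log-likelihood draws.

    For each data point i:
      lpd_i    = log( (1/S) * sum_s exp(log_lik[s, i]) )
      p_waic_i = sample variance over draws of log_lik[:, i]
    Then elpd_waic = sum_i (lpd_i - p_waic_i) and waic = -2 * elpd_waic.
    Returns (elpd_waic, p_waic, waic).
    """
    log_lik = np.asarray(log_lik, dtype=float)
    S = log_lik.shape[0]
    lpd = logsumexp(log_lik, axis=0) - np.log(S)   # stable log-mean-exp
    p_waic = log_lik.var(axis=0, ddof=1)           # pointwise penalty terms
    elpd_waic = float(np.sum(lpd - p_waic))
    return elpd_waic, float(p_waic.sum()), -2.0 * elpd_waic
```

The log-mean-exp is computed via logsumexp for numerical stability, mirroring the reason the loo functions work with the log-likelihood matrix rather than the likelihood itself.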
However, if any \(\hat{k}\) were between \(1/2\) and \(1\) or greater than \(1\), the message would instead look something like this: If there are any warnings then it can be useful to visualize the estimates to check which data points correspond to the large \({\hat{k}}\) values. A plot of the \({\hat{k}}\) estimates can be generated using plot(loo1), and the list returned by the loo function also contains the full vector of \({\hat{k}}\) values.

Model comparison

To compare this model to a second model on their values of LOO we can use the compare function: This new object, loo_diff, contains the estimated difference of expected leave-one-out prediction errors between the two models, along with the standard error:

Code for WAIC

For WAIC the code is analogous and the objects returned have the same structure (except there are no Pareto \(k\) estimates). The compare() function can also be used to estimate the difference in WAIC between two models:

Appendix 3: Using the loo R package with rstanarm models

Here we show how to fit the model for the radon example from Sect. 4.6 and carry out PSIS-LOO using the rstanarm and loo packages. After fitting the models we can pass the fitted model objects modelA and modelB directly to rstanarm's loo method and it will call the necessary functions from the loo package internally. This returns: If there are warnings about large values of the estimated Pareto shape parameter \({\hat{k}}\) for the importance ratios, rstanarm is also able to automatically carry out the procedure we call PSIS-LOO+ (see Sect. 4.7). That is, rstanarm can refit the model, leaving out these problematic observations one at a time and computing their elpd contributions directly. These values are then combined with the results from PSIS-LOO for the other observations and returned to the user. We recommend this when there are only a few large \({\hat{k}}\) estimates.
If there are many of them then we recommend \(K\)-fold cross-validation, which is also implemented in the latest release of rstanarm.

Appendix 4: Stan code for \(K\)-fold cross-validation

To implement \(K\)-fold cross-validation we repeatedly partition the data; for each partition we fit the model to the training set and use it to predict the holdout set. The code for cross-validation does not look so generic because of the need to repeatedly partition the data. However, in any particular example the calculations are not difficult to implement, the main challenge being the increase in computation time by roughly a factor of \(K\). We recommend doing the partitioning in R (or Python, or whichever data-processing environment is being used) and then passing the training data and holdout data to Stan in two pieces. Again we illustrate with the logistic regression for the arsenic example. We start with the model from above, but we pass in both the training data (N_t, y_t, X_t) and the holdout set (N_h, y_h, X_h), augmenting the data block accordingly. We then alter the generated quantities block to operate on the holdout data: LOO could also be implemented in this way, setting \(N_t\) to \(N-1\) and \(N_h\) to 1. But, as discussed in the article, for large datasets it is more practical to approximate LOO using importance sampling on the draws from the posterior distribution fit to the entire dataset.

References

Akaike, H.: Information theory and an extension of the maximum likelihood principle. In: Petrov, B.N., Csaki, F. (eds.) Proceedings of the Second International Symposium on Information Theory, pp. 267–281. Akademiai Kiado, Budapest (1973)
Ando, T., Tsay, R.: Predictive likelihood for Bayesian model selection and averaging. Int. J. Forecast. 26, 744–763 (2010)
Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Stat. Surv.
4, 40–79 (2010)MathSciNetCrossRefzbMATHGoogle Scholar Bernardo, J.M., Smith A.F.M.: Bayesian Theory. Wiley, New York (1994)Google Scholar Burman, P.: A comparative study of ordinary cross-validation, \(v\)-fold cross-validation and the repeated learning-testing methods. Biometrika 76, 503–514 (1989)MathSciNetCrossRefzbMATHGoogle Scholar Epifani, I., MacEachern, S.N., Peruggia, M.: Case-deletion importance sampling estimators: central limit theorems and related results. Electron. J. Stat. 2, 774–806 (2008)MathSciNetCrossRefzbMATHGoogle Scholar Gabry, J., Goodrich, B.: rstanarm: Bayesian applied regression modeling via Stan. R package version 2.10.0. (2016). http://mc-stan.org/interfaces/rstanarm Geisser, S., Eddy, W.: A predictive approach to model selection. J. Am. Stat. Assoc. 74, 153–160 (1979)MathSciNetCrossRefzbMATHGoogle Scholar Gelfand, A.E.: Model determination using sampling-based methods. In: Gilks, W.R., Richardson, S., Spiegelhalter, D.J. (eds.) Markov Chain Monte Carlo in Practice, pp. 145–162. Chapman and Hall, London (1996)Google Scholar Gelfand, A.E., Dey, D.K., Chang, H.: Model determination using predictive distributions with implementation via sampling-based methods. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (eds.) Bayesian Statistics, 4th edn, pp. 147–167. Oxford University Press, Oxford (1992)Google Scholar Gelman, A., Carlin, J.B., Stern, H.S., Dunson, D.B., Vehtari, A., Rubin, D.B.: Bayesian Data Analysis, 3rd edn. CRC Press, London (2013)zbMATHGoogle Scholar Gelman, A., Hill, J.: Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Cambridge (2007)Google Scholar Gelman, A., Hwang, J., Vehtari, A.: Understanding predictive information criteria for Bayesian models. Stat. Comput. 24, 997–1016 (2014)MathSciNetCrossRefzbMATHGoogle Scholar Gneiting, T., Raftery, A.E.: Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc. 
102, 359–378 (2007)Google Scholar Hoeting, J., Madigan, D., Raftery, A.E., Volinsky, C.: Bayesian model averaging. Stat. Sci. 14, 382–417 (1999)MathSciNetCrossRefzbMATHGoogle Scholar Hoffman, M.D., Gelman, A.: The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 15, 1593–1623 (2014)MathSciNetzbMATHGoogle Scholar Ionides, E.L.: Truncated importance sampling. J. Comput. Graph. Stat. 17, 295–311 (2008)MathSciNetCrossRefGoogle Scholar Koopman, S.J., Shephard, N., Creal, D.: Testing the assumptions behind importance sampling. J. Econom. 149, 2–11 (2009)MathSciNetCrossRefzbMATHGoogle Scholar Peruggia, M.: On the variability of case-deletion importance sampling weights in the Bayesian linear model. J. Am. Stat. Assoc. 92, 199–207 (1997)MathSciNetCrossRefzbMATHGoogle Scholar Piironen, J., Vehtari, A.: Comparison of Bayesian predictive methods for model selection. Stat. Comput. (2016) (In press). http://link.springer.com/article/10.1007/s11222-016-9649-y Plummer, M.: Penalized loss functions for Bayesian model comparison. Biostatistics 9, 523–539 (2008)CrossRefzbMATHGoogle Scholar R Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria (2016). https://www.R-project.org/ Rubin, D.B.: Estimation in parallel randomized experiments. J. Educ. Stat. 6, 377–401 (1981)Google Scholar Spiegelhalter, D.J., Best, N.G., Carlin, B.P., van der Linde, A.: Bayesian measures of model complexity and fit. J. R. Stat. Soc. B 64, 583–639 (2002)MathSciNetCrossRefzbMATHGoogle Scholar Spiegelhalter, D., Thomas, A., Best, N., Gilks, W., Lunn, D.: BUGS: Bayesian inference using Gibbs sampling. MRC Biostatistics Unit, Cambridge, England (1994, 2003). http://www.mrc-bsu.cam.ac.uk/bugs/ Stan Development Team: The Stan C++ Library, version 2.10.0 (2016a). http://mc-stan.org/ Stan Development Team: RStan: the R interface to Stan, version 2.10.1 (2016b). 
http://mc-stan.org/interfaces/rstan.html Stone, M.: An asymptotic equivalence of choice of model cross-validation and Akaike's criterion. J. R. Stat. Soc. B 36, 44–47 (1977)MathSciNetzbMATHGoogle Scholar van der Linde, A.: DIC in variable selection. Stat. Neerl. 1, 45–56 (2005)MathSciNetCrossRefzbMATHGoogle Scholar Vehtari, A., Gelman, A.: Pareto smoothed importance sampling (2015). arXiv:1507.02646 Vehtari, A., Gelman, A., Gabry, J.: loo: Efficient leave-one-out cross-validation and WAIC for Bayesian models. R package version 0.1.6 (2016a). https://github.com/stan-dev/loo Vehtari, A., Mononen, T., Tolvanen, V., Sivula, T., Winther, O.: Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models. J. Mach. Learn. Res. 17, 1–38 (2016b)Google Scholar Vehtari, A., Lampinen, J.: Bayesian model assessment and comparison using cross-validation predictive densities. Neural Comput. 14, 2439–2468 (2002)CrossRefzbMATHGoogle Scholar Vehtari, A., Ojanen, J.: A survey of Bayesian predictive methods for model assessment, selection and comparison. Stat. Surv. 6, 142–228 (2012)MathSciNetCrossRefzbMATHGoogle Scholar Vehtari, A., Riihimäki, J.: Laplace approximation for logistic Gaussian process density estimation and regression. Bayesian Anal. 9, 425–448 (2014)MathSciNetCrossRefzbMATHGoogle Scholar Watanabe, S.: Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. J. Mach. Learn. Res. 11, 3571–3594 (2010)MathSciNetzbMATHGoogle Scholar Zhang, J., Stephens, M.A.: A new and efficient estimation method for the generalized Pareto distribution. Technometrics 51, 316–325 (2009)MathSciNetCrossRefGoogle Scholar 1.Department of Computer Science, Helsinki Institute for Information Technology HIITAalto UniversityEspooFinland 2.Department of StatisticsColumbia UniversityNew YorkUSA Vehtari, A., Gelman, A. & Gabry, J. Stat Comput (2017) 27: 1413. 
https://doi.org/10.1007/s11222-016-9696-4 First Online 30 August 2016
Simulating the Coronal Evolution of Bipolar Active Regions to Investigate the Formation of Flux Ropes

S. L. Yardley, D. H. Mackay & L. M. Green

Solar Physics, volume 296, Article number: 10 (2021)

The coronal magnetic field evolution of 20 bipolar active regions (ARs) is simulated from their emergence to decay using the time-dependent nonlinear force-free field method of Mackay, Green, and van Ballegooijen (Astrophys. J. 729, 97, 2011). A time sequence of cleaned photospheric line-of-sight magnetograms, which covers the entire evolution of each AR, is used to drive the simulation. A comparison of the simulated coronal magnetic field with the 171 and 193 Å observations obtained by the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) is made for each AR by manual inspection. The results show that it is possible to reproduce the evolution of the main coronal features, such as small- and large-scale coronal loops, filaments and sheared structures, for 80% of the ARs. Varying the boundary and initial conditions, along with the addition of physical effects such as Ohmic diffusion, hyperdiffusion and a horizontal magnetic field injection at the photosphere, improves the match between the observations and simulated coronal evolution by 20%. The simulations were able to reproduce the build-up to eruption for 50% of the observed eruptions associated with the ARs. The mean unsigned time difference between the eruptions occurring in the observations and the time of eruption onset in the simulations was found to be ≈5 hrs. The simulations were particularly successful in capturing the build-up to eruption for all four eruptions that originated from the internal polarity inversion line of the ARs.
The technique was less successful in reproducing the onset of eruptions that originated from the periphery of ARs and large-scale coronal structures. For these cases global, rather than local, nonlinear force-free field models must be used. While the technique has shown some success, eruptions that occur in quick succession are difficult to reproduce by this method and future iterations of the model need to address this.

The solar corona is highly complex in nature. The source of its complexity is largely due to the presence of magnetic fields that are generated in the tachocline (Spiegel and Zahn, 1992): a region close to the base of the convection zone (Charbonneau, 2010, 2014). When magnetic flux tubes at the base of the convection zone become unstable to buoyancy (Parker, 1955; Zwaan, 1985) they rise and the magnetic field breaks through the solar surface, manifesting itself as an active region (AR) in the photosphere. The magnetic flux emerges in a non-potential state (Leka et al., 1996) and is further modified by the action of photospheric flows. This results in free magnetic energy being available to drive solar eruptive phenomena. ARs are the source of a wide range of atmospheric solar activity, and the type and level of activity is dependent on the evolutionary stage of the AR (for a review of AR evolution see van Driel-Gesztelyi and Green 2015). As a result, it is important to understand the structure and evolution of the magnetic field of an AR over its entire lifetime, from emergence to decay. It is currently difficult to measure the magnetic field in the corona, and extreme ultraviolet (EUV) observations of AR coronal loops can only provide indirect and limited information about the coronal structure of ARs. An alternative approach for the analysis of the coronal structure of ARs is to construct a model of the coronal magnetic field by using the photospheric magnetic field as the lower boundary condition.
This approach relies on the approximation that the corona, a low plasma-\(\beta \) environment that mostly remains in equilibrium, is "force-free". This means that the coronal magnetic field must satisfy the criterion of \(\boldsymbol{j} \times \boldsymbol{B} = 0\) where \(\boldsymbol{j} = \alpha \boldsymbol{B}\). In the case of nonlinear force-free (NLFF) fields the torsion parameter \(\alpha \) is a scalar function that can vary as a function of position, but must remain constant along magnetic field lines. There are numerous NLFF field techniques that can be used to generate models of the coronal magnetic field. These NLFF field models can be divided into two categories: models that are static or time-dependent. Static models either use a vector magnetogram as the lower boundary condition and extrapolate the NLFF fields into the corona (e.g. Schrijver et al. 2006, De Rosa et al. 2009, Canou and Amari 2010, Wiegelmann and Sakurai 2012, Jiang et al. 2014), or they take an initial coronal field, which is either a potential or linear force-free (LFF), and evolve this field into a NLFF state. The latter approach can make use of the magnetofrictional relaxation technique (Yang, Sturrock, and Antiochos, 1986) to generate a static model of the magnetic field of an AR. Examples of static modelling using magnetofrictional relaxation include the magnetofrictional extrapolation method of Valori, Kliem, and Keppens (2005) and the flux rope insertion method (van Ballegooijen, 2004; Bobra, van Ballegooijen, and DeLuca, 2008; Savcheva et al., 2012; Yardley et al., 2019). The extrapolation methods mentioned above produce a coronal field model at a single snapshot in time. A series of independent, static extrapolations may be produced but there is no direct evolution from one extrapolation to the next. The magnetofrictional relaxation technique can also be used as a simulation method to construct a continuous time-dependent series of NLFF fields. 
In this case, the normal component of the magnetic field is specified along with an initial field and a time series of horizontal boundary motions. The resulting coronal structures are due to the applied boundary motions injecting non-potentiality into the corona over timescales of hours or days. The coronal field, which is in non-equilibrium, is then relaxed back to a NLFF field equilibrium using magnetofrictional relaxation. This has been applied to global simulations (Mackay and van Ballegooijen, 2006a,b), where a flux transport model is applied at the photospheric boundary, and to simulating AR evolution using a time series of line-of-sight (LoS) magnetograms (Mackay, Green, and van Ballegooijen, 2011; Gibb et al., 2014) or, more recently, vector magnetograms (e.g. Pomoell, Lumme, and Kilpua 2019). In the recent study by Yardley, Mackay, and Green (2018b) a continuous time-dependent series of NLFF field models of AR 11437 was created using the time-dependent NLFF field method of Mackay, Green, and van Ballegooijen (2011). Photospheric LoS magnetograms from the SDO/Helioseismic and Magnetic Imager (HMI) instrument were used as lower boundary conditions to drive the simulation and continuously evolve the coronal field through a series of NLFF equilibria. When the results from the simulation were compared to SDO/AIA observations it was found that the simulation was able to capture the majority of the characteristics of the coronal field evolution. Flux ropes that formed in the simulation showed signatures of eruption onset for two out of three of the observed eruptions, approximately 1 and 10 hrs before the eruptions occurred in the observations. A parameter study was also conducted to test whether varying the initial condition and boundary conditions, along with the inclusion of Ohmic diffusion, hyperdiffusion, and an additional horizontal magnetic field injection at the photosphere, affects the coronal evolution and timings of the eruption onset.
The results showed that the coronal evolution and timings of eruption onset were not significantly changed by these variations and inclusions, indicating that the main element in replicating the coronal field evolution is the Poynting flux from the boundary evolution of the LoS magnetograms. AR 11437 is also included in the current study. In this paper, we extend the set of simulations carried out in Yardley, Mackay, and Green (2018b) of a single AR by simulating the coronal magnetic field evolution of 20 bipolar ARs. The observational analysis of the same set of bipolar ARs was conducted by Yardley et al. (2018a) in order to probe the role of flux cancellation as an eruption trigger mechanism. The study of Yardley et al. (2018a) analysed both photospheric and coronal observations taken by SDO over the entire lifetime of the ARs. Through simulating a much larger sample of ARs we can obtain more general results than those found in Yardley, Mackay, and Green (2018b), which only considered a single region (AR 11437). We aim to determine whether the simulation of a series of NLFF fields using the magnetofrictional technique can capture the coronal evolution and also the build-up phase that brings the coronal field to the point of eruption. The analysis carried out here is similar to that of Yardley, Mackay, and Green (2018b), in which the NLFF field method was tested. However, due to the large-scale analysis of 20 ARs, the results are presented in less detail than those given in Yardley, Mackay, and Green (2018b). The outline of the paper is as follows. Section 2 outlines the observations, including the criteria for AR selection, the coronal evolution and the eruptions produced by each AR. Section 3 describes the technique used to simulate the coronal field, including the lower boundary conditions used. Results from the simulations can be found in Section 4, which includes simulations using the simplest initial and boundary conditions and also the inclusion of additional effects.
Section 5 discusses the results and Section 6 provides a conclusion to the study.

AR Selection

The 20 ARs presented in Yardley et al. (2018a) are the same regions used in this study. We now briefly summarise the data selection method used by Yardley et al. (2018a) to identify and select these ARs and refer the reader to that paper for more details on each region. ARs were selected using the following criteria:

i) The ARs must be bipolar and have low complexity. The regions must have two dominant photospheric magnetic polarities with no major mixing of the opposite polarities.

ii) The ARs must be isolated, with minimal interaction occurring between the AR and other ARs or the background quiet-Sun magnetic field.

iii) The ARs must be observable from their first emergence and form east of central meridian. This allows the full evolution from emergence to decay to be simulated during disk transit.

iv) The ARs' first emergence must be no more than 60∘ from central meridian, as instrumental effects become increasingly significant at large centre-to-limb angles.

These selection criteria led to a sample of 20 ARs being chosen during the HMI era, spanning a time period from March 2012 to November 2015. All ARs, apart from AR 11867, were monitored during their flux emergence and decay phases, which included dispersal and flux cancellation. AR 11867 remained in its emergence phase during the time period studied and did not exhibit flux cancellation at its internal PIL. Representative AR examples are given in Figure 1, with Supplementary Movie 1 showing the full evolution of AR 11446. Table 1 provides summary information on AR locations, photospheric flux evolution, and observed eruption times taken from Yardley et al. (2018a). Photospheric flux values were obtained using the 720 s data series (Couvidat et al., 2016) generated by the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) on board the Solar Dynamics Observatory (SDO; Pesnell, Thompson, and Chamberlin 2012).
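As a minimal illustration, the four selection criteria above can be expressed as a simple filter over candidate region records. This is a sketch only: the record keys (`bipolar`, `isolated`, `observed_from_emergence`, `east_of_cm`, `emergence_lon`) are hypothetical names introduced here, not part of the authors' pipeline.

```python
def select_ars(candidates, max_lon=60.0):
    """Apply selection criteria i)-iv) to a list of candidate AR records.

    Each record is a dict with hypothetical keys:
      'bipolar', 'isolated', 'observed_from_emergence', 'east_of_cm' (bools)
      'emergence_lon' (degrees from central meridian at first emergence).
    """
    return [
        ar for ar in candidates
        if ar["bipolar"]                           # i) bipolar, low complexity
        and ar["isolated"]                         # ii) minimal interaction with other flux
        and ar["observed_from_emergence"]          # iii) seen from first emergence ...
        and ar["east_of_cm"]                       # ... and forms east of central meridian
        and abs(ar["emergence_lon"]) <= max_lon    # iv) within 60 degrees of central meridian
    ]
```

A region emerging at 75∘ from central meridian, for example, would be rejected by criterion iv) regardless of its other properties.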
Figure 1: SDO/HMI LoS magnetograms that show three AR examples (ARs 11437, 11446, and 11680). The images show each AR at the time of the peak unsigned magnetic flux measurement, where unsigned refers to half the total absolute positive and negative flux. The saturation levels of the images are ±100 G, with white (black) representing positive (negative) photospheric magnetic field. As an example, the entire photospheric field evolution of AR 11437 can be seen online in Supplementary Movie 1.

Table 1: The 20 bipolar ARs simulated in this study. The table includes the NOAA number assigned to the AR and the heliographic coordinates of the AR at the time of emergence. The value of peak unsigned flux (half the total absolute positive and negative flux), the start of emergence, peak unsigned flux, and end of observation times are also given. The timings of the events that originate from low altitude along the internal PIL, along external PILs, and from high altitude are listed in the final columns. The time and GOES class of four flares and the timings of the two CMEs that are observed in LASCO/C2 and associated with the ARs are given in the footnotes. The AR properties in this Table have been taken from the previous study of Yardley et al. (2018a).

Coronal Evolution and Eruptive Activity

The observed coronal evolution of each AR was analysed in Yardley et al. (2018a) in order to identify the time and location of any eruptions. These ejections are referred to as eruptions as opposed to CMEs because the coronal signatures in the EUV data are relatively subtle and most do not show any clear evidence of a CME in the white-light coronagraph data. This implies that they are either confined/failed eruptions or are ejective but have a low plasma density. The coronal evolution was monitored using both 171 and 193 Å images taken by the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) on board SDO.
AIA provides full-disk observations with a high spatial and temporal resolution of 1.5″ and 12 s, respectively. Two or more of the following coronal signatures were used to identify the occurrence of an eruption: the eruption of a filament or an EUV loop system, the rapid disappearance of coronal loops and post-eruption arcade formation (flare arcade), flares and flare ribbons, and/or coronal dimmings. As detailed in Yardley et al. (2018a), the eruptions were then categorized into the following types to investigate which eruptive structures might have formed as a consequence of flux cancellation:

Internal PIL events are the eruption of a low-altitude structure originating along the internal PIL of the AR.

External PIL events are the eruption of a low-altitude structure originating along an external PIL that is formed between the periphery of the AR and the magnetic field of the quiet Sun.

High-altitude events are the eruption of a high-altitude structure that cannot be associated with an internal/external PIL (which are at low altitude).

In total, 24 eruptions were observed, with 13 of the 20 ARs producing at least one ejection. Eight of these ARs produced low corona events originating from either the internal or external PIL and the other five produced high-altitude events. Two of the eruptions were observed as a CME in the LASCO/C2 coronagraph data. There were also four B/C GOES-class flares associated with four ARs that did not occur at the time of the eruptions. For examples of the different event categories see Figure 1 in Yardley et al. (2018a). The timings of these events, which are also taken from Yardley et al. (2018a), are given in Table 1.

The NLFF Field Simulation

Coronal Magnetic Field Evolution

The NLFF field method of Mackay, Green, and van Ballegooijen (2011) is applied to SDO/HMI LoS magnetograms to simulate the evolution of the coronal magnetic field of each AR.
A key element of this method is that the magnetic field evolves through a continuous time series, both at the photosphere and in the coronal volume, in which magnetic flux is preserved. Therefore, the coronal magnetic field evolution can be analysed. When using our method we do not apply any additional observational constraints such as the use of EUV coronal images; rather, the solution obtained at any one time is purely based on the initial field, the applied boundary motions and any additional coronal physics (see Section 4.2). This technique has been previously tested on AR 11437 (Yardley, Mackay, and Green, 2018b), one of the ARs also included in this study. Therefore, the quantitative analysis that has previously been carried out for AR 11437 will not be repeated in this paper. Here, we present the overarching results from the qualitative analysis of 20 bipolar ARs, where each AR has been studied using the methodology described in Yardley, Mackay, and Green (2018b). A time series of NLFF fields is generated using HMI LoS magnetograms for each lower boundary condition (see Section 3.2). The HMI LoS magnetograms are cleaned and re-scaled before the simulations are carried out. The clean-up procedure includes time-averaging, low magnetic flux value removal, removal of small-scale magnetic elements, and, if required, flux balancing. This procedure ensures that the large-scale AR evolution is kept but small-scale quiet-Sun elements and random noise are removed (see Appendix A for more details). In the simulation, the evolution of the 3D magnetic field \(\boldsymbol{B}\) is described by $$ \frac{\partial \boldsymbol{A}}{\partial t} = \boldsymbol{v} \times \boldsymbol{B}, \qquad (1) $$ where \(\boldsymbol{A}\) represents the magnetic vector potential, \(\boldsymbol{B} = \nabla \times \boldsymbol{A}\) is the magnetic field, and \(\boldsymbol{v}\) is the magnetofrictional velocity.
The magnetofrictional relaxation technique of Yang, Sturrock, and Antiochos (1986) is employed to ensure that the coronal field is evolved through a series of force-free equilibria. Therefore, the magnetofrictional velocity inside the computational box takes the form $$ \boldsymbol{v} = \frac{1}{\nu '} \boldsymbol{j} \times \boldsymbol{B}, \qquad (2) $$ where \({\nu '}\) is the friction coefficient and \(\boldsymbol{j} = \nabla \times \boldsymbol{B}\). The coefficient of friction ensures that, as the magnetic field is perturbed by motions at the boundary, the field remains close to a force-free equilibrium in the corona. A Cartesian staggered grid is used to carry out the computations to obtain second-order accuracy for \(\boldsymbol{A}\), \(\boldsymbol{B}\), and \(\boldsymbol{j}\). The computational domain represents the solar corona, where the photosphere is represented by the bottom of the box. The size of the computational domain ranges from \(0 < x, y, z < 6\) in non-dimensionalized units, where the size of the computational box in physical units is on the order of \(10^{5}\) km. The exact domain size depends upon the dimensions of the original magnetograms and how the magnetograms are re-scaled within the computational box (see Section 3.2). The sides of the computational domain have closed boundary conditions, whereas the top of the box can have either open or closed boundaries. When the top of the computational domain is open the magnetograms do not need to be flux balanced. However, when the top of the box is closed the magnetograms require flux balancing to ensure that \(\nabla \cdot \boldsymbol{B} = 0\) in the computational volume. In this particular study, both open and closed boundary conditions are used for the top of the box. The generation of the photospheric boundary and initial conditions are described below.
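To make the relaxation step concrete, here is a minimal sketch of one magnetofrictional update: compute \(\boldsymbol{B} = \nabla \times \boldsymbol{A}\), \(\boldsymbol{j} = \nabla \times \boldsymbol{B}\), \(\boldsymbol{v} = (\boldsymbol{j} \times \boldsymbol{B})/\nu '\), then advance \(\boldsymbol{A}\). This is an illustrative toy on a collocated, periodic grid with centred differences, not the authors' code, which uses a staggered grid with closed side boundaries.

```python
import numpy as np

def curl(F, d):
    """Curl of a vector field F (shape (3, n, n, n)) on a periodic grid
    with spacing d, using centred differences built from np.roll."""
    Fx, Fy, Fz = F
    def ddx(a): return (np.roll(a, -1, 0) - np.roll(a, 1, 0)) / (2 * d)
    def ddy(a): return (np.roll(a, -1, 1) - np.roll(a, 1, 1)) / (2 * d)
    def ddz(a): return (np.roll(a, -1, 2) - np.roll(a, 1, 2)) / (2 * d)
    return np.array([ddy(Fz) - ddz(Fy),
                     ddz(Fx) - ddx(Fz),
                     ddx(Fy) - ddy(Fx)])

def magnetofrictional_step(A, d, dt, nu=1.0):
    """One relaxation step: B = curl A, j = curl B, v = (j x B)/nu',
    then advance A by dA/dt = v x B."""
    B = curl(A, d)
    j = curl(B, d)
    v = np.cross(j, B, axis=0) / nu   # magnetofrictional velocity
    return A + dt * np.cross(v, B, axis=0)
```

A quick sanity check on any such implementation: for a force-free field (\(\boldsymbol{j} \parallel \boldsymbol{B}\)) the frictional velocity vanishes, so \(\boldsymbol{A}\) is a fixed point of the update.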
Photospheric Boundary Conditions

To be able to simulate the full evolution of the bipolar ARs we use the full-disk HMI 720 s LoS magnetograms (hmi.M_720s series). For each AR, we use a time sequence of LoS magnetograms with a chosen cadence of 96 minutes. We create cut-outs of the magnetograms centred on each AR and apply clean-up processes to the time series of partial disk magnetograms (see Appendix A). We use LoS magnetograms in this study as we want to simulate the full evolution of ARs from emergence to decay. We would also like to quantify how well this computationally efficient modelling technique that uses the LoS magnetograms performs in simulating the coronal evolution of a large number of ARs. Regarding the cadence used: prior to the present study, we conducted a number of investigations varying the cadence of the HMI magnetograms from 12 minutes to 3 hours (Gibb, 2015) and found very similar results. Therefore, we have chosen to use a medium cadence of 96 minutes, as it is sufficient to capture the large-scale evolution of the ARs. Also, any future L5 space weather mission is likely to have a cadence more comparable to that of the Michelson Doppler Imager (MDI) rather than the cadence presently provided by HMI. Initially, each simulation is run using a relatively simple set-up. That is, a potential magnetic field is used as the initial condition, along with either a closed or open boundary at the top of the computational volume. The simulation results are then compared with the observations to determine whether there is a good agreement between the two. This is assessed by comparing the evolution of the simulated coronal field to the coronal evolution in SDO/AIA 171 and 193 Å observations by visual inspection, using the qualitative scoring criteria given in Section 4.1. If the simulation results do not provide a good fit to the observations then the simulation is re-run, varying a number of terms one-by-one.
First a LFF field initial condition is used, then a variety of additional physical effects are included in succession until a better fit is achieved (see Section 4.2 and also the method of Yardley, Mackay, and Green 2018b). For the present simulations we only use a potential or a LFF field as the initial condition. The ARs modelled in this study are young ARs, with the majority emerging at a centre-to-limb angle of around 60∘ longitude. Due to the large distance from central meridian the vector magnetograms (where they may exist) contain significant errors, and these errors could introduce spurious results in the simulation. Therefore, using an initial NLFF field condition is currently beyond the scope of this paper, but this will be considered in a future study. The simulations use the cleaned LoS magnetograms (see Appendix A), which have been scaled to a lower resolution of \(256^{2}\), as the lower boundary conditions. The original size of the magnetograms depends upon the size of the AR, but the LoS magnetograms are always larger than \(256^{2}\). To take into account boundary effects, the magnetograms are also re-scaled to fill 60–70% of the area of the bottom of the computational box. The simulation generates a continuous series of lower boundary conditions using the corrected LoS magnetograms that are designed to replicate, pixel by pixel, the LoS magnetograms every 96 minutes. The series of cleaned magnetograms give the prescribed distribution of \(B_{z}\) on the base. Hence the horizontal components of the vector potential (\(A_{xb}, A_{yb}\)) are determined on the base for each discrete time interval of 96 minutes by solving for the scalar potential \(\phi \), where \(\boldsymbol{A} = \nabla \times (\phi \hat{\boldsymbol{z}}) \).
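Since \(\boldsymbol{A} = \nabla \times (\phi \hat{\boldsymbol{z}})\) implies \(B_{z} = -\nabla _{\perp }^{2}\phi \), the horizontal components \((A_{xb}, A_{yb})\) follow from a 2-D Poisson solve. The sketch below does this spectrally, assuming for simplicity periodic horizontal boundaries and a flux-balanced \(B_{z}\); it is an illustrative stand-in for, not a reproduction of, the authors' solver.

```python
import numpy as np

def vector_potential_from_bz(bz, d=1.0):
    """Given B_z on an (n x n) periodic grid with spacing d, solve
    lap(phi) = -B_z spectrally and return A_x = d(phi)/dy, A_y = -d(phi)/dx,
    so that A = curl(phi * zhat) reproduces B_z = dA_y/dx - dA_x/dy.
    B_z is assumed flux balanced (zero mean); the k = 0 mode is set to zero."""
    n = bz.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=d)          # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid division by zero
    phi_hat = np.fft.fft2(bz) / k2                   # -k^2 phi_hat = -bz_hat
    phi_hat[0, 0] = 0.0                              # undetermined mean mode
    ax = np.real(np.fft.ifft2(1j * ky * phi_hat))    # A_x = d(phi)/dy
    ay = np.real(np.fft.ifft2(-1j * kx * phi_hat))   # A_y = -d(phi)/dx
    return ax, ay
```

For a single Fourier mode, e.g. \(B_{z} = \sin x \cos y\), the spectral solve is exact and recovers \(A_{x} = -\tfrac{1}{2}\sin x \sin y\), \(A_{y} = -\tfrac{1}{2}\cos x \cos y\).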
To specify the evolution of \(B_{z}\) on the base in terms of \(A_{xb}\) and \(A_{yb}\) between the prescribed distributions, the rate of change of the horizontal components of the magnetic vector potential, and therefore an electric field, is determined. To evolve \(A_{xb} (t)\) and \(A_{yb} (t)\) to \(A_{xb} (t+1)\) and \(A_{yb} (t+1)\) we assume that the process is applied linearly between each discrete time interval \(t\) and \(t+1\), where \(t\) represents the discrete 96 minute time index. Therefore, the horizontal components (\(A_{xb}, A_{yb}\)) are linearly interpolated between each 96 minute time interval to produce a time sequence that is continuous between the observed distributions. Thus, every 96 minutes the simulated photospheric field identically matches that found in the cleaned observations. By using this technique, we are effectively evolving the magnetic field from one fixed magnetogram to the next. Also, undesirable effects such as the pile-up of magnetic flux at sites of flux cancellation and numerical overshoot do not occur. As the surface field evolves in this manner it injects electric currents and free energy into the coronal field, which responds through Equation 1. This numerical method means that there are two timescales involved in the lower boundary condition evolution. The first timescale is set by the 96 minute time cadence of the observations and the second is the linear evolution timescale. The second timescale is introduced to advect the photospheric magnetic polarities between the observed states, inject Poynting flux into the corona and relax the coronal field. The method applied to interpolate the boundary magnetic field is very similar to Gibb et al.
(2014) and Yardley, Mackay, and Green (2018b); however, to satisfy the Courant–Friedrichs–Lewy (CFL) condition the timestep is determined from the minimum cell crossing time for the magnetofrictional velocity or the diffusion terms, and its maximum is equal to a fifth of this value. Within the simulations the initial condition satisfies the Coulomb gauge. In addition to this, during the evolution of the field between the fixed points given by the magnetograms, we also maintain the Coulomb gauge. This is carried out numerically by including a \({\mathbf{\nabla }} \cdot \mathbf{A}\) term, which does not affect the value of the magnetic field in the simulations. The complete description of this process can be found in Mackay and van Ballegooijen (2009), Mackay, Green, and van Ballegooijen (2011) and the references therein.

Magnetic Field Evolution

The simulated coronal field evolution of the 20 bipolar ARs will now be discussed for the simplest case, where a potential field is used as the initial condition and the top boundary of the computational box is closed. To determine whether the simulated coronal evolution is able to capture that of the real Sun in each AR, the simulated field is compared to the SDO/AIA 171 and 193 Å plasma emission structures by manual inspection. The main coronal features that are used to make the comparison between the observed coronal structure and simulated coronal magnetic field of each AR include small- and large-scale coronal loops, filaments and sheared structures. The 171 and 193 Å wavebands are used for the comparison as the evolution of coronal loops, filaments and sheared structures is well captured in these wavebands compared to the other AIA wavebands. These wavebands are also the primary wavebands that were analysed in the observational study of Yardley et al. (2018a).
The simulated magnetic field and observed coronal plasma emission structures are then compared at various times (roughly once per day, see Figure 2) throughout the evolution of each AR.

Figure 2. Example ARs for each scoring criterion (1, 2, and 3). The odd rows (1, 3, and 5) show the evolution of the ARs through SDO/AIA 171 Å images, where the NOAA AR number is labelled on the image in the final column (panels d, h, and l). The even rows (2, 4, and 6) show the corresponding simulated coronal field at the time of the coronal observation shown directly above, where the score is labelled on the simulated coronal field in the final column. The red (blue) contours shown in the images of the simulated coronal field evolution represent the positive (negative) photospheric magnetic field polarities. The black arrows indicate the observed coronal features, such as sheared structures, filaments, and small- and large-scale loops, which are reproduced by the simulations.

The simulation results are also analysed to determine whether or not there is good agreement between the timings and locations of the ejections seen in the observations and the corresponding signatures of eruption onset in the simulations. The following criteria are used to assign a score that quantitatively describes the level of agreement between the simulations and observations:

Score 1: The simulation reproduces the main coronal features (small- and large-scale loops, filaments, and sheared structures) for the majority of the AR evolution; the match between the observations and simulations is deemed good. If an eruption is observed to originate from the AR, then the simulation must successfully model the build-up to the eruption within a ±12 hr window pre- or post-observed eruption time. If there are multiple observed eruptions, the simulation must successfully follow the build-up to eruption for the majority of the eruptions associated with the AR.
Score 2: Some of the coronal features (small- and large-scale loops, filaments, and sheared structures) seen in the observations are reproduced by the simulation for most of the AR evolution; the match between the coronal features present in the observations and the simulations is deemed acceptable. If one or more eruptions are observed to originate from the AR, the build-up phase may or may not be followed by the simulation for any eruption.

Score 3: Few or none of the coronal features (small- and large-scale loops, filaments, and sheared structures) seen in the observations are reproduced for most of the AR evolution; the evolution of the simulated coronal field is deemed not to match the observed coronal evolution. The simulation fails to model the build-up to eruption for any observed eruptions associated with the AR.

An example AR for each of the scoring criteria is shown in Figure 2, which compares the observed coronal evolution (odd rows) to the simulated coronal evolution (even rows). The first example shows AR 11437 (Score 1), where the sheared J-shaped structure and the small- and large-scale coronal loops present in the observations are captured by the simulation for the majority of the AR evolution (see black arrows in Figure 2). The simulation is also able to replicate the build-up to the point of eruption for two of the three observed eruptions. The signatures of eruption onset in the simulations are discussed below. The second example shows AR 12455 (Score 2), where the simulation reproduces the structure of the small- and large-scale coronal loops, although the match to the observations is better in the northern part of the AR than in the south (black arrows in Figure 2). No eruptions were observed to be associated with this AR.
Finally, for AR 12229 (Score 3) the simulation is unable to reproduce the structure of the small- and large-scale loops seen in the observations of the AR. The eruption onset signatures, which indicate that a loss of equilibrium has occurred in the simulation, are not present for any of the four eruptions observed to originate from this AR. The simulations carried out here focus on modelling the build-up of non-potential magnetic fields and flux ropes within ARs. We do not try to reproduce and follow the dynamics of the observed eruptions, as full magnetohydrodynamic (MHD) simulations are required to do this (e.g. see Rodkin et al. 2017). Therefore, to determine whether the simulations successfully follow the build-up to eruption, the simulated coronal field evolution was examined for signatures of eruption onset. The signatures in the simulations that indicate the build-up to an eruption are: (i) a rising flux rope, which subsequently reaches the top or side boundaries of the computational box, indicating that a loss of equilibrium has occurred; and (ii) reconnection occurring underneath the flux rope, which leads to small, more potential loops forming beneath the flux rope, similar to the post-eruption (flare) arcades visible in the observations. These signatures of eruption onset in the simulations must occur at the same location and time as those identified in the observations. The simulation results are analysed in a time window of ≈12 hrs pre- and post-observed eruption for the above signatures of eruption onset. The signatures of eruption onset in the simulation of AR 11437 are shown in Figure 3. In this case, a flux rope, which has formed along the internal PIL, rises in the domain and reconnection occurs underneath it. This leads to small, more potential loops forming below the flux rope axis. Eventually the flux rope reaches the side boundary of the domain.
A similar scenario is seen in the observations, where a sheared structure, and post-eruption loops that form underneath this structure, are observed at the same location as in the simulations.

Figure 3. Sample field lines of AR 11437 showing the presence of a flux rope in the simulation in the build-up to eruption, in comparison to the eruption signatures in the observations. The series of plots in the top panel show a cross-section in the \(x\)–\(z\) plane where the axis of a flux rope and a post-eruption arcade are visible. The middle panel shows the eruption onset of the flux rope in the \(x\)–\(y\) plane. In this case, the side and top boundaries of the computational box are closed, so the flux rope is unable to escape the coronal volume; however, it does reach the side boundaries of the domain. The bottom panel shows the eruption signatures present in the 171 Å observations. The red and blue contours represent the positive and negative photospheric magnetic polarities, respectively.

For the simplest case, where the coronal evolution of each of the 20 bipolar ARs is simulated using a potential field initial condition and a closed top boundary, the results (see Table 2) are as follows. The NLFF field method is able to capture the majority of the coronal structure for ten ARs, a reasonable amount of the structure for six ARs, and little or no structure for four ARs (see Table 3). Therefore, the method captures a reasonable amount of the structure for 80% of the AR sample, and fails to capture the structure for 20% of the ARs.

Table 2. Results of the NLFF field simulations. The table includes the NOAA AR number, the number of time steps or magnetograms used to simulate the coronal evolution, and whether there was a flux imbalance present between the positive and negative photospheric polarities of the AR.
This is followed by the agreement between the simulations and observations, the number of observed eruptions for which the simulation can follow the build-up to eruption, and the time difference between the signatures of flux rope eruption onset in the simulations and the eruptions in the observations. The final column gives additional information, such as the location of the AR during emergence if close to 60°, as well as the surrounding magnetic field environment the region emerges into. It also gives the initial conditions, boundary conditions, and additional global parameters that were used to improve the performance of the simulation. If improvements were made, the new score is given in brackets in column four.

Table 3. The simulation performance results. The number and percentage of ARs in each scoring category when using the simplest initial and boundary conditions in the simulation are given. The results for the simulations where additional global parameters, Gaussian smoothing, and LFF field conditions are used are given in brackets.

In total, the simulations are able to successfully follow the build-up to eruption in a ≈12 hr window prior to or post-eruption for 12 out of the 24 observed eruptions. The time difference between eruption onset in the simulations and the time determined from observations is given in Figure 4 for each AR. The time of eruption onset in the simulation is taken as the time halfway between the time step at which the signatures of eruption onset are first identified and the previous time step, where there are no signatures of eruption onset. By time step we are referring to the primary timescale of the simulation, which is set by the cadence of the magnetograms, in this case 96 minutes. The time of eruption onset identified from the simulation is then compared to the eruption time taken from the observations to give the time difference.
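The onset-time bookkeeping described above — the midpoint between the first magnetogram step showing onset signatures and the preceding step, compared to the observed eruption time within a ±12 hr window — can be sketched as follows. The function names and index convention (step 0 at the start of the run) are our assumptions:

```python
def onset_time_estimate(step_with_signatures, cadence_minutes=96.0):
    """Simulated eruption-onset time (minutes from the run start), taken
    halfway between the first time step at which onset signatures appear
    and the previous step.  Steps count applied magnetograms at the
    96-minute cadence."""
    return (step_with_signatures - 0.5) * cadence_minutes

def onset_difference_hours(step_with_signatures, t_obs_minutes,
                           cadence_minutes=96.0):
    """Unsigned difference between simulated and observed onset, in hours."""
    t_sim = onset_time_estimate(step_with_signatures, cadence_minutes)
    return abs(t_sim - t_obs_minutes) / 60.0

def within_analysis_window(step_with_signatures, t_obs_minutes,
                           window_hours=12.0):
    """True if the simulated onset falls inside the +/- 12 hr window
    around the observed eruption time."""
    return onset_difference_hours(step_with_signatures,
                                  t_obs_minutes) <= window_hours
```

For example, signatures first appearing at step 3 give an estimated onset midway through the third magnetogram interval, which is then differenced against the observed eruption time.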
The mean time difference between the initiation of the eruption in the simulations and in the observations is ≈5 hrs, with a standard deviation of ≈4 hrs. It is possible to successfully follow the build-up to eruption in the simulations for all four eruptions (100%) that were observed by Yardley et al. (2018a) to originate from low in the corona along the internal PIL.

Figure 4. The time difference, in hours, between the eruption onset of flux ropes in the simulation of each AR and the eruption seen in the observations. Signatures of eruption onset occur between two time steps in the simulation, so the values represent the time difference between the observed eruptions and the central time between the two time steps in the simulation.

This indicates that by applying the method of Mackay, Green, and van Ballegooijen (2011) to construct a time series of NLFF fields, using the simplest initial and boundary conditions, it is possible to capture the key features of the observable coronal structures in the sample of ARs. To improve on these results, the effects on the simulated coronal magnetic field of additional physical processes, and of varying the initial and boundary conditions, are examined in the following section.

Consequences of Additional Physical Effects

Although it is possible to simulate the coronal field evolution of an AR using only the LoS magnetic field as the lower boundary condition combined with a potential field as the initial condition, such a simple model does not work in all cases. Several issues were encountered in the simulations when using the simplest initial and boundary conditions (an initial potential field and a closed top boundary). The first is the presence of highly twisted field near the side boundaries of the box. Such boundary effects can be rectified by re-scaling the magnetograms to occupy a smaller area at the bottom of the computational box during the clean-up procedure (Appendix A).
If the magnetograms contain large amounts of small-scale magnetic field that affect the simulated coronal evolution, these fields can be removed by smoothing the magnetograms with a Gaussian kernel (see Appendix B). This process is applied in addition to the clean-up procedure detailed in Appendix A. If the simulation runs for long time periods, twisted magnetic field can build up in the computational volume. Adding coronal diffusion, in the form of Ohmic diffusion or hyperdiffusion, can help prevent the build-up of highly twisted field by decreasing the amount of poloidal flux. However, despite the inclusion of additional coronal diffusion terms, flux ropes are still able to form and reach instability in the simulation, and the overall evolution of the simulated coronal field remains largely unaffected (Mackay and van Ballegooijen, 2006a; Yardley, Mackay, and Green, 2018b). In the simplest simulation setup, the energy and non-potentiality of the coronal field originate only from the Poynting flux due to horizontal motions. For the cases where the simple model is insufficient to describe the observations (ARs with a score of 2 or 3), additional physical effects could be acting. For example, the initial configuration of the coronal magnetic field could be non-potential, and therefore a LFF field initial condition could be implemented to represent any non-potential effects present before the start of the simulation. When a LFF field initial condition is used, the force-free parameter \(\alpha \) is assigned a small value with a magnitude of \(10^{-9}\) – \(10^{-8}\) m\(^{-1}\) (see Table 2), to match the weak shear seen in the coronal observations. The range of the force-free parameter is constrained by the size of the computational domain, as \(\alpha \) scales as \(1/L\), where \(L\) varies from one AR to the next. This is because the LFF field solution must decay (rather than oscillate) with height.
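The \(1/L\) scaling of the admissible force-free parameter can be illustrated as follows. For a linear force-free field, a Fourier mode with horizontal wavenumber \(k\) varies with height as \(\exp(-\sqrt{k^{2}-\alpha^{2}}\,z)\), so a decaying (non-oscillatory) solution requires \(|\alpha| < k\) for the longest horizontal wavelength. A sketch of the resulting bound, assuming periodic horizontal boundary conditions so that the smallest wavenumber is \(2\pi/L\) (the prefactor is an assumption; the \(1/L\) scaling is the point made in the text):

```python
import numpy as np

def max_decaying_alpha(L):
    """Largest |alpha| (in units of 1/length) for which the
    longest-wavelength mode, k_min = 2*pi/L, still decays with height:
    the decay rate sqrt(k_min**2 - alpha**2) must stay real, so
    |alpha| < k_min = 2*pi/L."""
    return 2.0 * np.pi / L

# For an AR-sized domain of L ~ 600 Mm = 6e8 m, the bound is of order
# 1e-8 m^-1, consistent with the 1e-9 - 1e-8 m^-1 values quoted above.
```

Doubling the domain size halves the admissible \(|\alpha|\), which is why the adopted range varies from one AR to the next.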
The sign of \(\alpha \) is taken from the sense of twist of the magnetic tongues present in the observations (Luoni et al., 2011). The sign and value of \(\alpha \) in our simulations are therefore selected in a similar manner to our previous study (Yardley, Mackay, and Green, 2018b). There may also be other sources of energy or helicity injection that are not captured by the evolution of the normal component of the magnetic field and have to be taken into account, such as the presence of vertical motions or torsional Alfvén waves. Along with these additional injection mechanisms, non-ideal processes may also have to be considered. These effects are implemented one at a time in the simulation by modifying the induction equation to include three additional terms: $$ \frac{\partial \boldsymbol{A}}{\partial t} = \boldsymbol{v} \times \boldsymbol{B} - \eta \boldsymbol{j} + \frac{\boldsymbol{B}}{B^{2}} \nabla \cdot (\eta _{4} B^{2} \nabla \alpha ) - \nabla _{z} (\zeta B_{z}), $$ $$ \alpha = \frac{\boldsymbol{B} \cdot \nabla \times \boldsymbol{B}}{B^{2}}. $$ The first additional term is Ohmic diffusion, where \(\eta \) is the resistive coefficient. The second additional term is hyperdiffusion (Boozer, 1986; Strauss, 1988; Bhattacharjee and Yuan, 1995). This diffusion term is artificial and is introduced to reduce gradients in the force-free parameter \(\alpha \), while the total magnetic helicity remains conserved (van Ballegooijen and Mackay, 2007). The third additional term represents the injection of a horizontal magnetic field, or twist component, at the photospheric boundary. In this term \(\nabla _{z}\) is the vertical component of the gradient operator and \(\zeta \) is an injection parameter with the dimensions of a diffusivity. The parameter \(\zeta \) is non-zero only at the photospheric boundary (\(z=0\)); hence, the injection of the horizontal field occurs only at this location.
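To illustrate how the Ohmic term above acts on the vector potential, note that in the Coulomb gauge \(\boldsymbol{j} = -\nabla^{2}\boldsymbol{A}\) (taking \(\mu_{0}=1\)), so \(-\eta\boldsymbol{j}\) diffuses \(\boldsymbol{A}\) and smooths out small-scale currents. A minimal 1-D explicit sketch under these assumptions (this is not the code used in the simulations; the grid and boundary treatment are illustrative):

```python
import numpy as np

def ohmic_diffusion_step(A, eta, dx, dt):
    """One explicit step of the Ohmic term in the induction equation:
    dA/dt = -eta * j = eta * laplacian(A) in the Coulomb gauge with
    mu_0 = 1.  1-D, second-order central differences, fixed endpoints.
    A is one component of the vector potential on a uniform grid."""
    lap = np.zeros_like(A)
    lap[1:-1] = (A[2:] - 2.0 * A[1:-1] + A[:-2]) / dx**2
    return A + dt * eta * lap
```

A localised spike in \(A\) spreads to its neighbours while the interior integral of \(A\) is preserved, which is the smoothing behaviour the term is introduced to provide; the stable step size is set by the diffusive CFL limit \(\mathrm{d}t \lesssim \mathrm{d}x^{2}/(2\eta)\).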
This term leads to a change in \(A_{z}\) half a grid point into the domain, and the subsequent injection of a horizontal magnetic field and magnetic helicity into the corona. Applying this injection through \(A_{z}\) leaves the vertical component of the magnetic field unchanged. The sign of the injection parameter \(\zeta \) determines the sign of the magnetic helicity that is injected via the horizontal field: a positive (negative) value of \(\zeta \) leads to the injection of negative (positive) magnetic helicity. Once injected, the horizontal field and twist component propagate upwards along the magnetic field lines through the \(\boldsymbol{v} \times \boldsymbol{B}\) term in the induction equation above (Equation 3). This term is mathematically equivalent to that used in Mackay, DeVore, and Antiochos (2014) to model the helicity condensation process of Antiochos (2013). In the present simulations this term does not represent helicity condensation; rather, it is used to add a non-potential contribution that is not captured by a potential field initial condition or by the applied horizontal motions on the photospheric surface alone. Additional sources of helicity may originate from the prior evolution of an AR that is not captured by the initial potential field, from the presence of vertical motions, or from the propagation of torsional Alfvén waves from below the photosphere into the corona. The additional injection of horizontal magnetic field at the photosphere, along with the Ohmic and hyperdiffusion terms, is included in the simulation through user-defined constants. We now modify the top boundary and initial condition, and include non-ideal terms in the simulations, to determine whether it is possible to improve the simulation results obtained for the ARs in Section 4.1 where only a reasonable or minimal amount of the coronal structure was captured (ARs assigned a score of 2 or below).
To improve the results obtained using the simplest initial and boundary conditions, additional physical effects, Gaussian smoothing, and LFF field initial conditions are used. The simulations performed for each AR to improve the previous results are described in the comments (final) column of Table 2. If the performance of the simulation improved, the new score is included in brackets in the Score (fourth) column of Table 2. An example can be seen in Figure 5, where AR 12455 improves from a score of 2 to 1. The original simulation captured the evolution of the large-scale coronal loops in the north of the AR relatively well, but failed to reproduce the large-scale loops present in the south. It also failed to capture the bright core of the AR (see Figure 5 (a)). By removing the small-scale magnetic field at the AR periphery using a Gaussian kernel and then introducing Ohmic diffusion, the simulation is able to replicate the small- and large-scale coronal structure for the entire AR evolution, including the sheared structure present at the start.

Figure 5. The SDO/AIA 171 Å images (a–d) and corresponding sample field lines (e–l) showing the evolution of AR 12455. The second row (e–h) shows sample field lines from the simulation run with a closed top boundary and an initial potential field, i.e. the simplest initial and boundary conditions. The third row (i–l) shows the results when Ohmic diffusion is added with \(\eta = 25\) km\(^{2}\) s\(^{-1}\) and the small-scale field has been removed. The score given to each simulation is shown at the bottom right of panels (h) and (l). The positive (negative) photospheric magnetic field is represented by the red (blue) contours.

The new results are as follows. The enhanced simulations are able to capture the majority of the coronal structure for 12 ARs, a reasonable amount of structure for five ARs, and little or no coronal structure for three ARs (see brackets in Table 3).
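The category counts above translate into the quoted percentages straightforwardly; a trivial bookkeeping sketch (the function name is ours):

```python
def score_percentages(counts):
    """Percentage of ARs per scoring category.  Categories 1 and 2
    together give the fraction of the sample for which at least a
    reasonable amount of coronal structure is captured."""
    total = sum(counts.values())
    return {score: 100.0 * n / total for score, n in counts.items()}
```

With the enhanced-run counts {1: 12, 2: 5, 3: 3}, categories 1 and 2 sum to 85% and category 3 gives 15%, the figures quoted in the text.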
The new results show that one AR moved from scoring category 3 to 2 and two ARs moved from 2 to 1, indicating an overall improvement of 20% when a mix of additional physical effects is included. The NLFF field simulation is therefore able to capture a reasonable amount of the structure for 85% of the ARs, failing to capture the structure for only 15% of the sample. This is a slight improvement on the previous result, where the simplest initial and boundary conditions were used. The improvement is mainly due to the use of a LFF field initial condition; however, the application of Gaussian smoothing to remove small-scale magnetic field near the AR periphery and the addition of Ohmic diffusion also improved the results. When considering the build-up to eruption in the simulations, no improvement is made on the previous results, as the simulations again successfully follow the build-up to eruption for 12 out of the 24 observed eruptions. We have used the method of Mackay, Green, and van Ballegooijen (2011) to simulate the full coronal evolution of 20 bipolar ARs, from emergence to decay, using a time series of LoS magnetograms as the lower boundary condition. Reproducing the full coronal evolution of the ARs requires a series of magnetograms that extends over the entire lifespan of each AR. Numerous clean-up processes (see Appendix A) were applied to the raw magnetograms, including time-averaging, removal of isolated features, removal of low flux values, and flux balancing, before carrying out the simulations. The application of these procedures produces a series of cleaned magnetograms with a smooth and continuous evolution of the photospheric magnetic field.
By using a series of cleaned magnetograms as the lower boundary condition, it is easier to simulate the large-scale coronal magnetic field evolution of the ARs, as the inclusion of small-scale magnetic elements and random noise could potentially lead to numerical problems in the simulations. The method has not yet been tested using vector magnetograms as the lower boundary conditions of the simulation. However, an initial qualitative comparison between the vector components at the simulation boundary and the observed vector data of one AR (AR 11561) in our sample shows relatively good agreement (see Appendix C for more details). In a follow-up study we will expand on this qualitative comparison between the simulated and observed vector magnetic field components. Initially, the ARs were simulated using the simplest initial and boundary conditions, i.e. a potential field initial condition and a closed top boundary. We conclude, after a manual comparison with the observations, that the simulations reproduced a reasonable amount of the coronal structure and evolution for 80% of the ARs. This result is improved slightly, to 85%, by applying Gaussian smoothing to remove additional small-scale magnetic field in the magnetograms, using a LFF field initial condition, and including additional effects such as non-ideal terms in the simulations. For the ARs where the simulation failed to reproduce the main coronal features, particularly during the early stages of the AR evolution, a NLFF field initial condition may be more appropriate. We will implement a NLFF field initial condition in the future by constructing a NLFF field extrapolation using the technique described by Valori, Kliem, and Keppens (2005). In this approach a potential field is extrapolated from a magnetogram, the horizontal field components are set using a vector magnetogram, and magnetofrictional relaxation is applied to relax the magnetic field to a force-free equilibrium.
We do not constrain the simulations with coronal observations; therefore, the coronal structures reproduced by the simulation are the result of the non-potential effects produced by the boundary evolution. The accuracy of the coronal field models in this study has therefore been judged qualitatively, by manual inspection and visual comparison with the coronal observations. To make the comparison to observations less time-consuming, and to remove the subjective nature of this analysis, an optimisation method could be developed to minimise the deviation between the field lines from the simulation and the intensity observations. Such a technique will be considered in future studies. We now discuss when and where the NLFF field simulations were able to reproduce the build-up to eruption. By reproducing the build-up to eruption, we mean the ability to identify a flux rope that has formed in the simulation and loses equilibrium, or becomes unstable, at the same location and at a similar time to the eruption that occurred in the observations. We do not aim to recreate the full dynamics of the eruptions, as this requires an MHD simulation. The simulations were able to replicate the formation and eruption onset of flux rope structures at the internal PIL of an AR where the flux rope was created by flux cancellation and magnetic reconnection occurring at low atmospheric heights. Signatures of eruption onset were found in the simulations for all four low-corona eruptions that originated from the internal PIL of ARs 11437, 11561, 11680, and 12382. The simulations were analysed within a ±12 hr window of the eruption occurring in the observations, and the mean unsigned time difference between eruption onset in the simulations and the observed eruptions in these four ARs was found to be ≈5 hrs.
These simulation results support the van Ballegooijen and Martens (1989) scenario and show that the physical processes can be replicated on a timescale similar to that over which the Sun evolves. The technique failed to capture the onset of some of the eruptions that originated from low in the corona along an external PIL, or at high altitudes. There are a number of possible reasons for this. Capturing the initiation of eruptions that occurred during the early stages of the simulations proved challenging, since these eruptions occur during the flux emergence phase of the ARs. Using an initial potential field condition, combined with the short time over which the coronal field has been evolved, means that insufficient shear and free energy will have built up in the simulated coronal field. To combat this issue we can vary the initial or boundary conditions and include additional non-ideal effects in the simulation. For six of the ARs we constructed a LFF field initial condition to see how this affected the results. We chose the magnitude and sign of the force-free parameter to reflect the weak shear seen in the coronal observations. Ideally, vector data would be used to calculate the value of \(\alpha \) for constructing the LFF field initial conditions. In the future, we aim to use the observed \(\alpha \) value for the LFF field initial condition in our simulations, or a NLFF field initial condition where possible. The simulation method also fails to capture the eruption onset for ejections that occur in quick succession, as it is impossible to separate them from one another in the simulation. Recreating the dynamics of multiple eruptions over short timescales requires the use of full MHD simulations. For example, four eruptions from external PILs were observed to occur in quick succession during the first 12 hrs of the emergence phase of AR 12229.
The build-up to these eruptions was not captured by the simulation, and this AR accounted for a large number of the missed eruptions. There was also a large imbalance in the magnetic flux during emergence, due to the AR emerging at ≈50° longitude into negative quiet-Sun magnetic field. Additionally, eruptions that originated along an external PIL were observationally found to occur due to flux cancellation between the periphery of the AR and quiet-Sun magnetic field during the emergence phase (Yardley et al., 2018a). Eruptions that form at the external PIL are harder to simulate because much of the small-scale field is removed during the "cleaning" process or is not included in the local simulations. At present the simulation method is designed to capture the local and internal evolution of the ARs. In Yardley et al. (2018a) the origin of each high-altitude event was not studied in detail, but it was suggested that they could be the result of the formation of a high-altitude structure during the evolution of the AR, or of the destabilisation of a pre-existing external structure. The build-up to the high-altitude eruptions observed in ARs 11446, 11886, and 12336 was not replicated by the simulations. This could be addressed in future work by using a NLFF field initial condition on a case-by-case basis, if a flux rope is present at the start or during the early stages of the simulation. However, if the high-altitude events are a result of the destabilisation of pre-existing structures, then this technique will not be able to capture their formation. Therefore, to capture the onset of eruptions that arise from the interaction of external magnetic fields or large-scale coronal structures, non-local effects need to be taken into account by using global NLFF field models (e.g. Mackay and van Ballegooijen 2006a) to simulate the evolution of the large-scale corona.
Presently, we have focussed on simulating the evolution of a set of relatively simple, bipolar ARs that produce faint eruption signatures and a limited number of CMEs. This is, however, necessary to test the method before simulating larger, more complex ARs. In the future, we will simulate a broader range of ARs, including multipolar regions and large AR complexes that produce multiple CMEs. Given the results of the applied technique, simulating larger, multipolar, and non-isolated regions should be possible, but will require a larger computational domain.

In this study, the coronal evolution of 20 bipolar ARs was simulated from emergence to decay. The simulations were carried out to test whether the evolution of the coronal magnetic field through a series of NLFF states, driven by boundary motions, could successfully reproduce the observed coronal features of the ARs and the onset of eruption. The coronal magnetic field evolution was simulated by applying the NLFF field method of Mackay, Green, and van Ballegooijen (2011) to LoS magnetograms taken by SDO/HMI, which were used as the lower boundary conditions. The simulated coronal field evolution for each AR was manually compared to the 171 and 193 Å emission structures seen by SDO/AIA. The first simulation results were obtained using the simplest initial and boundary conditions, i.e. a potential field initial condition and a closed top boundary. Using this approach, it was possible to reproduce a reasonable amount of the coronal structure and evolution for 80% of the AR sample. In total, the build-up to eruption was successfully followed in the simulations, within a ±12 hr window of the eruptions occurring in the observations, for 12 of the 24 (50%) observed eruptions.
To improve the simulation results we varied the boundary (from closed to open) and initial condition (from potential to LFF), and included additional parameters such as Ohmic diffusion, hyperdiffusion, and an additional injection of horizontal magnetic field and magnetic helicity. We also took into account boundary effects by re-scaling the magnetogram at the bottom of the computational box, and removed small-scale magnetic features that affect the large-scale evolution of the coronal field by applying Gaussian smoothing to the magnetograms. These steps were applied in addition to the clean-up processes and were carried out one at a time. Through considering various combinations of additional terms there was a slight improvement in the results, as one AR moved from scoring category 3 to 2 and two ARs moved from category 2 to 1. Therefore, by varying the boundary and initial conditions and including additional physical effects in the simulation there was an overall improvement of 20%. Overall, the simulations were able to capture a reasonable amount of coronal structure for 85% of the AR sample, failing to capture the structure of only 15% of the regions. Despite varying the boundary and initial conditions and including additional global parameters, the simulations are only able to successfully follow the build-up to eruption for 50% of the observed eruptions associated with the AR sample. For the successful cases, the key component in reproducing the coronal evolution and build-up to eruption is the use of LoS magnetograms as the lower boundary conditions to the simulations, as changing the side/top boundary conditions or the initial condition, and including additional physical effects, had an insignificant effect on the simulated coronal field evolution. The unsigned mean time difference between the signatures of eruption onset in the simulations and the observed eruptions was ≈5 hrs.
The simulations were carried out over a time period of roughly 96 – 120 hrs; therefore, a mean time difference between eruption onset occurring in the observations compared to the simulations of ≈5 hrs is a very favourable result (within three applied magnetograms). As current space weather forecasting methods can only provide a warning post-eruption, 1 – 3 days before the arrival of a CME at Earth, with an uncertainty of 12 hrs, our results are well within the present time error. Also, as our approach is computationally efficient, we can reproduce the coronal magnetic field evolution of ARs over several days within a few hours of computation time on a desktop machine. In fact, Pagano, Mackay, and Yardley (2019a,b) have demonstrated how eruption metrics based on the NLFF field simulations may be used to distinguish eruptive from non-eruptive ARs. This work has also demonstrated how it is possible to provide near-real-time alerts of eruptions using the observed LoS magnetograms, the NLFF field simulations, and the projection of the simulations forward in time. The analysis carried out in these studies includes four ARs taken from our AR sample in this paper. The initial results from Pagano, Mackay, and Yardley (2019a,b) are promising, but additional work is required, including addressing the issues outlined in Section 5, before the method can identify the exact eruption time and be implemented for CME forecasting purposes. In summary, the full coronal magnetic field evolution of 20 bipolar ARs was simulated using the time-dependent NLFF field method of Mackay, Green, and van Ballegooijen (2011). Using this method, it was possible to reproduce the main coronal features present in the observations for 85% of the AR sample. The simulations were also able to successfully follow the build-up to and onset of eruption within a ±12 hr window for 12 out of the 24 eruptions (50%) that were identified in the observations.
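As an illustration (not code from the study), the two timing statistics used above, the mean unsigned onset-time difference and the fraction of eruptions caught within a ±12 hr window, can be sketched as follows. The onset times in the example are hypothetical placeholders, not values from the paper.

```python
# Sketch of the eruption-timing metrics quoted in the text: the mean unsigned
# onset-time difference and the fraction of eruptions whose simulated onset
# falls within a +/-12 hr window of the observed onset.

def timing_metrics(observed_hrs, simulated_hrs, window_hrs=12.0):
    """Return (mean unsigned difference in hours, fraction within the window)."""
    diffs = [abs(s - o) for o, s in zip(observed_hrs, simulated_hrs)]
    mean_diff = sum(diffs) / len(diffs)
    fraction_within = sum(d <= window_hrs for d in diffs) / len(diffs)
    return mean_diff, fraction_within

# Hypothetical onset times (hours from the start of each simulation):
observed = [30.0, 55.0, 72.0, 96.0]
simulated = [27.0, 60.0, 80.0, 94.0]
mean_diff, fraction = timing_metrics(observed, simulated)
```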
The mean unsigned time difference between the eruptions occurring in the observations compared to the time of eruption onset in the simulations was found to be ≈5 hrs. It is important to acknowledge that, for all four eruptions that took place along the internal PIL of the ARs, the simulations were able to model the timings of eruption onset with a mean unsigned time difference of ≈7 hrs. Therefore, the simulations were able to successfully reproduce the local evolution for the majority of the ARs in the sample. Antiochos, S.K.: 2013, Astrophys. J. 772, 72. DOI. Bhattacharjee, A., Yuan, Y.: 1995, Astrophys. J. 449, 739. DOI. Bobra, M.G., van Ballegooijen, A.A., DeLuca, E.E.: 2008, Astrophys. J. 672, 1209. DOI. Bobra, M.G., Sun, X., Hoeksema, J.T., Turmon, M., Liu, Y., Hayashi, K., Barnes, G., Leka, K.D.: 2014, Solar Phys. 289, 3549. DOI. Boozer, A.H.: 1986, J. Plasma Phys. 35, 133. DOI. Canou, A., Amari, T.: 2010, Astrophys. J. 715, 1566. DOI. Charbonneau, P.: 2010, Living Rev. Solar Phys. 7, 3. DOI. Charbonneau, P.: 2014, Annu. Rev. Astron. Astrophys. 52, 251. DOI. SunPy Community, Mumford, S.J., Christe, S., Pérez-Suárez, D., Ireland, J., Shih, A.Y., Inglis, A.R., Liedtke, S., Hewett, R.J., Mayer, F., Hughitt, K., Freij, N., Meszaros, T.: 2015, Comput. Sci. Discov. 8, 14009. DOI. Couvidat, S., Schou, J., Hoeksema, J.T., Bogart, R.S., Bush, R.I., Duvall, T.L., Liu, Y., Norton, A.A., Scherrer, P.H.: 2016, Solar Phys. 291, 1887. DOI. De Rosa, M.L., Schrijver, C.J., Barnes, G., Leka, K.D., Lites, B.W., Aschwanden, M.J., Amari, T., Canou, A., McTiernan, J.M., Régnier, S., Thalmann, J.K., Valori, G., Wheatland, M.S., Wiegelmann, T., Cheung, M.C.M., Conlon, P.A., Fuhrmann, M., Inhester, B., Tadesse, T.: 2009, Astrophys. J. 696, 1780. DOI. Gibb, G.P.S.: 2015, The formation and eruption of magnetic flux ropes in solar and stellar coronae. PhD Thesis. http://hdl.handle.net/10023/7069.
Gibb, G.P.S., Mackay, D.H., Green, L.M., Meyer, K.A.: 2014, Astrophys. J. 782, 71. DOI. Jiang, C., Wu, S.T., Feng, X., Hu, Q.: 2014, Astrophys. J. Lett. 786, L16. DOI. Leka, K.D., Canfield, R.C., McClymont, A.N., van Driel-Gesztelyi, L.: 1996, Astrophys. J. 462, 547. DOI. Lemen, J.R., Title, A.M., Akin, D.J., Boerner, P.F., Chou, C., Drake, J.F., Duncan, D.W., Edwards, C.G., Friedlaender, F.M., Heyman, G.F., Hurlburt, N.E., Katz, N.L., Kushner, G.D., Levay, M., Lindgren, R.W., Mathur, D.P., McFeaters, E.L., Mitchell, S., Rehse, R.A., Schrijver, C.J., Springer, L.A., Stern, R.A., Tarbell, T.D., Wuelser, J., Wolfson, C.J., Yanari, C., Bookbinder, J.A., Cheimets, P.N., Caldwell, D., Deluca, E.E., Gates, R., Golub, L., Park, S., Podgorski, W.A., Bush, R.I., Scherrer, P.H., Gummin, M.A., Smith, P., Auker, G., Jerram, P., Pool, P., Soufli, R., Windt, D.L., Beardsley, S., Clapp, M., Lang, J., Waltham, N.: 2012, Solar Phys. 275, 17. DOI. Luoni, M.L., Démoulin, P., Mandrini, C.H., van Driel-Gesztelyi, L.: 2011, Solar Phys. 270, 45. DOI. Mackay, D.H., DeVore, C.R., Antiochos, S.K.: 2014, Astrophys. J. 784, 164. DOI. Mackay, D.H., Green, L.M., van Ballegooijen, A.: 2011, Astrophys. J. 729, 97. DOI. Mackay, D.H., van Ballegooijen, A.A.: 2006a, Astrophys. J. 641, 577. DOI. Mackay, D.H., van Ballegooijen, A.A.: 2006b, Astrophys. J. 642, 1193. DOI. Mackay, D.H., van Ballegooijen, A.A.: 2009, Solar Phys. 260, 321. DOI. Müller, D., Nicula, B., Felix, S., Verstringe, F., Bourgoignie, B., Csillaghy, A., Berghmans, D., Jiggens, P., García-Ortiz, J.P., Ireland, J., Zahniy, S., Fleck, B.: 2017, Astron. Astrophys. 606, A10. DOI. Pagano, P., Mackay, D.H., Yardley, S.L.: 2019a, Astrophys. J. 883, 112. DOI. Pagano, P., Mackay, D.H., Yardley, S.L.: 2019b, Astrophys. J. 886, 81. DOI. Parker, E.N.: 1955, Astrophys. J. 121, 491. DOI. Pesnell, W.D., Thompson, B.J., Chamberlin, P.C.: 2012, Solar Phys. 275, 3. DOI. Pomoell, J., Lumme, E., Kilpua, E.: 2019, Solar Phys. 294, 41. DOI. 
Rodkin, D., Goryaev, F., Pagano, P., Gibb, G., Slemzin, V., Shugay, Y., Veselovsky, I., Mackay, D.H.: 2017, Solar Phys. 292, 90. DOI. Savcheva, A.S., Green, L.M., van Ballegooijen, A.A., DeLuca, E.E.: 2012, Astrophys. J. 759, 105. DOI. Schou, J., Scherrer, P.H., Bush, R.I., Wachter, R., Couvidat, S., Rabello-Soares, M.C., Bogart, R.S., Hoeksema, J.T., Liu, Y., Duvall, T.L., Akin, D.J., Allard, B.A., Miles, J.W., Rairden, R., Shine, R.A., Tarbell, T.D., Title, A.M., Wolfson, C.J., Elmore, D.F., Norton, A.A., Tomczyk, S.: 2012, Solar Phys. 275, 229. DOI. Schrijver, C.J., De Rosa, M.L., Metcalf, T.R., Liu, Y., McTiernan, J., Régnier, S., Valori, G., Wheatland, M.S., Wiegelmann, T.: 2006, Solar Phys. 235, 161. DOI. Spiegel, E.A., Zahn, J.-P.: 1992, Astron. Astrophys. 265, 106. ADS. Strauss, H.R.: 1988, Astrophys. J. 326, 412. DOI. Valori, G., Kliem, B., Keppens, R.: 2005, Astron. Astrophys. 433, 335. DOI. van Ballegooijen, A.A.: 2004, Astrophys. J. 612, 519. DOI. van Ballegooijen, A.A., Mackay, D.H.: 2007, Astrophys. J. 659, 1713. DOI. van Ballegooijen, A.A., Martens, P.C.H.: 1989, Astrophys. J. 343, 971. DOI. van Driel-Gesztelyi, L., Green, L.M.: 2015, Living Rev. Solar Phys. 12, 1. DOI. Wiegelmann, T., Sakurai, T.: 2012, Living Rev. Solar Phys. 9, 5. DOI. Yang, W.-H., Sturrock, P.A., Antiochos, S.K.: 1986, Astrophys. J. 309, 383. DOI. Yardley, S.L., Mackay, D.H., Green, L.M.: 2018b, Astrophys. J. 852, 82. DOI. Yardley, S.L., Green, L.M., Williams, D.R., van Driel-Gesztelyi, L., Valori, G., Dacie, S.: 2016, Astrophys. J. 827, 151. DOI. Yardley, S.L., Green, L.M., van Driel-Gesztelyi, L., Williams, D.R., Mackay, D.H.: 2018a, Astrophys. J. 866, 8. DOI. Yardley, S.L., Savcheva, A., Green, L.M., van Driel-Gesztelyi, L., Long, D., Williams, D.R., Mackay, D.H.: 2019, Astrophys. J. 887, 240. DOI. Zwaan, C.: 1985, Solar Phys. 100, 397. DOI.
The authors would like to thank the SDO/HMI and AIA consortia for the data, and also for being able to browse these data through JHelioviewer (http://jhelioviewer.org, Müller et al. 2017). The analysis in this paper has made use of SunPy, an open-source and free community-developed solar data analysis package written in Python (SunPy Community et al., 2015). S.L.Y. would like to acknowledge STFC for support via the Consolidated Grant SMC1/YST025. D.H.M. would like to thank STFC, the Leverhulme Trust, and the ERC under the Synergy Grant: The Whole Sun, grant agreement no. 810218, for financial support. L.M.G. is thankful to the Royal Society for a University Research Fellowship and to the Leverhulme Trust. Author affiliations: School of Mathematics and Statistics, University of St Andrews, North Haugh, St Andrews, Fife, KY16 9SS, UK (S. L. Yardley and D. H. Mackay); Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK (L. M. Green). Correspondence to S. L. Yardley. The authors declare that they have no conflicts of interest. Supplementary material (11207_2020_1749_MOESM1_ESM.mov): the photospheric magnetic field evolution of AR 11437, where white (black) represents positive (negative) photospheric magnetic field, saturated at ±100 G. (MOV 7.8 MB) Appendix A: Clean-up Processes To analyse the coronal evolution of the 20 bipolar ARs, magnetogram data taken from the SDO/HMI instrument were utilised. Full-disk LoS magnetograms were used from the 720 s data series (hmi.M_720s), which have a pixel size and a noise level of 0.5″ and 10 G, respectively. The number of magnetograms used to study the evolution of each AR varied depending upon the AR's lifetime and the selection criteria.
We apply the following cosine correction before the clean-up procedures are implemented to estimate the radial magnetic field component: $$ B_{R} = \frac{B_{\mathrm{LOS}}}{\cos {\theta } \cos {\phi }}, $$ where \(B_{\mathrm{LOS}}\) is the line-of-sight magnetic field and \(\theta \) and \(\phi \) are expressed in heliocentric coordinates (see Section 2.3 of Yardley et al. (2018a) for further details). Each corrected magnetogram is then differentially rotated to account for the area foreshortening that occurs at large distances from central meridian. Cut-outs of the corrected, de-rotated magnetograms are taken, centred on the AR. This is a relatively simple correction; however, it has been used in previous studies (for example, see Yardley, Mackay, and Green 2018b). The clean-up procedure described in this section is comparable to the procedure used in both Gibb et al. (2014) and Yardley, Mackay, and Green (2018b). The noise level in the raw magnetograms is high, particularly in the early and late stages of AR evolution when the AR is located far from central meridian. Therefore, before the magnetograms are used as the lower boundary conditions of the simulation, a number of clean-up processes are implemented. First, the magnetograms are time-averaged by applying the following Gaussian kernel: $$ C_{i} = \frac{ \sum ^{n}_{j=1} \exp (-[(i-j)/\tau ]^{2}) F_{j} }{\sum ^{n}_{j=1} \exp (-[(i-j)/\tau ]^{2}) }, $$ where \(C_{i}\) is the \(i\)th cleaned frame, with \(i\) taking values from 1 to \(n\), where \(n\) is the number of magnetograms in the sequence. \(F_{j}\) is the \(j\)th raw frame, and \(\tau \) represents the frame separation over which the weighting decreases by \(1/\mathrm{e}\). In this study, the frame separation is set to two, meaning that each cleaned frame is a weighted linear combination of the raw frames, with the two frames before and after the current frame weighted most strongly.
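A minimal sketch (not the authors' code) of the temporal Gaussian averaging above, with the frame separation \(\tau \) set to two as in the study; frames are represented here as flat lists of pixel values rather than full 2-D magnetograms.

```python
import math

# Sketch of the temporal Gaussian averaging described in the text:
# C_i is a weighted average of the raw frames F_j with weights
# w_ij = exp(-((i - j) / tau)**2), normalised by the sum of the weights.
# Frames are flat lists of pixel values; real magnetograms are 2-D arrays.

def clean_frames(raw_frames, tau=2.0):
    n = len(raw_frames)
    npix = len(raw_frames[0])
    cleaned = []
    for i in range(n):
        weights = [math.exp(-((i - j) / tau) ** 2) for j in range(n)]
        norm = sum(weights)
        cleaned.append([
            sum(w * raw_frames[j][p] for j, w in enumerate(weights)) / norm
            for p in range(npix)
        ])
    return cleaned
```

A sequence of identical frames passes through unchanged, which is a quick sanity check that the weights are correctly normalised.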
This procedure removes random noise and retains the large-scale features of the ARs. As previously stated, this study focuses on the large-scale evolution of the AR magnetic field and not on small-scale elements of the quiet Sun. The next step in the clean-up procedure is the removal of small-scale isolated field, pixel by pixel, by evaluating the eight nearest neighbours of each pixel. When fewer than four of the neighbouring pixels have the same sign of magnetic flux, the value of magnetic flux of that pixel is set to zero. Therefore, the pixels at the edge of the magnetogram also have their values set to zero, as they have fewer than four nearest neighbours. In addition, any pixels that have a magnetic flux value below a 25 Mx cm−2 threshold are considered part of the background magnetic field of the quiet Sun and are also set to zero. At this point the user can choose how to place the magnetograms within the box, i.e. the magnetograms can be scaled up/down to fit the computational box or a custom scaling can be applied. In this study, to avoid boundary effects, we rescale the magnetograms to fill 60 – 70% of the computational box (Figure 6). The raw and cleaned magnetograms for AR 11867 taken at 15:59 UT on 2013 October 13, when the region reaches its maximum unsigned magnetic flux. The saturation level of the two magnetograms is ±100 G. The flux-weighted central coordinates of the positive and negative photospheric magnetic polarities are represented by the red and green asterisks, respectively. The last clean-up process is implemented when the top boundary condition in the simulation is set to closed and the magnetograms need to be flux balanced. To flux balance the magnetograms, the signed magnetic flux of each frame is calculated. The number of non-zero pixels in each frame is counted, and the signed magnetic flux is divided by this number to give the imbalanced magnetic flux per pixel, which is then deducted from every non-zero pixel.
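The pixel-level clean-up and flux-balancing steps described above can be sketched as follows. This is a toy implementation, not the authors' code: magnetograms are plain lists of lists in gauss, and only interior pixels are evaluated, so edge pixels are zeroed as in the text.

```python
# Sketch of the pixel-level clean-up steps described above: zeroing pixels
# with fewer than four same-sign neighbours (of the eight nearest), applying
# the 25 Mx cm^-2 background threshold, and balancing the signed flux over
# the remaining non-zero pixels.

def clean_magnetogram(mag, threshold=25.0):
    ny, nx = len(mag), len(mag[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(1, ny - 1):          # edge pixels stay zero
        for x in range(1, nx - 1):
            v = mag[y][x]
            if abs(v) < threshold:
                continue                # background quiet-Sun field
            same = sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0) and mag[y + dy][x + dx] * v > 0)
            if same >= 4:
                out[y][x] = v
    return out

def balance_flux(mag):
    """Subtract the mean imbalanced flux per pixel from every non-zero pixel."""
    nonzero = [(y, x) for y, row in enumerate(mag)
               for x, v in enumerate(row) if v != 0.0]
    if not nonzero:
        return mag
    imbalance = sum(mag[y][x] for y, x in nonzero) / len(nonzero)
    for y, x in nonzero:
        mag[y][x] -= imbalance
    return mag
```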
As the maximum correction is less than 25 Mx cm−2, no pixels change sign during the balancing of magnetic flux. This is the same threshold that is used to set pixels that form part of the background quiet-Sun magnetic field to zero. Appendix B: Gaussian Smoothing In some cases the small-scale magnetic field at the periphery of the AR is removed before the clean-up procedure is applied (Figure 7). This is achieved by using a method similar to that of Yardley et al. (2016). Firstly, a Gaussian filter with a standard deviation (width) of 7 pixel units is applied to smooth the data. Secondly, only pixels where the weighted average of the magnetic flux density of the neighbouring pixels exceeds a 40 G cut-off are selected. Then, the largest identified regions, which make up at least 60% of the selected area, are kept, whereas smaller features at large distances from the AR are discarded. This procedure removes small-scale quiet-Sun features that are not part of the AR, and does not affect the coronal evolution, as we are only interested in the large-scale coronal evolution of the AR. The raw and cleaned magnetograms for simulation frame 35 taken on 2013 October 13, when AR 11867 is at its maximum unsigned magnetic flux value. A Gaussian kernel is used to remove small-scale magnetic field surrounding the AR before the clean-up processes described in Appendix A are applied. For both magnetograms the saturation levels of the photospheric magnetic field are ±100 G. The flux-weighted central coordinates of the positive and negative photospheric magnetic polarities are represented by the red and green asterisks, respectively. Appendix C: Comparison to Vector Magnetic Field Observations The method used to simulate the coronal evolution of our AR sample uses a time series of LoS magnetograms as the photospheric boundary condition.
This boundary condition injects electric currents into the coronal magnetic field, which then evolves through a time series of NLFF fields using the magnetofrictional relaxation process. At no point in the simulation do we constrain the solution using the observed vector magnetic field or coronal observations. Therefore, we allow the boundary evolution to self-consistently produce the horizontal field and, subsequently, the coronal structures. The reproduced coronal structures are therefore due to non-potential effects produced by the horizontal evolution of the LoS magnetic fields, along with any flux emergence or cancellation. To show that our magnetic field at the photosphere is consistent with the observations, and that the simulated coronal structures can be compared with the observed ones, we have included a comparison of our simulated vector field at the boundary with the observed vector field. Figure 8 shows the vector magnetic field components from the simulation on the base compared to the observed vector data for AR 11561, where the comparison is carried out midway through its evolution. To produce the observed vector field components we have used the Space Weather HMI Active Region Patches (SHARPs, Bobra et al. 2014) data that have been projected to the Lambert cylindrical equal-area (CEA) Cartesian coordinate system, i.e. the hmi.sharp_cea_720s series. The figure shows that there is relatively good agreement between the simulated horizontal field components and the observed horizontal field, particularly in the strong-field regions where the signal-to-noise ratio is high. To determine quantitatively whether there is a close correspondence between the horizontal components derived from the base of the simulation and the observed vector components, we will conduct an in-depth comparison in a follow-up study.
This will include a detailed comparison of the sign and distribution of the three magnetic field components for both the vector field of the simulation and the observed vector data. The vertical and horizontal components of the magnetic field of AR 11561 taken from the simulation (top panel) and the corresponding observed vector data (bottom panel). The vertical component of the magnetic field is shown where the positive (negative) photospheric polarities of the AR are represented by the black (white) contours saturated at 500 G. The red arrows represent the magnitude and direction of the horizontal magnetic field components. Yardley, S.L., Mackay, D.H. & Green, L.M. Simulating the Coronal Evolution of Bipolar Active Regions to Investigate the Formation of Flux Ropes. Sol Phys 296, 10 (2021). https://doi.org/10.1007/s11207-020-01749-2
Mathematical Markup Language (MathML) is a mathematical markup language, an application of XML for describing mathematical notations and capturing both its structure and content. It aims at integrating mathematical formulae into World Wide Web pages and other documents. It is a recommendation of the W3C math working group and part of HTML5. MathML 1 was released as a W3C recommendation in April 1998 as the first XML language to be recommended by the W3C. Version 1.01 of the format was released in July 1999 and version 2.0 appeared in February 2001. In October 2003, the second edition of MathML Version 2.0 was published as the final release by the W3C math working group. MathML was originally designed before the finalization of XML namespaces. However it was assigned a namespace immediately after the Namespace Recommendation was completed, and for XML use, the elements should be in the namespace with namespace URI http://www.w3.org/1998/Math/MathML. When MathML is used in HTML (as opposed to XML) this namespace is automatically inferred by the HTML parser and need not be specified in the document. MathML version 3 Version 3 of the MathML specification was released as a W3C Recommendation on 20 October 2010. A recommendation of A MathML for CSS Profile was later released on 7 June 2011;[1] this is a subset of MathML suitable for CSS formatting. Another subset, Strict Content MathML, provides a subset of content MathML with a uniform structure and is designed to be compatible with OpenMath. Other content elements are defined in terms of a transformation to the strict subset.
New content elements include <bind> which associates bound variables (<bvar>) to expressions, for example a summation index. The new <share> element allows structure sharing.[2] The development of MathML 3.0 went through a number of stages. In June 2006 the W3C rechartered the MathML Working Group to produce a MathML 3 Recommendation until February 2008 and in November 2008 extended the charter to April 2010. A sixth Working Draft of the MathML 3 revision was published in June 2009. On 10 August 2010 version 3 graduated to become a "Proposed Recommendation" rather than a draft.[2] The Second Edition of MathML 3.0 was published as a W3C Recommendation on April 10, 2014.[3] Presentation and semantics MathML deals not only with the presentation but also the meaning of formula components (the latter part of MathML is known as "Content MathML"). Because the meaning of the equation is preserved separate from the presentation, how the content is communicated can be left up to the user. For example, web pages with MathML embedded in them can be viewed as normal web pages with many browsers, but visually impaired users can also have the same MathML read to them through the use of screen readers (e.g. using the MathPlayer plugin for Internet Explorer, Opera 9.50 build 9656+ or the Fire Vox extension for Firefox). Presentation MathML Presentation MathML focuses on the display of an equation, and has about 30 elements. The elements' names all begin with m. A Presentation MathML expression is built up out of tokens that are combined using higher-level elements, which control their layout (there are also about 50 attributes, which mainly control fine details). Token elements generally only contain characters (not other elements). They include: <mi>x</mi> – identifiers; <mo>+</mo> – operators; <mn>2</mn> – numbers. <mtext>non zero</mtext> – text. Note however that these token elements may be used as extension points, allowing markup in host languages. 
MathML in HTML5 allows most inline HTML markup in mtext, and <mtext><b>non</b> zero</mtext> is conforming, with the HTML markup being used within the MathML to mark up the embedded text (making the first word bold in this example). These are combined using layout elements, which generally contain only other elements. They include: <mrow> – a horizontal row of items; <msup>, <munderover>, and others – superscripts, limits over and under operators like sums, etc.; <mfrac> – fractions; <msqrt> and <mroot> – roots; <mfenced> – surrounding content with fences, such as parentheses. As usual in HTML and XML, many entities are available for specifying special symbols by name, such as &pi; and &RightArrow;. An interesting feature of MathML is that entities also exist to express normally-invisible operators, such as &InvisibleTimes; for implicit multiplication. They are: U+2061 FUNCTION APPLICATION; U+2062 INVISIBLE TIMES; U+2063 INVISIBLE SEPARATOR; and U+2064 INVISIBLE PLUS. The full specification of MathML entities [1] is closely coordinated with the corresponding specifications for use with HTML and XML [2] in general. Thus, the expression \(ax^{2}+bx+c\) requires two layout elements: one to create the overall horizontal row and one for the superscripted exponent. Including only the layout elements and the (not yet marked up) bare tokens, the structure looks like this: <mrow> a &InvisibleTimes; <msup>x 2</msup> + b &InvisibleTimes; x + c </mrow> However, the individual tokens also have to be identified as identifiers (mi), operators (mo), or numbers (mn). Adding the token markup, the full form ends up as: <mrow> <mi>a</mi> <mo>&InvisibleTimes;</mo> <msup><mi>x</mi><mn>2</mn></msup> <mo>+</mo><mi>b</mi><mo>&InvisibleTimes;</mo><mi>x</mi> <mo>+</mo><mi>c</mi> </mrow> A valid MathML document typically consists of the XML declaration, DOCTYPE declaration, and document element.
The document body then contains MathML expressions, which appear in <math> elements as needed in the document. Often, MathML will be embedded in more general documents, such as HTML, DocBook, or other XML schemas. A complete document that consists of just the MathML example above is shown here: <!DOCTYPE math PUBLIC "-//W3C//DTD MathML 2.0//EN" "http://www.w3.org/Math/DTD/mathml2/mathml2.dtd"> <math xmlns="http://www.w3.org/1998/Math/MathML"> <mi>a</mi> <mo>&InvisibleTimes;</mo> <msup> <mi>x</mi> <mn>2</mn> </msup> <mo>+</mo> <mi>b</mi> <mo>&InvisibleTimes;</mo> <mi>x</mi> <mo>+</mo> <mi>c</mi> </math> Content MathML Content MathML focuses on the semantics, or meaning, of the expression rather than its layout. Central to Content MathML is the <apply> element that represents function application. The function being applied is the first child element under <apply>, and its operands or parameters are the remaining child elements. Content MathML uses only a few attributes. Tokens such as identifiers and numbers are individually marked up, much as for Presentation MathML, but with elements such as ci and cn. Rather than being merely another type of token, operators are represented by specific elements, whose mathematical semantics are known to MathML: times, power, etc. There are over a hundred different elements for different functions and operators (see [3]). For example, <apply><sin/><ci>x</ci></apply> represents \(\sin(x)\) and <apply><plus/><ci>x</ci><cn>5</cn></apply> represents \(x+5\). The elements representing operators and functions are empty elements, because their operands are the other elements under the containing <apply>. The expression \(ax^{2}+bx+c\) could be represented as <math> <apply> <plus/> <apply> <times/> <ci>a</ci> <apply> <power/> <ci>x</ci> <cn>2</cn> </apply> </apply> <apply> <times/> <ci>b</ci> <ci>x</ci> </apply> <ci>c</ci> </apply> </math> Content MathML is nearly isomorphic to expressions in a functional language such as Scheme.
<apply>...</apply> amounts to Scheme's (...), and the many operator and function elements amount to Scheme functions. With this trivial literal transformation, plus un-tagging the individual tokens, the example above becomes: (plus (times a (power x 2)) (times b x) c) This reflects the long-known close relationship between XML element structures, and LISP or Scheme S-expressions.[4][5] Example and comparison to other formats The well-known quadratic formula: \(x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}\) would be marked up using LaTeX syntax like this: x=\frac{-b \pm \sqrt{b^2 - 4ac}}{2a} in troff/eqn like this: x={-b +- sqrt{b sup 2 - 4ac}} over 2a in Apache OpenOffice Math and LibreOffice Math like this (all three are valid): x={-b plusminus sqrt {b^2 - 4 ac}} over {2 a} x={-b ± sqrt {b^2 - 4ac}} over 2a x={-b +- sqrt {b^2 - 4ac}} over 2a in ASCIIMathML like this: x = (-b +- sqrt(b^2 - 4ac)) / (2a) The above equation could be represented in Presentation MathML as an expression tree made up from layout elements like mfrac or msqrt elements: <math mode="display" xmlns="http://www.w3.org/1998/Math/MathML"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mfrac> <mrow> <mo form="prefix">&#x2212;<!-- − --></mo> <mi>b</mi> <mo>&#x00B1;<!-- &PlusMinus; --></mo> <msqrt> <msup> <mi>b</mi> <mn>2</mn> </msup> <mo>&#x2212;<!-- − --></mo> <mn>4</mn> <mo>&#x2062;<!-- &InvisibleTimes; --></mo> <mi>a</mi> <mo>&#x2062;<!-- &InvisibleTimes; --></mo> <mi>c</mi> </msqrt> </mrow> <mrow> <mn>2</mn> <mo>&#x2062;<!-- &InvisibleTimes; --></mo> <mi>a</mi> </mrow> </mfrac> </mrow> <annotation encoding="TeX"> x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} </annotation> <annotation encoding="StarMath 5.0"> x={-b plusminus sqrt {b^2 - 4 ac}} over {2 a} </annotation> </semantics> </math> This example uses the <semantics> element, which can be used to embed a semantic annotation in non-XML format, for example to store the formula in the format used by an equation editor such as StarMath or the markup using LaTeX syntax.
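The functional reading of Content MathML described above can be made concrete with a short sketch (illustrative only, not part of the original article): a minimal evaluator for <apply> trees using Python's standard-library XML parser, handling just plus, times, power, and sin.

```python
import math
import xml.etree.ElementTree as ET

# Illustrative sketch: evaluating a Content MathML <apply> tree, mirroring
# the Scheme analogy above. Only a few operator elements are handled.

OPS = {
    "plus": lambda *a: sum(a),
    "times": lambda *a: math.prod(a),
    "power": lambda base, exp: base ** exp,
    "sin": lambda x: math.sin(x),
}

def evaluate(elem, env):
    tag = elem.tag.split('}')[-1]       # tolerate a namespace prefix
    if tag == "math":
        return evaluate(elem[0], env)
    if tag == "cn":                     # number token
        return float(elem.text)
    if tag == "ci":                     # identifier token, looked up in env
        return env[elem.text.strip()]
    if tag == "apply":                  # first child is the operator element
        op = elem[0].tag.split('}')[-1]
        args = [evaluate(child, env) for child in elem[1:]]
        return OPS[op](*args)
    raise ValueError("unsupported element: " + tag)

# The expression a*x**2 + b*x + c in Content MathML:
src = """<math>
  <apply><plus/>
    <apply><times/><ci>a</ci><apply><power/><ci>x</ci><cn>2</cn></apply></apply>
    <apply><times/><ci>b</ci><ci>x</ci></apply>
    <ci>c</ci>
  </apply>
</math>"""
value = evaluate(ET.fromstring(src), {"a": 1.0, "b": 2.0, "c": 3.0, "x": 2.0})
```

With a = 1, b = 2, c = 3 and x = 2 this evaluates the quadratic to 11, exactly as the Scheme form (plus (times a (power x 2)) (times b x) c) would.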
Although less compact than TeX, the XML structuring promises to make it widely usable, allows for instant display in applications such as Web browsers, and facilitates a straightforward interpretation of its meaning in mathematical software products.[citation needed] MathML is not intended to be written or edited directly by humans.[6] Embedding MathML in HTML/XHTML files MathML, being XML, can be embedded inside other XML files such as XHTML files using XML namespaces. Recent browsers such as Firefox 3+ and Opera 9.6+ (support incomplete) can display Presentation MathML embedded in XHTML. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN" "http://www.w3.org/Math/DTD/mathml2/xhtml-math11-f.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <title>Example of MathML embedded in an XHTML file</title> <meta name="description" content="Example of MathML embedded in an XHTML file"/> <h1>Example of MathML embedded in an XHTML file</h1> The area of a circle is <math xmlns="http://www.w3.org/1998/Math/MathML"> <mi>&#x03C0;<!-- π --></mi> <mo>&#x2062;<!-- &InvisibleTimes; --></mo> <msup> <mi>r</mi> <mn>2</mn> </msup> </math>. A rendering of the formula for a circle in MathML+XHTML using Firefox 22 on Mac OS X Inline MathML is also supported in HTML5 files in the current versions of WebKit (Safari) and Gecko (Firefox). There is no need to specify namespaces as in XHTML.
<!DOCTYPE html> <html> <head> <title>Example of MathML embedded in an HTML5 file</title> </head> <body> <h1>Example of MathML embedded in an HTML5 file</h1> The area of a circle is <math> <mi>&pi;</mi> <mo>&InvisibleTimes;</mo> <msup> <mi>r</mi> <mn>2</mn> </msup> </math>. </body> </html> Of the major web browsers, Gecko-based browsers (e.g., Firefox and Camino) have the most complete native support for MathML.[7][8] While the WebKit layout engine has a development version of MathML,[9] this feature is only available in version 5.1 and higher of Safari[10] and in Chrome 24,[11][12] but not in later versions of Chrome.[13] Google removed support for MathML, claiming that architectural security issues and low usage did not justify the engineering time.[14] As of this writing, the WebKit/Safari implementation has numerous bugs.[15] Opera, between versions 9.5 and 12, supports MathML for CSS profile,[16][17] but is unable to position diacritical marks properly.[18] Prior to version 9.5 it required User JavaScript or custom stylesheets to emulate MathML support.[19] Starting with Opera 14, Opera dropped support for MathML by switching to the Chromium 25 engine.[20] Internet Explorer does not support MathML natively. Support for IE6 through IE9 can be added by installing the MathPlayer plugin.[21] IE10 has some crashing bugs with MathPlayer, and Microsoft decided to completely disable in IE11 the binary plug-in interface that MathPlayer needs.[22] MathPlayer has a license that may limit its use or distribution in commercial webpages and software. Using or distributing the MathPlayer plugin to display HTML content via the WebBrowser control in commercial software may also be forbidden by this license. The KHTML-based Konqueror currently does not provide support for MathML.[23] The quality of rendering of MathML in a browser depends on the installed fonts. The STIX Fonts project has released a comprehensive set of mathematical fonts under an open license.
The Cambria Math font supplied with Microsoft Windows has slightly more limited support.[24] According to a member of the MathJax team, none of the major browser makers paid any of their developers for any MathML-rendering work; whatever support exists is overwhelmingly the result of unpaid volunteer time/work.[25] Some editors with native MathML support (including copy and paste of MathML) are MathFlow and MathType from Design Science, MathMagic, Publicon from Wolfram Research, and WIRIS.[26] A full list of MathML editors is available at the W3C.[27] MathML is also supported by major office products such as Apache OpenOffice (via OpenOffice Math), LibreOffice (via LibreOffice Math), Calligra Suite (formerly KOffice), and MS Office 2007, as well as mathematical software products such as Mathematica, Maple, and the Windows version of the Casio ClassPad 300. The W3C Browser/Editor Amaya can also be mentioned as a WYSIWYG MathML-as-is editor. Firemath, an add-on for Firefox, provides a WYSIWYG MathML editor. Most editors will only produce presentation MathML. The MathDox formula editor is an OpenMath editor also providing presentation and content MathML. Formulator MathML Weaver uses WYSIWYG style to edit Presentation, Content and mixed markups of MathML. Web Equation can convert handwriting to MathML. Windows 7 has a built-in tool called Math Input Panel. It converts handwriting to MathML.[28] (Unlike the Microsoft Office suite, the Math Input Panel does not use the OMML format, but Office applications can convert/paste from MathML into their preferred internal format.) The underlying technology is also exposed for use in other applications as an ActiveX control called Math Input Control.[29] Several utilities for converting to and from MathML are available.
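As a toy illustration of what such converters do (this is not the implementation of any tool named here), a few lines of Python can map a small ASCIIMathML-like linear syntax onto Presentation MathML token elements.

```python
import re

# Toy sketch of a linear-syntax-to-MathML tokenizer. Only numbers,
# single-letter identifiers, and the operators + - = are handled; real
# converters such as ASCIIMathML or LaTeXML do far more (layout elements,
# fractions, roots, precedence, and so on).

TOKEN = re.compile(r"\d+|[a-zA-Z]|[+\-=]")

def to_mathml(expr):
    parts = []
    for tok in TOKEN.findall(expr):
        if tok.isdigit():
            parts.append("<mn>%s</mn>" % tok)   # number token
        elif tok.isalpha():
            parts.append("<mi>%s</mi>" % tok)   # identifier token
        else:
            parts.append("<mo>%s</mo>" % tok)   # operator token
    return "<mrow>" + "".join(parts) + "</mrow>"
```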
W3.org maintains a list of MathML-related software for download.[30]

Web conversion

ASCIIMathML[31] provides a JavaScript library to rewrite a convenient Wiki-like text syntax used inline in web pages into MathML on the fly; it works in Gecko-based browsers and in Internet Explorer with MathPlayer. LaTeXMathML[32] does the same for (a subset of) the standard LaTeX mathematical syntax. ASCIIMathML syntax would also be quite familiar to anyone used to electronic scientific calculators. Equation Server for .NET from soft4science can be used on the server side (ASP.NET) for TeX-Math[35] (a subset of LaTeX math syntax) to MathML conversion. It can also create bitmap images (PNG, JPG, GIF, etc.) from TeX-Math or MathML input. LaTeXML is a Perl utility to convert LaTeX documents to HTML, optionally either using MathML or converting mathematical expressions to bitmap images.

Support for software developers

Support of the MathML format accelerates software application development in such various topics as computer-aided education (distance learning, electronic textbooks, and other classroom materials); automated creation of attractive reports; computer algebra systems; authoring, training, and publishing tools (both web- and desktop-oriented); and many other applications for mathematics, science, business, economics, etc. Several software vendors offer a component edition of their MathML editors, thus providing an easy way for software developers to insert mathematics rendering/editing/processing functionality in their applications. For example, Formulator ActiveX Control from Hermitech Laboratory can be incorporated into an application as a MathML-as-is editor, and Design Science offers a toolkit for building web pages that include interactive math (MathFlow Developers Suite[37]). Another standard called OpenMath, which has been designed (largely by the same people who devised Content MathML) more specifically for storing formulae semantically, can also be used to complement MathML.
OpenMath data can be embedded in MathML using the <annotation-xml encoding="OpenMath"> element. OpenMath content dictionaries can be used to define the meaning of <csymbol> elements. The following would define P1(x) to be the first Legendre polynomial:

<csymbol encoding="OpenMath" definitionURL="http://www.openmath.org/cd/contrib/cd/orthpoly1.xhtml#legendreP">
<msub><mi>P</mi><mn>1</mn></msub>
</csymbol>

The OMDoc format has been created for markup of larger mathematical structures than formulae, from statements like definitions, theorems, proofs, and examples, to theories and textbooks. Formulae in OMDoc documents can be written either in Content MathML or in OpenMath; for presentation, they are converted to Presentation MathML. The ISO/IEC standard Office Open XML (OOXML) defines a different XML math syntax, derived from Microsoft Office products. However, it is partially compatible[38] through relatively simple XSL Transformations.

See also:
- List of document markup languages
- Comparison of document markup languages
- Formula editors
- LaTeX2HTML

References:
- Mathematical Markup Language Version 3.0, W3C Recommendation. W3.org. Retrieved on 9 May 2012.
- MathML Version 3.0 2nd Edition. W3.org. Retrieved on 8 July 2014.
- Steven DeRose. The SGML FAQ Book: Understanding the Relationship of SGML and XML. Kluwer Academic Publishers, 1997. ISBN 978-0-7923-9943-8.
- MathML – The Opera MathML blog. My.opera.com (1 November 2007). Retrieved on 9 May 2012.
- UserJS for MathML 2.0. My.opera.com. Retrieved on 9 May 2012.
- WIRIS editor page describing the use of MathML. Wiris.com. Retrieved on 9 May 2012.
- MathML Software – Editors at W3C. W3.org (24 April 2012). Retrieved on 9 May 2012.
- ASCIIMathML: Math on the web for everyone. chapman.edu. Retrieved on 9 May 2012.
- LaTeXMathML: a dynamic LaTeX mathematics to MathML converter. Maths.nottingham.ac.uk. Retrieved on 9 May 2012.
- MathJax MathML Support. Mathjax.org. Retrieved on 9 May 2012.
- TeX-Math. Retrieved on 9 May 2012.
- jqMath – Put Math on the Web. Mathscribe.com. Retrieved on 9 May 2012.
- MathFlow. Dessci.com. Retrieved on 9 May 2012.

External links:
- W3C Recommendation: Mathematical Markup Language (MathML) 1.01 Specification
- W3C Recommendation: Mathematical Markup Language (MathML) Version 2.0 (Second Edition)
- W3C Recommendation: Mathematical Markup Language (MathML) Version 3.0 (Third Edition)
- W3C Math Home: contains the specifications, a FAQ, and a list of supporting software.
A formula for the partition function that "counts" Andrew Sills (Georgia Southern University) Abstract: A partition of an integer n is a representation of n as a sum of positive integers where the order of the summands is considered irrelevant. Thus we see that there are five partitions of the integer 4, namely 4, 3+1, 2+2, 2+1+1, 1+1+1+1. The partition function p(n) denotes the number of partitions of n. Thus p(4) = 5. The first exact formula for p(n) was given by Hardy and Ramanujan in 1918. Twenty years later, Hans Rademacher improved the Hardy-Ramanujan formula to give an infinite series that converges to p(n). The Hardy-Ramanujan-Rademacher series is revered as one of the truly great accomplishments in the field of analytic number theory. In 2011, Ken Ono and Jan Bruinier surprised everyone by announcing a new formula which attains p(n) by summing a finite number of complex numbers which arise in connection with the multiset of algebraic numbers that are the union of Galois orbits for the discriminant -24n + 1 ring class field. Thus the known formulas for p(n) involve deep mathematics, and are by no means "combinatorial" in the sense that they involve summing a finite or infinite number of complex numbers to obtain the correct (positive integer) value. In this talk, I will show a new formula for the partition function as a multisum of positive integers, each term of which actually counts a certain class of partitions, and thus appears to be the first truly combinatorial formula for p(n). The idea behind the formula is due to Yuriy Choliy, and the work was completed in collaboration with him. We will further examine a new way to approximate p(n) using a class of polynomials with rational coefficients, and observe this approximation is very close to that of using the initial term of the Rademacher series. The talk will be accessible to students as well as faculty, and anyone interested is encouraged to attend! 
Entanglement branes in a two-dimensional string theory Gabriel Wong (Virginia Physics) Abstract: There is an emerging viewpoint that classical spacetime emerges from highly entangled states of more fundamental constituents. In the context of AdS/CFT, these fundamental constituents are strings, with a dual description as a large-N gauge theory. To understand entanglement in string theory, we consider the simpler context of two-dimensional large-N Yang-Mills theory, and its dual string theory description due to Gross and Taylor. We will show how entanglement in the gauge theory is described in terms of the string theory as thermal entropy of open strings whose endpoints are anchored on a stretched entangling surface which we call an entanglement brane.

Compressed Learning II Marius Junge (UIUC) Abstract: Part II

MacMahon's partial fractions Abstract: A. Cayley used ordinary partial fraction decompositions of $1/[(1-x)(1-x^2)\ldots(1-x^m)]$ to obtain direct formulas for the number of partitions of $n$ into at most $m$ parts for several small values of $m$. No pattern for general $m$ can be discerned from these, and in particular the rational coefficients that appear in the partial fraction decomposition become quite cumbersome for even moderately sized $m$. Later, MacMahon gave a decomposition of $1/[(1-x)(1-x^2)\ldots(1-x^m)]$ into what he called "partial fractions of a new and special kind" in which the coefficients are "easily calculable numbers" and the sum is indexed by the partitions of $m$. While MacMahon derived his "new and special" partial fractions using "combinatory analysis," the aim of this talk is to give a fully combinatorial explanation of MacMahon's decomposition. In particular, we will observe a natural interplay between partitions of $n$ into at most $m$ parts and weak compositions of $n$ with $m$ parts.
Mathematics Colloquium: Tondeur Lectures in Mathematics Operads from TQFTs Ulrike Tillmann (Oxford University) Abstract: Manifolds give rise to interesting operads, and in particular TQFTs define algebras over these operads. In the case of Atiyah's 1+1 dimensional theories these algebras are well-known to correspond to certain algebras. Surprisingly, independent of the dimension of the underlying manifolds, in the topologically enriched setting the manifold operads detect infinite loop spaces. We will report on joint work with Basterra, Bobkova, Ponto, Yeakel. The Tondeur Lectures in Mathematics will be held April 25-27, 2017. A reception will be held following the first lecture from 5-6 pm April 25 in 239 Altgeld Hall.
Propensity score interval matching: using bootstrap confidence intervals for accommodating estimation errors of propensity scores

Wei Pan1 & Haiyan Bai2

Propensity score methods have become a popular tool for reducing selection bias in making causal inference from observational studies in medical research. Propensity score matching, a key component of propensity score methods, normally matches units based on the distance between point estimates of the propensity scores. The problem with this technique is that it is difficult to establish a sensible criterion for evaluating the closeness of matched units without knowing the estimation errors of the propensity scores. The present study introduces interval matching, which uses bootstrap confidence intervals to accommodate estimation errors of propensity scores. In interval matching, if the confidence interval of a unit in the treatment group overlaps with that of one or more units in the comparison group, they are considered as matched units. The procedure of interval matching is illustrated in an empirical example using a real-life dataset from the Nursing Home Compare, a national survey conducted by the Centers for Medicare and Medicaid Services. The empirical example provided promising evidence that interval matching reduced more selection bias than did commonly used matching methods, including the rival method, caliper matching. Interval matching's approach is methodologically more sound than that of its competing matching methods because interval matching develops a more "scientific" criterion for matching units using confidence intervals. Interval matching is a promising alternative tool for reducing selection bias in making causal inference from observational studies, and it is especially useful in secondary data analysis on national databases such as the Centers for Medicare and Medicaid Services data.
Observational studies are common in medical research because of practical or ethical barriers to random assignment of units (e.g., patients) into treatment conditions (e.g., treatment vs. comparison); consequently, observational studies likely yield results with limited validity for causal inference due to selection bias resulting from non-randomization. To reduce selection bias, Rosenbaum and Rubin [1] proposed propensity score methods for balancing the distributions of observed covariates between treatment conditions and, therefore, approximating a situation that is normally achieved through randomization. A propensity score is defined as the probability of a unit being assigned to the treatment group [1]. Propensity score methods normally comprise four major steps [2]:

1. Estimate a propensity score for each unit using a logistic regression of treatment conditions on covariates or other propensity score estimation methods [2, 3];
2. Match each unit in the treatment group with one or more units in the comparison group based on the closest distance between their propensity scores;
3. Evaluate the matching quality in terms of how much selection bias is reduced; and
4. Conduct the intended outcome analysis on the matched data or on the original data with propensity score adjustment or weighting.

Although propensity score methods have become increasingly popular in medical research over the past three decades as an effective tool for reducing selection bias in making causal inference based on observational data, propensity score matching (PSM), as a crucial step in propensity score methods, still has limitations [2]. For example, in the existing PSM techniques, matching is done primarily based on the distance between point estimates of propensity scores, and thus, it is difficult to establish a meaningful criterion to evaluate the closeness of the matched units without knowing the estimation errors (or standard errors) of the estimated propensity scores.
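Step 1 above (propensity score estimation) can be condensed into a few lines. The sketch below fits a logistic regression by plain gradient ascent and returns propensity scores and their logits; it is purely illustrative (the paper fit its models in SAS), and all function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def estimate_propensity_logits(X, t, lr=0.1, n_iter=2000):
    """Minimal logistic-regression sketch: estimate P(T=1 | X) for each
    unit by gradient ascent on the log-likelihood, then return both the
    propensity scores p and their logits l(X) = ln[p / (1 - p)]."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])      # add an intercept column
    beta = np.zeros(k + 1)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))   # current fitted probabilities
        beta += lr * Xd.T @ (t - p) / n        # gradient ascent step
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))
    logit = np.log(p / (1.0 - p))              # logit used for matching
    return p, logit

# toy data: one covariate that shifts the treatment probability
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
t = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
p, logit = estimate_propensity_logits(X, t)
```

In practice one would use an established routine (e.g., SAS PROC LOGISTIC or an equivalent library) rather than this hand-rolled fit; the point is only that step 1 maps covariates and treatment indicators to a logit per unit.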
Previously, Cochran and Rubin [4] proposed caliper matching, which uses a caliper band (e.g., a pre-specified distance between propensity scores) to avoid "bad" matches that are not close enough. Unfortunately, a caliper band is expressed as a proportion of the pooled standard deviation of propensity scores across all the units, and therefore, it is unit-invariant; that is, a caliper band takes the same value for all the units. Consequently, a caliper band cannot gauge the unit-specific standard error of the estimated propensity score for each individual unit. The purpose of the present study was to extend caliper matching to a new matching technique, interval matching, by using unit-specific bootstrap confidence intervals (CIs) [5] for gauging the standard error of the estimated propensity score for each unit. In interval matching, if the confidence interval of a unit in the treatment group overlaps with that of one or more units in the comparison group, they are considered as matched units. In the present study, the procedure of interval matching is illustrated in an empirical example using a real-life sample from a publicly available database of the Nursing Home Compare [6], a national survey conducted by the Centers for Medicare and Medicaid Services (CMS) in the United States.

PSM assumptions

Suppose one has N units. In addition to a response value Y_i, each of the N units has a covariate value vector X_i = (X_{i1}, …, X_{iK})′, where i = 1, …, N, and K is the number of covariates. Let T_i be the treatment condition: T_i = 1 indicates that unit i is in the treatment group and T_i = 0 the comparison group. Rosenbaum and Rubin [1] defined a propensity score for unit i as the probability of the unit being assigned to the treatment group, conditional on the covariate vector X_i; that is,

$$ p(\mathbf{X}_i) = \Pr(T_i = 1 \mid \mathbf{X}_i). $$

PSM is based on the following two strong ignorability assumptions in treatment assignment [1]: (1) (Y_{1i}, Y_{0i}) ⊥ T_i | X_i; and (2) 0 < p(X_i) < 1. The first assumption states that treatment assignment T_i and response (Y_{1i}, Y_{0i}) are conditionally independent, given X_i; the second ensures a common support between the treatment and comparison groups. Rosenbaum and Rubin [1] further demonstrated in their Theorem 3 that ignorability conditional on X_i implies ignorability conditional on p(X_i); that is,

$$ (Y_{1i}, Y_{0i}) \perp T_i \mid \mathbf{X}_i \Rightarrow (Y_{1i}, Y_{0i}) \perp T_i \mid p(\mathbf{X}_i). $$

Thus, under the assumptions of strong ignorability in treatment assignment, if a unit in the treatment group and a corresponding matched unit in the comparison group have the same propensity score, the two matched units will have, in probability, the same value of the covariate vector X_i. Therefore, outcome analysis on the matched data after matching tends to produce unbiased estimates of treatment effects due to reduced selection bias through balancing the distributions of observed covariates between the treatment and comparison groups [1, 2, 7]. In practice, the logit of the propensity score, l(X_i) = ln{p(X_i)/[1 − p(X_i)]}, rather than the propensity score p(X_i) itself, is commonly used because l(X_i) has a better property of normality than does p(X_i) [1].

PSM methods

The basis of PSM is nearest neighbor matching [8], which matches unit i in the treatment group with unit j in the comparison group with the closest distance between the two units' logits of their propensity scores, expressed as follows:

$$ d(i,j) = \min_j \{ |l(\mathbf{X}_i) - l(\mathbf{X}_j)| \}. $$

Alternatively, caliper matching [4] matches unit i in the treatment group with unit j in the comparison group within a pre-set caliper band b; that is,

$$ d(i,j) = \min_j \{ |l(\mathbf{X}_i) - l(\mathbf{X}_j)| < b \}. $$

Based on Cochran and Rubin's work [4], Rosenbaum and Rubin [8] recommend that b equal 0.25 of the pooled standard deviation (SD) of the propensity scores. Austin [9] further asserted that b = 0.20 × SD of the propensity scores is the optimal caliper bandwidth. Correspondingly, Mahalanobis metric matching (or Mahalanobis metric matching including the propensity score) and Mahalanobis caliper matching (or Mahalanobis metric matching within a propensity score caliper) [8] are two additional matching techniques similar to nearest neighbor matching and caliper matching, respectively, but they use a different distance measure. In Mahalanobis metric matching, unit i in the treatment group is matched with unit j in the comparison group with the closest Mahalanobis distance, measured as follows:

$$ d(i,j) = \min_j \{ D_{ij} \}, $$

where D_{ij} = (Z_i − Z_j)′ S^{−1} (Z_i − Z_j), Z_• (• = i or j) is a new vector (X_•, l(X_•)), and S is the sample variance-covariance matrix of the vector for the comparison group. Mahalanobis caliper matching is a variant of Mahalanobis metric matching that uses

$$ d(i,j) = \min_j \{ D_{ij} < b \}, $$

where the selection of the caliper band b is the same as in caliper matching. Data reduction after matching is a common and inevitable phenomenon in PSM. Loss of data in the comparison group seems to be a problem, but what we lose are unmatched cases that are assumed to potentially cause selection bias, and therefore, those unmatched units would have a negative impact on estimation of treatment effects. The matched data, though they may have a smaller sample size, will produce more valid (or less biased) estimates than do the original data.
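A greedy 1:1 implementation of nearest neighbor matching, with caliper matching as the special case of a distance cutoff b, might look like the sketch below. This is an illustration only (the analyses in the paper used a SAS macro and the R package MatchIt); the function name and tie-breaking details are assumptions.

```python
import numpy as np

def greedy_match(logit_t, logit_c, caliper=None):
    """Greedy 1:1 matching on the logit of the propensity score.
    With caliper=None this is nearest neighbor matching; with a caliper
    band b, candidate pairs farther apart than b are rejected."""
    available = set(range(len(logit_c)))   # unmatched comparison units
    pairs = {}
    for i, li in enumerate(logit_t):
        if not available:
            break
        # nearest available comparison unit by |l(X_i) - l(X_j)|
        j = min(available, key=lambda j: abs(li - logit_c[j]))
        if caliper is None or abs(li - logit_c[j]) < caliper:
            pairs[i] = j
            available.remove(j)            # match without replacement
    return pairs

logit_t = np.array([0.0, 1.0])
logit_c = np.array([2.0, 0.1, 0.9])
print(greedy_match(logit_t, logit_c))                 # {0: 1, 1: 2}
print(greedy_match(logit_t, logit_c, caliper=0.5))    # {0: 1, 1: 2}
```

Optimal matching would instead minimize the total distance over all pairs at once, which is why it tends to yield smaller overall within-pair distances than this greedy pass.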
It is true that if we have small samples, which is not uncommon in medical research, PSM may not be applicable in such situations; however, PSM is particularly useful in secondary data analysis on national databases such as the CMS data.

PSM algorithms

All the aforementioned PSM methods can be implemented by using either a greedy matching or an optimal matching algorithm [10]. Both matching algorithms usually produce similar matched data when the size of the comparison group is large, whereas optimal matching gives rise to smaller overall distances within matched units [11, 12]. All the matching techniques, whether using greedy matching or optimal matching, are based on the distance between point estimates of propensity scores. The problem with this approach is that it is difficult to establish a meaningful criterion to evaluate the closeness of the matched units without knowing the standard errors of the estimated unit-specific propensity scores. Simply put, without knowing the standard errors of l(X_i) and l(X_j), we do not know whether l(X_j) in the comparison group is the best matched score for l(X_i) in the treatment group. In other words, a score a little smaller than l(X_j) might be a better match for l(X_i); or, conversely, l(X_j) might match better with a score a little larger than l(X_i). Although caliper matching, one of the most effective matching methods [13–15], uses a caliper band to avoid "bad" matches, the caliper band is fixed (or unit-invariant) and cannot capture the unit-specific standard error of the estimated propensity score for each unit. Therefore, a new matching technique is needed for gauging the standard errors of propensity scores.

Interval matching

Interval matching extends caliper matching to accommodate the estimation error (or standard error) of the estimated propensity score by establishing a CI of the estimated propensity score for each unit.
In interval matching, if the CI of a unit in the treatment group overlaps with that of one or more units in the comparison group, they are considered as matched units. Because the true distribution of propensity scores is unknown, the bootstrap [5] is utilized for obtaining a unit-specific CI for each unit. The bootstrap is a statistical method for assessing the accuracy (e.g., standard errors and CIs) of sample estimates of population parameters, based on the empirical distribution of sample estimates from random resamples of a given sample whose distribution is unknown. Let {X_1, …, X_N} be a random sample of size N from an unknown distribution F, and let θ(F) be a parameter of interest. The specific procedure of the bootstrap for computing a CI of the parameter estimate, [\(\hat{\theta}_{\alpha/2}\)(X_1, …, X_N), \(\hat{\theta}_{1-\alpha/2}\)(X_1, …, X_N)], where (1 − α) is the confidence level, consists of the following four steps:

1. Obtain a bootstrap sample {X_1*, …, X_N*} that is randomly resampled with replacement from the empirical distribution F_N represented by the original sample {X_1, …, X_N};
2. Calculate the parameter estimate \(\hat{\theta}\)(X_1*, …, X_N*) for the quantity θ(F_N) = θ(X_1, …, X_N);
3. Repeat the same independent resampling-calculating scheme B times (typically 500 times), resulting in B bootstrap estimates \(\hat{\theta}\)(X_1*(b), …, X_N*(b)), b = 1, …, B, which constitute an empirical distribution (or sampling distribution) of the estimate \(\hat{\theta}\)(X_1, …, X_N); and
4. Obtain the estimated CI of the parameter estimate, [\(\hat{\theta}_{\alpha/2}\)(X_1, …, X_N), \(\hat{\theta}_{1-\alpha/2}\)(X_1, …, X_N)], by computing the (α/2)th and (1 − α/2)th percentiles of the sampling distribution, \(\hat{\theta}_{\alpha/2}\)(X_1*, …, X_N*) and \(\hat{\theta}_{1-\alpha/2}\)(X_1*, …, X_N*).

To obtain the bootstrap CIs for interval matching, one can simply follow the steps described above.
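The four steps above can be condensed into a single helper. The sketch below computes a percentile bootstrap CI for a generic statistic (a sample mean here), with B = 500 as in the paper and alpha = 0.32 giving a 68 % CI; all names are illustrative.

```python
import numpy as np

def bootstrap_ci(x, stat=np.mean, B=500, alpha=0.32, seed=0):
    """Percentile bootstrap CI following the four steps in the text:
    (1) resample with replacement, (2) compute the statistic,
    (3) repeat B times, (4) take the (alpha/2) and (1 - alpha/2)
    percentiles of the bootstrap sampling distribution."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.array([stat(x[rng.integers(0, n, n)]) for _ in range(B)])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

x = np.random.default_rng(1).normal(loc=5.0, size=100)
lo, hi = bootstrap_ci(x)   # 68% CI for the mean of x
```

For interval matching, the same percentile logic is applied per unit to the bootstrap distribution of that unit's propensity score logit rather than to a single scalar parameter.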
First, conduct the bootstrap resampling B times on units in the sample data (T, X), where T is the indicator of the treatment conditions and X is the covariate value matrix (X_1, …, X_N)′, resulting in B bootstrap samples (T(b), X(b)), where X(b) = (X_1*(b), …, X_N*(b))′, b = 1, …, B. Second, a logistic regression (or other propensity score estimation model) is repeatedly applied to each of the B bootstrap samples, resulting in B propensity scores for each unit i (i = 1, …, N): p(X_i*(1)), …, p(X_i*(B)); then, their logits, l(X_i*(1)), …, l(X_i*(B)), are calculated. Last, for each unit i, a CI at a certain confidence level (e.g., a 68 % CI) is obtained by calculating the corresponding percentiles of the sampling distribution of the logits of the B bootstrap propensity scores. Specifically, an estimated bootstrap 68 % CI for the logit of the propensity score of unit i would be [l_{.16}(X_i*), l_{.84}(X_i*)] (see Fig. 1 for an illustration).

Fig. 1 The procedure of obtaining bootstrap 68 % CIs of the logit of propensity scores

Once a CI of the estimate of the logit of the propensity score is obtained for each unit, interval matching can be conducted by examining whether the CI for a unit in the treatment group overlaps with that for one or more units in the comparison group. In other words, if the two CIs overlap, that is,

$$ [l_{.16}(\mathbf{X}_i*), l_{.84}(\mathbf{X}_i*)] \cap [l_{.16}(\mathbf{X}_j*), l_{.84}(\mathbf{X}_j*)] \ne \varnothing, $$

the two units are taken as matched units. In practice, one can do either 1:1 or 1:K interval matching. In 1:1 interval matching, one takes only the one unit that has the closest distance, as defined by the matching method (e.g., Equation 3 for nearest neighbor matching and Equation 6 for Mahalanobis caliper matching), between the logits of the propensity scores among all the units in the comparison group whose CIs overlap with that of the unit in the treatment group.
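Interval matching itself then reduces to an overlap test between per-unit CIs: two intervals [a, b] and [c, d] overlap exactly when a <= d and c <= b. The 1:1 sketch below breaks ties among overlapping comparison units by the distance between interval midpoints, which is a simplifying assumption (the paper instead uses the distance defined by the accompanying matching method); the function name is illustrative.

```python
def interval_match(ci_t, ci_c):
    """1:1 interval matching sketch: match treatment unit i to the
    comparison unit whose CI overlaps i's CI and whose midpoint is
    closest; matching is without replacement."""
    available = set(range(len(ci_c)))
    pairs = {}
    for i, (lo_i, hi_i) in enumerate(ci_t):
        # comparison units whose CIs overlap [lo_i, hi_i]
        overlaps = [j for j in available
                    if ci_c[j][0] <= hi_i and lo_i <= ci_c[j][1]]
        if overlaps:
            mid_i = (lo_i + hi_i) / 2
            j = min(overlaps,
                    key=lambda j: abs(mid_i - (ci_c[j][0] + ci_c[j][1]) / 2))
            pairs[i] = j
            available.remove(j)
    return pairs

ci_t = [(0.0, 0.4)]
ci_c = [(0.3, 0.7), (0.8, 1.2)]
print(interval_match(ci_t, ci_c))   # {0: 0}: only the first CI overlaps
```

A 1:K variant would keep the K closest overlapping units instead of only the single closest one.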
If there are two or more units in the comparison group within the overlap having the same closest distance, the program will randomly select one as the matched unit. In 1:K interval matching, one can simply take the K closest units in the comparison group whose CIs overlap with that of the unit in the treatment group. It is worth noting that using the logit of the propensity score l(X_i) is particularly important in interval matching because the distribution of the logit l(X_i) is more symmetric than that of the propensity score p(X_i); therefore, interval matching based on the logit l(X_i) will be more balanced in terms of matching from both sides (left or right) of the distribution of the logit l(X_i).

The procedure of interval matching is illustrated in an empirical example that stemmed from Lutfiyya, Gessert, and Lipsky's comparative study [16]. They compared nursing home quality between rural and urban facilities using the CMS Nursing Home Compare data in 2010 on the past performance of all Medicare- and Medicaid-certified nursing homes in the United States [6]. The data were downloaded from the CMS Nursing Home Compare Website for more than 10,000 nursing homes, with the geographical location (rural vs. urban) information extracted from the 2003 rural–urban county continuum codes developed by the Economic Research Service of the United States Department of Agriculture [17]. Quality ratings on nursing home performance were measured on three domains: health inspection, staffing, and quality measures [6]. An overall rating was also computed as a weighted average of the three domains.
Lutfiyya, Gessert, and Lipsky [16] concluded that rural nursing home quality was not comparable to that of urban nursing homes, with mixed findings: rural nursing homes had significantly higher quality ratings on the overall rating (p < .001) and the health inspections rating (p < .001) than did urban nursing homes, but a significantly lower rating on the quality measures rating (p < .001); there was no significant difference in the nursing staffing rating (p = .480) between rural and urban nursing homes. The problem in Lutfiyya, Gessert, and Lipsky's study [16] is that the geographical location (rural vs. urban) of nursing homes was not randomly assigned, and consequently, unbalanced background characteristics of nursing homes created potential selection bias between rural and urban nursing homes. Propensity score methods are an appropriate technique for dealing with this selection bias problem in such an observational study. For illustration purposes only, the data used in this empirical example were a 50 % random sample from the same publicly available database, the CMS Nursing Home Compare in 2010. The sample data consisted of a total of N = 6,317 nursing homes (nR = 1,990 rural nursing homes and nU = 4,327 urban nursing homes) with 74 covariates of the ownership and size of nursing homes, the qualification of nursing staff, and safety measures (see Additional file 1 for a full list of the 74 covariates). The 74 covariates were hypothesized to be related to the quality ratings and/or group assignment and were thus all included in this empirical example. Due to the scope of this example and the space limit, general guidelines on covariate selection are not discussed here but are available elsewhere [18].
It is also worth noting that because the purpose of this example is to illustrate the procedure of interval matching, replicating Lutfiyya, Gessert, and Lipsky's study [16] of testing the difference in nursing home quality between rural and urban nursing homes was not the main focus; instead, this example focused on evaluating the effectiveness of interval matching, along with other commonly used PSM methods, for reducing selection bias (or balancing covariates) between rural and urban nursing homes. Also, without loss of generality, 1:1 interval matching was illustrated; the present example can be easily extended to 1:K interval matching without any difficulty.

Propensity score bootstrap CIs

Five hundred bootstrap samples were first resampled from the data using SAS® PROC SURVEYSELECT [19], and then, for each of the 500 bootstrap samples, a logistic regression of rural vs. urban nursing homes on the 74 covariates was conducted to obtain the probability (or the propensity score) of being a rural nursing home for each nursing home. There are other propensity score estimation models, but without loss of generality, logistic regression was used in this example for illustration purposes only. Next, the logit of the propensity score for each nursing home was computed, and bootstrap 50 %, 68 %, and 95 % CIs of the logit for each nursing home were constructed by calculating the 25th and 75th percentiles, the 16th and 84th percentiles, and the 2.5th and 97.5th percentiles, respectively, of the 500 bootstrap logit values. The purpose of computing the bootstrap CIs at different confidence levels was to examine the effect of the confidence level on the selection bias reduction in interval matching.
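Given the B × N matrix of bootstrap logits produced by the 500 replicates, the per-home CIs at the three confidence levels follow from column-wise percentiles. A sketch (function and variable names are illustrative, not the study's SAS code):

```python
import numpy as np

def unit_ci(boot_logits, level=0.68):
    """Given a (B x N) matrix of bootstrap logits (one row per bootstrap
    sample, one column per nursing home), return per-unit percentile CIs.
    level=0.68 reproduces the [16th, 84th] percentile interval; 0.50 and
    0.95 give the other two levels examined in the example."""
    alpha = 1 - level
    lo = np.percentile(boot_logits, 100 * alpha / 2, axis=0)
    hi = np.percentile(boot_logits, 100 * (1 - alpha / 2), axis=0)
    return np.column_stack([lo, hi])

# pretend bootstrap output: 500 replicates for 3 units with true logits
# 0, 1, and -1 and bootstrap spread 0.3 (simulated, for illustration)
rng = np.random.default_rng(2)
boot = rng.normal(loc=[0.0, 1.0, -1.0], scale=0.3, size=(500, 3))
ci68 = unit_ci(boot, 0.68)
```

The resulting half-widths play the role that the fixed caliper band plays in caliper matching, except that each unit gets its own band.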
Analogous to the caliper bandwidth in caliper matching, the average of the half-widths of the 6,317 bootstrap CIs was 0.20 (ranging from 0.06 to 7.96, with a standard deviation of 0.19) for the 50 % CIs; 0.29 (ranging from 0.09 to 11.03, with a standard deviation of 0.29) for the 68 % CIs; and 0.59 (ranging from 0.20 to 26.35, with a standard deviation of 0.70) for the 95 % CIs.

Matching and evaluation of matching quality

The effectiveness of interval matching for reducing selection bias was evaluated along with the basic nearest neighbor matching and the related caliper matching, as well as two other commonly used matching methods, Mahalanobis caliper matching and optimal matching. All but optimal matching were implemented using a modified SAS® macro based on Coca-Perraillon [20]. The optimal matching was conducted using an R package, MatchIt [12]. The pooled SD of the logits of the propensity scores l(X_i) (i = 1, 2, …, 6,317) was 1.86; the caliper band for caliper matching in this example was therefore b = 0.20 × SD = 0.20 × 1.86 = 0.37. Figure 2 displays the distributions of the logits of propensity scores between the rural and urban nursing homes prior to and post matching. By visually inspecting the distributions of the logits of propensity scores, it can be seen that interval matching as well as caliper matching did better in balancing the distributions between the rural and urban nursing homes than did nearest neighbor matching, optimal matching, and Mahalanobis caliper matching. Three statistical criteria were also used to evaluate the effectiveness of the matching methods in balancing the distributions: the mean difference (or selection bias [B]), the standardized bias (SB), and the percent bias reduction (PBR).

Fig. 2 The distributions of the logit of propensity scores across the rural vs. urban nursing homes prior to and post matching

The selection bias for each covariate X_k (k = 1, …, K) is the mean difference between the rural and urban nursing homes, as follows:

$$ B = M_1(X_k) - M_0(X_k), $$

where M_1(X_k) is the mean of the covariate for the rural nursing homes and M_0(X_k) is the mean of the covariate for the urban nursing homes. The SB associated with each covariate was defined by Rosenbaum and Rubin [8] as follows:

$$ SB = \frac{B}{\sqrt{\frac{V_1(X_k) + V_0(X_k)}{2}}} \times 100\%, $$

where V_1(X_k) is the variance of the covariate for the rural nursing homes and V_0(X_k) is the variance of the covariate for the urban nursing homes. According to Caliendo and Kopeinig [21], if the absolute SB is reduced to 5 % or less after matching, the matching method is considered effective in reducing selection bias. The PBR for the covariate was proposed by Cochran and Rubin [4] and can be expressed as follows:

$$ PBR = \frac{|B_{prior\ to\ matching}| - |B_{post\ matching}|}{|B_{prior\ to\ matching}|} \times 100\%. $$

Note that the original expression of PBR in the literature [2, 4, 22, 23] did not impose the absolute values for B; here, PBR (Equation 10) includes the absolute values to make the criterion more meaningful because both positive and negative Bs indicate unbalanced distributions of the covariate. Table 1 displays a summary of selection bias prior to matching and bias reduction post matching (see Additional file 2 for the selection bias prior to matching and bias reduction post matching for all 74 covariates). From Table 1, we can see that selection bias prior to matching was evident in that the average of the 74 absolute SBs was 16.22 %. In addition, the selection bias is also indicated by the severely unbalanced distributions of the logits of the propensity scores, with SB = 78.73 %.
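The three balance criteria above can be computed directly from their definitions; the sketch below implements SB and PBR with a hand-checkable toy example (names and data are illustrative only).

```python
import numpy as np

def standardized_bias(x_t, x_c):
    """SB = (M1 - M0) / sqrt[(V1 + V0) / 2] * 100, per Rosenbaum & Rubin."""
    b = np.mean(x_t) - np.mean(x_c)
    pooled = np.sqrt((np.var(x_t, ddof=1) + np.var(x_c, ddof=1)) / 2)
    return 100 * b / pooled

def percent_bias_reduction(b_prior, b_post):
    """PBR = (|B_prior| - |B_post|) / |B_prior| * 100, with absolute
    values as argued in the text."""
    return 100 * (abs(b_prior) - abs(b_post)) / abs(b_prior)

x_t = np.array([2.0, 3.0, 4.0])   # treated covariate values: mean 3, variance 1
x_c = np.array([1.0, 2.0, 3.0])   # comparison values: mean 2, variance 1
sb = standardized_bias(x_t, x_c)          # 100.0: a one-SD mean difference
pbr = percent_bias_reduction(1.0, 0.1)    # ~90: 90% of the bias removed
```

By the 5 % rule cited from Caliendo and Kopeinig, a post-matching |SB| of at most 5 would count the matching as effective for that covariate.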
Table 1 A summary of selection bias prior to matching and bias reduction post matching

The results of applying nearest neighbor matching, optimal matching, Mahalanobis caliper matching, caliper matching, and the three interval matching methods are also presented in Table 1. First, the averages of the absolute SBs and of the PBRs across all 74 covariates demonstrated that the three interval matching methods, as well as caliper matching, were superior to all the other matching methods by every measure. Furthermore, on the statistical criteria for the logit of propensity scores, "arguably the most important variable" ([8], p. 36) in balancing the distributions of the covariates, the data suggested that 68 % interval matching outperformed caliper matching: interval matching removed 99.64 % of the selection bias (remaining SB = −0.46 %), compared to 98.41 % for caliper matching (remaining SB = 1.96 %). This advantage of 68 % interval matching was also echoed by the average PBR across all covariates (\( \overline{PBR} \) = 79.24 %, versus \( \overline{PBR} \) = 76.78 % for caliper matching); only the average \( \overline{SB} \) of 68 % interval matching was slightly larger than, but comparable to, that of caliper matching (1.25 % vs. 1.0 %). Individual covariate balancing is also summarized in a graphical display (see Fig. 3) of the SBs prior to and post the five matching methods. Both interval matching and caliper matching clearly reduced more selection bias than did the other matching methods: all of their SBs were within 5 %, whereas a substantial number of the SBs of the other matching methods exceeded 5 %.
The standardized bias demonstrating the covariate balance prior to and post matching

The present study used bootstrap CIs at the 50 %, 68 %, and 95 % confidence levels in the empirical example, which demonstrated that 68 % CIs performed the best among the three (see Table 1) and somewhat better than caliper matching. When the empirical distribution of the logit of the estimated propensity score is normal, a 68 % CI spans ±1 standard error around the mean, whereas the caliper band in caliper matching uses 0.20 standard deviation of the logit of the propensity score. A higher confidence level (>68 %) will lead to more possible matched units, and a lower level (<68 %) will lead to more rigid matching and, thus, possibly fewer matched units. In practice, researchers can choose the CI percentage to accommodate comparison groups of different sizes; in general, a smaller CI percentage may be used for a larger comparison group. In addition, 500 bootstrap samples were used in the empirical example. If some units are not selected in a bootstrap sample, a larger number of bootstrap samples may be used to avoid the situation where few bootstrap propensity scores are obtained for a unit. As a side note, the difference in nursing home quality between rural and urban homes was compared using the data matched with 68 % interval matching, and the results (see Table 2) differ from those of Lutfiyya, Gessert, and Lipsky's study [16]. Specifically, Table 2 shows that rural nursing homes had lower ratings than urban nursing homes on all quality measures, but only the quality measures rating was significant (p < .001).
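The percentile-CI step behind these intervals can be sketched as follows, assuming each unit's bootstrap replicates of the logit propensity score have already been collected; a 68 % percentile CI then runs from the 16th to the 84th percentile of those replicates. The replicates below are synthetic stand-ins, not the study's data:

```python
import random

def percentile_ci(values, level=0.68):
    """Percentile bootstrap CI: cut (1 - level)/2 off each tail of the sorted replicates."""
    s = sorted(values)
    tail = (1.0 - level) / 2.0
    lo = s[int(tail * (len(s) - 1))]
    hi = s[int((1.0 - tail) * (len(s) - 1))]
    return lo, hi

# Synthetic stand-in for 500 bootstrap replicates of one unit's logit propensity score.
random.seed(1)
boot_logits = [random.gauss(-0.5, 0.3) for _ in range(500)]
low, high = percentile_ci(boot_logits, level=0.68)
```

Raising `level` widens each unit's interval, which is why higher confidence levels admit more candidate matches.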
Table 2 Means (standard deviations) of nursing home quality ratings and independent samples t-test on the matched data with 68 % interval matching (n_rural = n_urban = 1,538)

The normal procedure in current PSM is to match each unit in the treatment group with one or more units in the comparison group based on the distance between the point estimates of propensity scores. Unfortunately, point estimates cannot capture the estimation errors (standard errors) of propensity scores. The present study proposed interval matching, which uses bootstrap CIs to accommodate unit-specific standard errors of (the logit of) propensity scores. Interval matching is methodologically more meaningful than its competing matching methods because it develops a more "scientific" criterion for matching units using confidence intervals. Beyond accommodating standard errors of propensity scores, interval matching has another methodologically sound property: CIs of the logit of estimated propensity scores are wider in relatively sparse areas, where matched units are less likely to be found, than in denser areas, where matched units are more likely to be found. This curvilinear relationship between the width of the CIs and the density of the distribution of the logit of propensity scores may lead to more matched units in sparse areas, balancing out the denser areas (see Fig. 4); by contrast, caliper matching uses a fixed caliper bandwidth (e.g., b = 0.37 in this empirical example) for all values of the logit of propensity scores, regardless of the density of the distribution.
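As an illustration of the criterion, the sketch below matches each treated unit to the nearest unmatched comparison unit whose bootstrap CI overlaps its own. The greedy 1:1 overlap rule is our simplified reading of interval matching, not the authors' implementation, and all names are hypothetical:

```python
def intervals_overlap(a, b):
    """Two closed intervals (lo, hi) overlap unless one lies entirely beyond the other."""
    return a[0] <= b[1] and b[0] <= a[1]

def interval_match(treated, comparison):
    """Greedy 1:1 matching: each treated unit takes the closest unmatched comparison
    unit (by logit propensity score) whose bootstrap CI overlaps its own, if any.
    Each unit is a (unit_id, logit_score, (ci_low, ci_high)) tuple."""
    used, pairs = set(), []
    for t_id, t_score, t_ci in treated:
        candidates = [
            (abs(t_score - c_score), c_id)
            for c_id, c_score, c_ci in comparison
            if c_id not in used and intervals_overlap(t_ci, c_ci)
        ]
        if candidates:
            _, c_id = min(candidates)
            used.add(c_id)
            pairs.append((t_id, c_id))
    return pairs
```

Because CI widths vary by unit, a treated unit in a sparse region carries a wide interval and can still find a match, which is the density-adaptive behavior described above.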
The curvilinear relationship (green) between the half width of the bootstrap 68 % CI and the logit of the propensity score, compared with the unit-invariant caliper band (red)

Because of these beneficial properties, the empirical example demonstrated that interval matching is not only a viable alternative to caliper matching but also produced promisingly more balanced data than all other matching methods, including caliper matching. The computation in interval matching is somewhat more labor intensive than that in other PSM methods; however, with today's fast computing technology this should not be a problem, and the encouraging results of interval matching outweigh its computational cost. In future research, we would like to further explore the effectiveness of interval matching in reducing selection bias in a simulation study with different scenarios, such as 1:K matching, matching with replacement, the sample size ratio of the treatment group to the comparison group, and the size of the common support between the treatment and comparison groups. Beyond its effectiveness in reducing selection bias, it would also be desirable to examine the effectiveness of interval matching in reducing estimation bias for treatment effects under various scenarios, in comparison with other matching techniques aimed mainly at bias reduction in estimating treatment effects, such as full matching, subclassification, and kernel matching (or difference-in-differences matching), as well as different propensity score estimation models. In sum, interval matching possesses sound methodological properties and is a promising alternative tool for reducing selection bias when making causal inferences from observational studies; it is especially helpful in secondary analyses of national databases such as the CMS data, as demonstrated in the empirical example.
Abbreviations

B: mean difference (or selection bias)
CMS: Centers for Medicare and Medicaid Services
PBR: percent bias reduction
PSM: propensity score matching
SB: standardized bias

References

Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
Pan W, Bai H, editors. Propensity score analysis: Fundamentals and developments. New York, NY: The Guilford Press; 2015.
McCaffrey DF, Ridgeway G, Morral AR. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychological Methods. 2004;9(4):403–25.
Cochran WG, Rubin DB. Controlling bias in observational studies: A review. Sankhyā: The Indian Journal of Statistics, Series A. 1973;35(4):417–46.
Efron B, Tibshirani RJ. An introduction to the bootstrap. New York, NY: CRC Press LLC; 1998.
Design for Nursing Home Compare five-star quality rating system: Technical users' guide [http://www.cms.gov/Medicare/Provider-Enrollment-and-Certification/CertificationandComplianc/Downloads/usersguide.pdf]
Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis. 2007.
Rosenbaum PR, Rubin DB. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician. 1985;39(1):33–8.
Austin PC. Optimal caliper widths for propensity-score matching when estimating differences in means and differences in proportions in observational studies. Pharmaceutical Statistics. 2011;10(2):150–61.
Rosenbaum PR. Optimal matching for observational studies. Journal of the American Statistical Association. 1989;84(408):1024–32.
Gu XS, Rosenbaum PR. Comparison of multivariate matching methods: Structures, distances, and algorithms. Journal of Computational and Graphical Statistics. 1993;2(4):405–20.
Ho DE, Imai K, King G, Stuart EA. MatchIt: Nonparametric preprocessing for parametric causal inference.
Journal of Statistical Software. 2011;42(8):1–28.
Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behavioral Research. 2011;46(3):399–424.
Bai H. A comparison of propensity score matching methods for reducing selection bias. International Journal of Research & Method in Education. 2011;34(1):81–107.
Guo S, Barth RP, Gibbons C. Propensity score matching strategies for evaluating substance abuse services for child welfare clients. Children and Youth Services Review. 2006;28(4):357–83.
Lutfiyya MN, Gessert CE, Lipsky MS. Nursing home quality: A comparative analysis using CMS Nursing Home Compare data to examine differences between rural and nonrural facilities. Journal of the American Medical Directors Association. 2013;14(8):593–8.
Rural–urban continuum codes [http://www.ers.usda.gov/data-products/rural-urban-continuum-codes/documentation.aspx]
Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Stürmer T. Variable selection for propensity score models. Am J Epidemiol. 2006;163(12):1149–56.
Don't be loopy: Re-sampling and simulation the SAS® way [http://www2.sas.com/proceedings/forum2007/183-2007.pdf]
Local and global optimal propensity score matching [http://www2.sas.com/proceedings/forum2007/185-2007.pdf]
Caliendo M, Kopeinig S. Some practical guidance for the implementation of propensity score matching. Journal of Economic Surveys. 2008;22(1):31–72.
Rubin DB. Multivariate matching methods that are equal percent bias reducing, II: Maximums on bias reduction for fixed sample sizes. Biometrics. 1976;32(1):121–32.
Rubin DB. Multivariate matching methods that are equal percent bias reducing, I: Some examples. Biometrics. 1976;32(1):109–20.

We would like to thank Diane Holditch-Davis for the content review on the example of the Nursing Home Compare and Judith C. Hays for the language editorial review.
Author information

School of Nursing, Duke University, DUMC 3322, 307 Trent Drive, Durham, NC, 27710, USA: Wei Pan
Department of Educational and Human Sciences, University of Central Florida, PO Box 161250, Orlando, FL, 32816, USA: Haiyan Bai

Correspondence to Wei Pan.

WP developed the methods, conducted the study, and wrote the initial draft of the manuscript. HB conceived of the study, participated in its design, and helped to draft the manuscript. All authors read and approved the final manuscript.

Additional file 1: A list of the 74 covariates in the example data from the CMS Nursing Home Compare database. (DOCX 18 kb)
Additional file 2: Selection bias prior to matching and bias reduction post matching for all 74 covariates. (XLS 62 kb)

Pan, W., Bai, H. Propensity score interval matching: using bootstrap confidence intervals for accommodating estimation errors of propensity scores. BMC Med Res Methodol 15, 53 (2015). https://doi.org/10.1186/s12874-015-0049-3

Keywords: Observational studies; Propensity score methods; Nearest neighbour matching; Caliper matching; The bootstrap; Confidence intervals
October 2017, 37(10): 5191-5209. doi: 10.3934/dcds.2017225

The Riemann Problem at a Junction for a Phase Transition Traffic Model

Mauro Garavello and Francesca Marcellini

Department of Mathematics and its Applications, University of Milano Bicocca, Via R. Cozzi 55, 20125 Milano, Italy

* Corresponding author: M. Garavello

Received October 2016; Revised May 2017; Published June 2017

Fund Project: The authors were partially supported by the INdAM-GNAMPA 2015 project "Balance Laws in the Modeling of Physical, Biological and Industrial Processes"

We extend the Phase Transition model for traffic proposed in [8] by Colombo, Marcellini, and Rascle to the network case. More precisely, we consider the Riemann problem for such a system at a general junction with $n$ incoming and $m$ outgoing roads. We propose a Riemann solver at the junction which conserves both the number of cars and the maximal speed of each vehicle, which is a key feature of the Phase Transition model. For special junctions, we prove that the Riemann solver is well defined.

Keywords: Phase transition model, hyperbolic systems of conservation laws, continuum traffic models, Riemann problem, Riemann solver.

Mathematics Subject Classification: Primary: 35L65; Secondary: 90B20.

Citation: Mauro Garavello, Francesca Marcellini. The Riemann Problem at a Junction for a Phase Transition Traffic Model. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10): 5191-5209. doi: 10.3934/dcds.2017225

A. Aw and M. Rascle, Resurrection of "second order" models of traffic flow, SIAM J. Appl. Math., 60 (2000), 916-938. doi: 10.1137/S0036139997332099.
S. Blandin, D. Work, P. Goatin, B. Piccoli and A. Bayen, A general phase transition model for vehicular traffic, SIAM J. Appl. Math., 71 (2011), 107-127.
doi: 10.1137/090754467.
G. M. Coclite, M. Garavello and B. Piccoli, Traffic flow on a road network, SIAM J. Math. Anal., 36 (2005), 1862-1886. doi: 10.1137/S0036141004402683.
R. M. Colombo, Hyperbolic phase transitions in traffic flow, SIAM J. Appl. Math., 63 (2002), 708-721. doi: 10.1137/S0036139901393184.
R. M. Colombo, Phase transitions in hyperbolic conservation laws, in Progress in Analysis, Vol. I, II (Berlin, 2001), pages 1279-1287, World Sci. Publ., River Edge, NJ, 2003.
R. M. Colombo, P. Goatin and B. Piccoli, Road networks with phase transitions, J. Hyperbolic Differ. Equ., 7 (2010), 85-106. doi: 10.1142/S0219891610002025.
R. M. Colombo and F. Marcellini, A mixed ODE-PDE model for vehicular traffic, Math. Methods Appl. Sci., 38 (2015), 1292-1302. doi: 10.1002/mma.3146.
R. M. Colombo, F. Marcellini and M. Rascle, A 2-phase traffic model based on a speed bound, SIAM J. Appl. Math., 70 (2010), 2652-2666. doi: 10.1137/090752468.
M. Garavello, K. Han and B. Piccoli, Models for Vehicular Traffic on Networks, Volume 9 of AIMS Series on Applied Mathematics, American Institute of Mathematical Sciences (AIMS), Springfield, MO, 2016.
M. Garavello and B. Piccoli, Traffic flow on a road network using the Aw-Rascle model, Comm. Partial Differential Equations, 31 (2006), 243-275. doi: 10.1080/03605300500358053.
M. Garavello and B. Piccoli, Traffic Flow on Networks, Volume 1 of AIMS Series on Applied Mathematics, American Institute of Mathematical Sciences (AIMS), Springfield, MO, 2006.
P. Goatin, The Aw-Rascle vehicular traffic flow model with phase transitions, Math. Comput. Modelling, 44 (2006), 287-303. doi: 10.1016/j.mcm.2006.01.016.
M. Herty, S. Moutari and M. Rascle, Optimization criteria for modelling intersections of vehicular traffic flow, Netw. Heterog. Media, 1 (2006), 275-294.
doi: 10.3934/nhm.2006.1.275.
M. Herty and M. Rascle, Coupling conditions for a class of second-order models for traffic flow, SIAM Journal on Mathematical Analysis, 38 (2006), 595-616. doi: 10.1137/05062617X.
H. Holden and N. H. Risebro, A mathematical model of traffic flow on a network of unidirectional roads, SIAM J. Math. Anal., 26 (1995), 999-1017. doi: 10.1137/S0036141093243289.
J. P. Lebacque, X. Louis, S. Mammar, B. Schnetzlera and H. Haj-Salem, Modélisation du trafic autoroutier au second ordre, Comptes Rendus Mathematique, 346 (2008), 1203-1206. doi: 10.1016/j.crma.2008.09.024.
M. J. Lighthill and G. B. Whitham, On kinematic waves. II. A theory of traffic flow on long crowded roads, Proc. Roy. Soc. London. Ser. A., 229 (1955), 317-345. doi: 10.1098/rspa.1955.0089.
F. Marcellini, Free-congested and micro-macro descriptions of traffic flow, Discrete Contin. Dyn. Syst. Ser. S, 7 (2014), 543-556. doi: 10.3934/dcdss.2014.7.543.
S. Moutari and M. Rascle, A hybrid Lagrangian model based on the Aw-Rascle traffic flow model, SIAM J. Appl. Math., 68 (2007), 413-436. doi: 10.1137/060678415.
P. I. Richards, Shock waves on the highway, Operations Res., 4 (1956), 42-51. doi: 10.1287/opre.4.1.42.
H. Zhang, A non-equilibrium traffic model devoid of gas-like behavior, Transportation Research Part B: Methodological, 36 (2002), 275-290. doi: 10.1016/S0191-2615(00)00050-3.

Figure 1. The free phase $F$ and the congested phase $C$ resulting from (1) in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. In the $(\rho,\eta)$ plane, the curves $\eta= \check w \rho $, $\eta= \hat w \rho $ and the curve $\eta= \frac{V_{\max}}{\psi(\rho)}\rho $ that divides the two phases are represented. The densities $\sigma_-$ and $\sigma_+$ are given by the intersections between the previous curves.
Similarly, in the $(\rho, \rho v)$ plane, the curves $\rho v= \check w \psi(\rho)\rho $, $\rho v= \hat w \psi(\rho)\rho $ and the densities $\sigma_-$ and $\sigma_+$ are represented.

Figure 2. The case $(\bar \rho,\bar \eta)\in C$. The set $\mathcal T_{inc}\left(\bar \rho, \bar \eta\right)$ is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The set $\mathcal T_{inc}^f \left(\bar \rho, \bar \eta\right)$ is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 3. The case $(\bar \rho,\bar \eta)\in F$. The set $\mathcal T_{inc}\left(\bar \rho, \bar \eta\right)$ is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The set $\mathcal T_{inc}^f \left(\bar \rho, \bar \eta\right)$ is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 4. The case $(\bar \rho,\bar \eta)\in F$. The set $\mathcal T_{out}\left(w, \bar \rho, \bar \eta \right)$ is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The set $\mathcal T_{out}^f \left(w,\bar \rho, \bar \eta\right)$ is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 5. The case $(\bar \rho,\bar \eta)\in C$. The set $\mathcal T_{out}\left(w, \bar \rho, \bar \eta \right)$ is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The set $\mathcal T_{out}^f \left(w,\bar \rho, \bar \eta\right)$ is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 6. The case $(\bar \rho,\bar \eta)\in F$ in an outgoing road for the approach in Subsection 4.1. The set of all the possible traces is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The corresponding set of flows is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 7.
The case $(\bar \rho,\bar \eta)\in C$ in an outgoing road for the approach in Subsection 4.1. The set of all the possible traces is represented in red in the coordinates, from left to right, $(\rho,\eta)$ and $(\rho, \rho v)$. The corresponding set of flows is represented on the $\rho v$ axis in the $(\rho, \rho v)$ plane.

Figure 8. The situation in the outgoing road related to the approach of Subsection 4.1. Left, in the $\left(\rho, \eta\right)$-plane, the states $\left(\rho_3^\ast, \eta_3^\ast\right)$ and $\left(\bar \rho_3, \bar \eta_3\right)$, connected through the middle state $\left(\rho^m, \eta^m\right)$. Right, in the $(t,x)$-plane, the waves generated by the Riemann problem. Note that the first wave has negative speed, so that it is not contained in the feasible region of the outgoing road.

Figure 9. The case $\gamma_{1}^{*}+\gamma_{2}^{*}=\Gamma_{3}^{w_{3}}$. At left, the case $\gamma_1^* < \Gamma_1$ and $\gamma_2^* < \Gamma_2$. At right, the case $\gamma_1^* = \Gamma_1$.

Figure 10. The case $\gamma_{1}^{*}+\gamma_{2}^{*}<\Gamma_{3}^{w_{3}}$.

João-Paulo Dias, Mário Figueira. On the Riemann problem for some discontinuous systems of conservation laws describing phase transitions. Communications on Pure & Applied Analysis, 2004, 3 (1) : 53-58. doi: 10.3934/cpaa.2004.3.53
Constantine M. Dafermos. A variational approach to the Riemann problem for hyperbolic conservation laws. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 185-195. doi: 10.3934/dcds.2009.23.185
Yanbo Hu, Wancheng Sheng. The Riemann problem of conservation laws in magnetogasdynamics. Communications on Pure & Applied Analysis, 2013, 12 (2) : 755-769. doi: 10.3934/cpaa.2013.12.755
Yu Zhang, Yanyan Zhang. Riemann problems for a class of coupled hyperbolic systems of conservation laws with a source term. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1523-1545. doi: 10.3934/cpaa.2019073
Weishi Liu.
Multiple viscous wave fan profiles for Riemann solutions of hyperbolic systems of conservation laws. Discrete & Continuous Dynamical Systems - A, 2004, 10 (4) : 871-884. doi: 10.3934/dcds.2004.10.871
Zhi-Qiang Shao. Lifespan of classical discontinuous solutions to the generalized nonlinear initial-boundary Riemann problem for hyperbolic conservation laws with small BV data: shocks and contact discontinuities. Communications on Pure & Applied Analysis, 2015, 14 (3) : 759-792. doi: 10.3934/cpaa.2015.14.759
Alberto Bressan, Anders Nordli. The Riemann solver for traffic flow at an intersection with buffer of vanishing size. Networks & Heterogeneous Media, 2017, 12 (2) : 173-189. doi: 10.3934/nhm.2017007
Anupam Sen, T. Raja Sekhar. Structural stability of the Riemann solution for a strictly hyperbolic system of conservation laws with flux approximation. Communications on Pure & Applied Analysis, 2019, 18 (2) : 931-942. doi: 10.3934/cpaa.2019045
Francesca Marcellini. Existence of solutions to a boundary value problem for a phase transition traffic model. Networks & Heterogeneous Media, 2017, 12 (2) : 259-275. doi: 10.3934/nhm.2017011
Tai-Ping Liu, Shih-Hsien Yu. Hyperbolic conservation laws and dynamic systems. Discrete & Continuous Dynamical Systems - A, 2000, 6 (1) : 143-145. doi: 10.3934/dcds.2000.6.143
Tatsien Li, Wancheng Sheng. The general multi-dimensional Riemann problem for hyperbolic systems with real constant coefficients. Discrete & Continuous Dynamical Systems - A, 2002, 8 (3) : 737-744. doi: 10.3934/dcds.2002.8.737
Alberto Bressan, Marta Lewicka. A uniqueness condition for hyperbolic systems of conservation laws. Discrete & Continuous Dynamical Systems - A, 2000, 6 (3) : 673-682. doi: 10.3934/dcds.2000.6.673
Gui-Qiang Chen, Monica Torres. On the structure of solutions of nonlinear hyperbolic systems of conservation laws. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1011-1036. doi: 10.3934/cpaa.2011.10.1011
Stefano Bianchini.
A note on singular limits to hyperbolic systems of conservation laws. Communications on Pure & Applied Analysis, 2003, 2 (1) : 51-64. doi: 10.3934/cpaa.2003.2.51
Fumioki Asakura, Andrea Corli. The path decomposition technique for systems of hyperbolic conservation laws. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 15-32. doi: 10.3934/dcdss.2016.9.15
Wancheng Sheng, Tong Zhang. Structural stability of solutions to the Riemann problem for a scalar nonconvex CJ combustion model. Discrete & Continuous Dynamical Systems - A, 2009, 25 (2) : 651-667. doi: 10.3934/dcds.2009.25.651
Peng Zhang, Tong Zhang. The Riemann problem for scalar CJ-combustion model without convexity. Discrete & Continuous Dynamical Systems - A, 1995, 1 (2) : 195-206. doi: 10.3934/dcds.1995.1.195
Alberto Bressan, Fang Yu. Continuous Riemann solvers for traffic flow at a junction. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 4149-4171. doi: 10.3934/dcds.2015.35.4149
Boris Andreianov, Mohamed Karimou Gazibo. Explicit formulation for the Dirichlet problem for parabolic-hyperbolic conservation laws. Networks & Heterogeneous Media, 2016, 11 (2) : 203-222. doi: 10.3934/nhm.2016.11.203
Mauro Garavello. Boundary value problem for a phase transition model. Networks & Heterogeneous Media, 2016, 11 (1) : 89-105. doi: 10.3934/nhm.2016.11.89
Find ${\begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

Determinants

In the given $3 \times 3$ matrix, $1$, $a$ and $a^2$ are the entries in the first row, $1$, $b$ and $b^2$ are the elements in the second row, and $1$, $c$ and $c^2$ are the entries in the third row.

${\begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

The determinant of this matrix of order $3$ has to be calculated in this problem. So, let's learn how to evaluate the determinant of this square matrix.

Subtract the Second row from the First row

$1$ is the entry in the first row and the first column. $1$ is also the element in the second row and the first column. Subtract the elements in the second row from the corresponding elements in the first row, and substitute the differences in the respective positions of the first row of the matrix.

$R_1-R_2 \,\to\, R_1$

This makes the element in the first row and the first column become $0$.

$=\,\,\,$ ${\begin{vmatrix} 1-1 & a-b & a^2-b^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

$=\,\,\,$ ${\begin{vmatrix} 0 & a-b & a^2-b^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

In the first row of the $3 \times 3$ matrix, $0$, $a-b$ and $a^2-b^2$ are now the entries. The element $a^2-b^2$ is a difference of squares, so it can be expressed in factor form by the difference of squares rule.

$=\,\,\,$ ${\begin{vmatrix} 0 & a-b & (a-b)(a+b) \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

In the first row, $a-b$ is a factor of the entries in both the second and third columns, but not of the entry in the first column. However, that entry can be written as follows for our convenience.

$=\,\,\,$ ${\begin{vmatrix} 0 \times (a-b) & 1 \times (a-b) & (a-b) \times (a+b) \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

Now, $a-b$ is a common factor of each entry of the first row in this $3 \times 3$ square matrix. So, it can be taken out as a common factor from the entries of the first row.
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

Subtract the Third row from the Second row

$1$ is the element in the second row and the first column. $1$ is also the entry in the third row and the first column. Subtract the entries in the third row from the corresponding elements in the second row of the matrix, and substitute the differences in the respective positions of the second row. This subtraction makes the entry in the second row and the first column become $0$.

$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 1-1 & b-c & b^2-c^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 & b-c & b^2-c^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$

In the second row of the square matrix of order $3$, the elements are now $0$, $b-c$ and $b^2-c^2$. The element $b^2-c^2$ is a difference of squares, so it can be written in factor form by the difference of squares rule.

$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 & b-c & (b-c)(b+c) \\ 1 & c & c^2 \\ \end{vmatrix}}$

In the second row, $b-c$ is a factor of both the second and third elements, but not of the first. So, the factor $b-c$ can be included as follows for convenience.

$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 \times (b-c) & 1 \times (b-c) & (b-c) \times (b+c) \\ 1 & c & c^2 \\ \end{vmatrix}}$

$b-c$ is a common factor of all three elements of the second row, and it can be taken out as a common factor.

$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 1 & a+b \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$

$1$ is the element in the first row and the second column. $1$ is also the entry in the second row and the second column. Let's find their difference by subtracting the entries in the second row from the respective elements in the first row, and substitute the differences in the respective positions in the first row.
The idea behind subtracting the elements is to make the entry in the first row and the second column become $0$.

$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0-0 & 1-1 & a+b-(b+c) \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$

$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a+b-b-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$

$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a+\cancel{b}-\cancel{b}-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$

$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$

Find the Determinant of the matrix

In the square matrix of order $3$, the first and second entries of the first row are $0$. So, the determinant of the simplified matrix can be evaluated from the third entry alone, by expanding along the first row.

$=\,\,\,$ $(a-b)(b-c)\Big((-1)^{1+3} \times (a-c) \times (0 \times c\,-\,1 \times 1)\Big)$

$=\,\,\,$ $(a-b)(b-c)\Big((-1)^{4} \times (a-c) \times (0\,-\,1)\Big)$

$=\,\,\,$ $(a-b)(b-c)\Big(1 \times (a-c) \times (0\,-\,1)\Big)$

$=\,\,\,$ $(a-b)(b-c)\Big((a-c) \times (-1)\Big)$

$=\,\,\,$ $(a-b)(b-c)\Big((-1) \times (a-c)\Big)$

$=\,\,\,$ $(a-b)(b-c)(-a+c)$

$=\,\,\,$ $(a-b)(b-c)(c-a)$
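The result can be double-checked numerically by expanding the original determinant directly and comparing it with the factored form $(a-b)(b-c)(c-a)$ for arbitrary values. This check is an addition, not part of the derivation above:

```python
def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = m
    return (a11 * (a22 * a33 - a23 * a32)
            - a12 * (a21 * a33 - a23 * a31)
            + a13 * (a21 * a32 - a22 * a31))

def vandermonde_det(a, b, c):
    """The determinant from the problem statement, expanded directly."""
    return det3([[1, a, a * a], [1, b, b * b], [1, c, c * c]])

def factored(a, b, c):
    """The closed form derived above."""
    return (a - b) * (b - c) * (c - a)
```

For example, with $a=2$, $b=3$, $c=5$ both expressions give $(-1)(-2)(3) = 6$.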
Development and evaluation of a meat mitochondrial metagenomic (3MG) method for composition determination of meat from fifteen mammalian and avian species

Mei Jiang1, Shu-Fei Xu2, Tai-Shan Tang3, Li Miao4, Bao-Zheng Luo5, Yang Ni6, Fan-De Kong2 & Chang Liu1

Bioassessment and biomonitoring of meat products are aimed at identifying and quantifying adulterants and contaminants, such as meat from unexpected sources and microbes. Several methods for determining the biological composition of mixed samples have been used, including metabarcoding, metagenomics and mitochondrial metagenomics. In this study, we aimed to develop a method based on next-generation DNA sequencing to assess samples that might contain meat from 15 mammalian and avian species commonly involved in meat bioassessment and biomonitoring. We found that the meat composition of these 15 species could not be identified with the metabarcoding approach because of the lack of universal primers or insufficient discrimination power. Consequently, we developed and evaluated a meat mitochondrial metagenomics (3MG) method. The 3MG method has four steps: (1) extraction of sequencing reads from mitochondrial genomes (mitogenomes); (2) assembly of mitogenomes; (3) mapping of mitochondrial reads to the assembled mitogenomes; and (4) biomass estimation based on the number of uniquely mapped reads. The method was implemented in a python script called 3MG. Analysis of simulated datasets showed that the method can determine contaminant composition at a proportion of 2%, with a relative error of < 5%. To evaluate the performance of 3MG, we constructed and analysed mixed samples derived from 15 animal species in equal mass, and then mixed samples derived from two animal species (pork and chicken) in different ratios. DNAs were extracted and used in constructing 21 libraries for next-generation sequencing.
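Conceptually, step (4) of the pipeline reduces to normalizing uniquely mapped read counts per mitogenome into proportions. The sketch below is not the published 3MG script, and the optional length normalization is our assumption about how counts might be corrected for mitogenome size:

```python
def biomass_proportions(unique_read_counts, mitogenome_lengths=None):
    """Turn uniquely mapped read counts per species into mass proportions.
    If mitogenome lengths are supplied, counts are first divided by length
    (an assumption: reads per base is taken as the mass-proportional signal)."""
    if mitogenome_lengths is not None:
        weights = {sp: n / mitogenome_lengths[sp]
                   for sp, n in unique_read_counts.items()}
    else:
        weights = dict(unique_read_counts)
    total = sum(weights.values())
    return {sp: w / total for sp, w in weights.items()}
```

With 8,000 uniquely mapped pork reads and 2,000 chicken reads, the estimate would be 80% pork and 20% chicken.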
The analysis of the 15 species mix with the method showed the successful identification of 12 of the 15 (80%) animal species tested. The analysis of the mixed samples of the two species revealed correlation coefficients of 0.98 for pork and 0.98 for chicken between the number of uniquely mapped reads and the mass proportion. To the best of our knowledge, this study is the first to demonstrate the potential of the non-targeted 3MG method as a tool for accurately estimating biomass in mixed meat samples. The method has potential broad applications in meat product safety. Meat represents a significant portion of daily human consumption. However, meat adulteration has become a global issue. Valuable and expensive meat, such as beef and mutton, is often found mixed with cheaper chicken, duck, pork, mink and other animal meat [1, 2]. For instance, two of the nine beef samples examined by Erol et al. contained horse and deer meat [3]. Such adulteration harms consumers' rights and interests [4] and disrupts market order [5]. Therefore, identifying adulterated ingredients in meat and meat products is essential. Based on next-generation DNA sequencing, many methods for determining the biological composition of mixed samples have been developed, including metabarcoding [6], metagenomics [7, 8] and mitochondrial metagenomics (MMG) [9]. The metabarcoding approach depends on the PCR amplification of a particular marker for species determination. The metagenomics approach consists of two steps for species determination and biomass quantification, namely, shotgun sequencing and mapping of reads to whole nuclear genomes. MMG is essentially a metagenomic method using mitochondrial genomes (mitogenomes) instead of nuclear genomes as references. The PCR amplification-dependent metabarcoding method is the workhorse for the molecular determination of biological composition.
Numerous markers have been tested on animals, including the 18S rRNA gene from the nuclear genome, and the 16S rRNA gene and cytochrome c oxidase I (COX1, CO1 or COI) gene from the mitogenome [10]. However, these PCR-dependent methods have limitations. Firstly, they require universal primers targeting particular markers, which are usually unavailable across all taxa [11]. Data integration is further complicated when different markers are used, or when different primer pairs are used for the same marker. Secondly, even with universal primers, template DNA molecules with different sequences have different melting properties, leading to amplification bias [12]. Consequently, the direct quantification of template DNA molecules with different sequences is difficult. All-Food-Seq (AFS) is a recently developed metagenomics method [8], in which the non-targeted deep sequencing of total genomic DNA from foodstuff, followed by bioinformatics analysis, can identify species from all kingdoms of life with high accuracy. It facilitates the quantitative measurement of the main ingredients and the detection of unanticipated food components. Conceptually, the AFS method has set up a framework for ultimate bio-surveillance. However, the AFS method has several practical limitations. Firstly, the method is probably too complex for routine bioassessment and biomonitoring because whole genomes have a high degree of complexity. Secondly, although whole-genome databases have expanded rapidly, obtaining high-quality whole-genome sequences for a species requires many years, and the effect of genomic diversity on bioassessment and biomonitoring is unknown. Thirdly, the AFS study used simulated rather than experimental data. MMG delimits closely related species from mixed samples [13, 14]. This method is desirable because of its advantages. Firstly, a mitogenome and its genes are common phylogenetic, DNA barcoding and metabarcoding markers.
Secondly, the structures of mitogenomes are conserved, whereas their sequences can be highly diverse. Thirdly, mitogenomes are small and easy to obtain and can be directly reconstructed using bioinformatics methods. Fourthly, large numbers of mitogenomes are available in public databases. As of December 2020, more than 10,000 mitogenomes were available in GenBank. The performance and accuracy of metabarcoding and MMG in biomass estimation in invertebrate community samples have been evaluated [15]. Overall, MMG yields more informative predictions of biomass content from bulk macroinvertebrate communities than metabarcoding. However, although MMG has been applied to ecological assessment [9, 16,17,18,19,20,21], its use in mixed mammalian and avian meat samples has not, to the best of our knowledge, been examined. In this study, we intended to use either metabarcoding or MMG to detect the potential mixing of meat from 15 mammalian and avian species on the basis of a market survey. Preliminary studies suggested that the most commonly used metabarcoding markers, COI and 16S, are unsuitable for simultaneously detecting meat from these 15 species. Thus, we tested MMG in mixed meat samples. This approach, called 'meat mitochondrial metagenomics (3MG)', circumvents the problems of marker selection, PCR bias and sequencing bias. Additionally, this approach takes advantage of the availability of mitogenomes for many species. The results showed that it can accurately determine the biological composition of mixed meat samples and accurately estimate biomass. The method has a wide range of applications in the food and pharmaceutical industries involving animal products.
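The core quantitative idea of the approach — converting reads uniquely mapped to each species' mitogenome into estimated mass fractions — can be sketched as follows. This is a minimal illustration with hypothetical read counts; the relative-yield values stand in for species-specific correction factors, and the assumed direction (observed reads = mass × yield) is an illustrative choice, not the paper's exact formulation.

```python
# Minimal sketch of 3MG-style quantification (hypothetical numbers).
# Reads uniquely mapped to each species' mitogenome are rescaled by that
# species' relative mitochondrial-read yield per unit of meat mass
# (assumed direction: observed reads = mass * yield), then normalised.

def estimate_mass_fractions(unique_counts, relative_yield):
    """Convert uniquely mapped read counts into estimated mass fractions."""
    corrected = {sp: n / relative_yield[sp] for sp, n in unique_counts.items()}
    total = sum(corrected.values())
    return {sp: v / total for sp, v in corrected.items()}

# Hypothetical two-species mix: equal masses of pork and chicken, with
# chicken yielding 0.65 mitochondrial reads per unit mass relative to pork.
fractions = estimate_mass_fractions(
    {"pig": 30000, "chicken": 19500},
    {"pig": 1.00, "chicken": 0.65},
)
```

Under these assumptions both species come out at a mass fraction of 0.5, even though the raw uniquely mapped read counts differ substantially.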
Meat samples and mock mixed meat samples We prepared mock samples with meat from the legs of 15 mammalian and avian species: Anas platyrhynchos (duck), Bos taurus (cattle), Camelus bactrianus (camel), Canis lupus familiaris (dog), Equus caballus (horse), Gallus gallus (chicken), Mus musculus (mouse), Mustela putorius voucher (ferret), Myocastor coypus (nutria), Nyctereutes procyonoides (raccoon dog), Oryctolagus cuniculus (rabbit), Ovis aries (sheep), Rattus norvegicus (rat), Sus scrofa domesticus (pig) and Vulpes vulpes (fox). Efforts were made to collect meat samples with homogeneous compositions intraspecifically and interspecifically. We obtained camel, nutria, fox, donkey and deer meat from breeding farms. Nanjing Medical University provided the mouse, rabbit and rat samples. The Entry-exit Inspection and Quarantine Bureau provided the other meat samples. The detailed information regarding sample origin, particularly cities and institutions, is provided in Table 1. Table 1 Information for meat samples used in this study Two mixing schemes were used. One mix contained meat samples in equal amounts from the 15 species. This mix was referred to as the 'mix containing meat from 15 species' or 'M1'. The other mix contained meat from S. scrofa domesticus (pig) and G. gallus (chicken) in the following proportions: 10:0 ('sample 1; mix containing two species' or 'M2-S1'), 8:2 (M2-S2), 6:4 (M2-S3), 4:6 (M2-S4), 2:8 (M2-S5) and 0:10 (M2-S6). Each M1 or M2 sample had three replicates. Loop-mediated isothermal amplification (LAMP) We performed loop-mediated isothermal amplification (LAMP) experiments to validate the composition of the mock samples (M1 and M2). LAMP methods for detecting ingredients containing cattle, sheep, pig, chicken and duck meat were developed by the Technology Center of the Xiamen Entry-Exit Inspection and Quarantine Bureau of the People's Republic of China [22,23,24].
The probe and primer sequences targeting the cytB genes of the corresponding species are provided in Table S1. The LAMP reaction mix contained isothermal master mix (15 μL), primer mix (FIP, 2 μL; BIP, 2 μL; F3, 1 μL; B3, 1 μL) and DNA (1 μL). We added RNase-free water to a final reaction volume of 50 μL. The experimental conditions were as follows: amplification at 60 °C for 90 min and annealing from 98 °C to 80 °C at a rate of 0.05 °C per second. DNA extraction, library construction and next-generation sequencing (NGS) We extracted genomic DNA with a modified sodium dodecyl sulfate (SDS)-based method [25]. The integrity and concentration of the extracted DNA were assessed through electrophoresis in 1% (w/v) agarose gel and with a spectrophotometer (NanoDrop 2000; Thermo Fisher Scientific, USA). The extracted DNA samples (100 ng) were subjected to library construction using the NEBNext® Ultra™ II DNA library prep kit for Illumina® (New England BioLabs, USA) according to the manufacturer's recommendations. Each library had an insert size of 500 bp. The quantity and quality of the libraries were analysed using an Agilent 2100 Bioanalyser (Agilent Technologies, USA). We sequenced the libraries using the HiSeq X reagent kits (Illumina, USA) on an Illumina HiSeq X sequencer. We deposited the data generated in this study in the NCBI Sequence Read Archive (SRA). The accession numbers were SRR9107560 and SRR9140737. Construction of mitogenome reference databases We constructed a database (15MGDB) containing complete mitogenome sequences from the 15 species. The 15 mitogenome sequences were downloaded from GenBank, with the following accession numbers: A. platyrhynchos (NC_009684), B. taurus (NC_006853), C. bactrianus (NC_009628), C. lupus familiaris (NC_002008), E. caballus (NC_001640), G. gallus (NC_001323), M. musculus (NC_005089), M. putorius voucher (NC_020638), M. coypus (NC_035866), O. cuniculus (NC_001913), O. aries (NC_001941), N. procyonoides (NC_013700), R. norvegicus (NC_001665), S.
scrofa domesticus (NC_012095), V. vulpes (NC_008434). The sequences in 15MGDB were used in constructing a searchable database with the makeblastdb command from the BLAST+ (v2.7.1) software package [26] with the option '-dbtype nucl -parse_seqids'. Development of the 3MG analysis pipeline The 3MG pipeline was developed using Python 2.7.15 with the following third-party software applications: the pandas module in Python, BBMap (v35.66; https://sourceforge.net/projects/bbmap/), MITOBim (v1.9.1) [27], BLAST+ (v2.7.1) [26], bowtie2 (v2.3.4) [28] and samtools (v1.9) [29]. The source code, sample data and instructions for using a locally installed copy of the 3MG pipeline, together with a Singularity container for running it, can be found at the following link: http://1kmpg.cn/3mg/. Determination of 3MG detection errors using simulation We generated 21 sets of data through simulation. Reads from an M2-S1 sample containing 100% pork were used as the background. Chicken reads were then subsampled from the reads of M2-S6, which contained only chicken, with the seqtk program (v1.3-r106) and the command 'seqtk sample -s100'. The reads extracted from M2-S6 were mixed with those from M2-S1 in the following percentages: 0.01, 0.1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100%. We prepared five replicates of simulated data for each percentage level and used default seeds. The resulting sample sets were then analysed using the 3MG pipeline. We calculated relative detection errors with the following formula, as described previously [8].
$$\text{Relative error} = \frac{\left|\,\text{Number of chicken reads detected} - \text{Number of chicken reads in the sample}\,\right|}{\text{Number of chicken reads in the sample}}$$ Comparison of the reference and assembled mitogenomes We aligned the assembled sequences with their reference sequences for each species using the CLUSTALW2 (v2.0.12) program [30] with the option '-type = dna -output = phylip'. We used these aligned sequences in constructing phylogenetic trees with the maximum likelihood (ML) method implemented in RAxML (v8.2.4) [31]. We calculated the intra-specific and inter-specific distances among mitogenomes using the distmat program from EMBOSS (v6.3.1) [28] with the option '-nucmethod = 0'; corrections for multiple substitutions cannot be made with this method. Finally, we calculated the p-distances among mitogenomes with MEGA (v7) [32]. Detection of other contaminating biological composition The taxon content of reads unmapped to mitogenomes was analysed using the RDP classifier (v2.12) [33]. The unmapped reads were assigned against the COX1 and 16S rRNA databases with an assignment confidence cutoff of 0.8. The 16S rRNA database is part of the RDP Classifier package. The COX1 database was downloaded from https://github.com/terrimporter/CO1Classifier/releases/tag/v3.2 [34]. The results were visualised using MEGAN (v6) [35] with the following LCA parameters: 'minSupportPercent = 0.02, minSupport = 1, minScore = 50.0, maxExpected = 0.01, topPercent = 10.0 and readAssignmentMode = readCount'.
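The relative detection error defined in the Methods above can be computed directly; a minimal sketch with hypothetical read counts:

```python
def relative_error(detected, truth):
    """Relative detection error as defined in the Methods:
    |detected - true| / true."""
    if truth == 0:
        raise ValueError("undefined when the true read count is zero")
    return abs(detected - truth) / truth

# Hypothetical example: 600 chicken reads spiked into a simulated dataset,
# 585 of them detected, giving a relative error of 2.5%.
err = relative_error(585, 600)
```

The same function applies unchanged whether the minor component makes up 0.01% or 100% of the simulated reads.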
Evaluation of the metabarcoding method for the 15 mammalian and avian species To determine whether the mixture containing 15 species can be identified using metabarcoding, we analysed the availability of universal primers and the ability of their amplified products (if applicable) to distinguish the 15 species. For the COX1 gene, no primer matched the sequences from all the species. For instance, the maximum number of matched species was five when the primer pair I-B1 and COI-C04 was used (Table S2). For the 16S rRNA gene, only one primer, 16Sbr-H, matched the sequences of all the species; although the targeted region showed high degrees of variation that would be sufficient for distinguishing the 15 species (Fig. S1), a complete universal primer pair was unavailable. For the 18S rRNA, only the primer Uni18S was found in the sequences of all the species, but the amplified products were highly conserved and could not be used in distinguishing the 15 species (Fig. S2). Previously, the performance of COI metabarcoding and that of shotgun mitogenome sequencing were compared. Shotgun sequencing can provide highly significant correlations between read number and biomass [17]. As a result, we focused on developing the metagenomic approach for the direct biomass estimation of meat samples from the 15 species. Development of 3MG method The 3MG pipeline can be divided into four steps (Fig. 1). The first step is 'extracting mitochondrial reads'. We searched next-generation sequencing (NGS) reads against 15MGDB by using the BLASTN command with the following parameters: -evalue = 1e-5 and -outfmt = 6. We extracted the matched reads using the 'filterbyname.sh' command in the BBMap software package (v35.66). The extracted reads were called 'mitochondrial reads' and used in the subsequent procedures. Flow chart of the 3MG pipeline. The 15 species used for the qualitative analyses and the setup of meat from the two species for the quantitative analyses are shown on the top. The four steps are labeled as S1, S2, S3 and S4.
The results of each step are shown in the black rectangle. The third-party tools are shown on the right side of the corresponding process. The second step is assembling mitogenomes from the mitochondrial reads. The mitogenomes in the public database might have originated from a particular individual or subspecies. Thus, the sequences from the samples might differ from those in the public database because of intra-specific variations. To ensure accurate qualitative and quantitative analyses, we assembled the mitogenomes from the NGS reads using MITOBim (v1.9.1) [27] with the default parameters. The mitogenome sequences downloaded from GenBank were used as references; they are called reference mitogenomes in the subsequent text. The third step is mapping reads to the assembled mitogenomes. We mapped the reads to each species' assembled sequence with bowtie2 (v2.3.4) [28], using default parameters. We then extracted the mapped reads using samtools (v1.9) [29] with the following command: 'samtools view -bF 4'. The fourth step is identifying and counting reads uniquely mapped to the assembled mitogenomes. The mitogenome sequences were highly conserved, so some reads may map to the mitogenomes of multiple species. We calculated the p-distances among these mitogenomes to determine how conserved they were. The p-distances among the 15 mitogenomes ranged from 0.14 to 0.49 (Fig. S3). These numbers indicated a substantial degree of mitogenome sequence conservation. Consequently, some of the mapped reads may have originated from multiple sources. To overcome this problem, we developed a custom Python script to remove non-specific reads. Specifically, we obtained 15 files recording the mapped reads of each species. We compared the mapped reads of the target species with those of the other 14 species and deleted non-specific reads that appeared in multiple files. The proportions of unique reads mapped to the mitogenome of a particular species in all mitochondrial reads were calculated.
When this proportion was greater than 2% (the cutoff of 2% was set according to the results of the 'Determination of detection sensitivity for 3MG methods based on simulated datasets' section), the species was called 'present'. Determination of detection sensitivity for 3MG methods based on simulated datasets We constructed 21 simulated datasets (Table 2, SD01-SD21), each containing 30,000 mitochondrial reads mixed from pork (the major component) and chicken (the minor component), to determine detection sensitivity. The percentages of chicken mitochondrial reads were 0.01, 0.1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% of all the mitochondrial reads in the simulated dataset. We prepared five replicates at each proportion. We analysed the data using the 3MG analysis pipeline. We calculated the relative detection error at each proportion (Table 2). At high proportions, the detected quantities were close to the simulated values. At 2–100%, we detected the minor component with a relative error of < 5%. However, the accuracy was significantly reduced at 1, 0.1 and 0.01%. These results indicated that the method can detect a contaminant at a proportion of 2% with an error rate of < 5%. Table 2 Relative detection errors determined using the simulated dataset Qualitative determination of biological composition with the 3MG method Sequencing results and characterisation We constructed mixed samples containing meat from 15 animals (M1). The animals fall into several categories. For example, pig, cattle, chicken, sheep, duck and rabbit are primarily raised for human consumption. Ferret, nutria, raccoon dog and fox are commonly used in the fur industry. Dogs are companion animals. Camel and horse are used for multiple purposes. Rats and mice cohabit with humans, and their meat can potentially contaminate other meat intended for human consumption.
Adulteration with meat not meant for human consumption has been reported, particularly through the addition of the meat of fur animals to pork or beef or the substitution of pork with horse meat [2]. The motivation is primarily economic gain, as profits can be made when expensive meat is replaced with cheap meat. Some of these adulterations can be culturally offensive, for example, adding pork to food for Muslim consumers [36, 37]. We constructed three M1 samples labeled as 'R1', 'R2' and 'R3', respectively. We obtained 23.45, 24.10 and 28.56 GB of data for the three replicates (Table 3). The percentages of bases having quality scores ≥ 30 were 89.51, 88.97 and 89.97%. Table 3 Summary of sequencing data for samples M1 and M2 Qualitative analysis of M1 sample's biological composition We analysed the NGS data generated from the M1 samples using the 3MG pipeline. In step one, 331,866 (0.43%), 222,702 (0.28%) and 267,495 (0.28%) reads were mapped to the mitogenomes and were extracted as mitochondrial reads (Table 4). In step two, we successfully assembled all 15 mitogenomes from the mitochondrial reads. We constructed a phylogenetic tree using the 15 pairs of assembled and reference mitogenomes (Fig. 2). The alignment of the 15 pairs of mitogenomes is shown in Fig. S4. The reference and assembled mitogenomes for the same species clustered together (left part of Fig. 2). Table 4 Summary of reads mapped to the mitogenomes of the 15 species Phylogenetic analysis of the reference ('R') and assembled ('A') mitogenomes. The intra-specific and inter-specific nucleotide distances of the mitogenomes are shown in the squares to the right of the tree. The intra-specific nucleotide distance was calculated between the reference and assembled mitogenomes for the same species. We calculated the inter-specific nucleotide distances between each of 14 pairs of mitogenomes; each pair consisted of the focal mitogenome and one of the other mitogenomes.
The average inter-specific distances are shown. To compare the intra-specific and inter-specific distances, we calculated the distances, as shown in the right part of Fig. 2. The intra-specific distance was the distance between the assembled and reference mitogenomes for a particular species. By contrast, the inter-specific distance was the average distance between the assembled mitogenome of the focal species and those of the other 14 species. Intra-specific nucleotide distances were much smaller than inter-specific distances. Thus, we assembled the mitogenomes of specific species from the metagenomic data with high accuracy. Our assembled mitogenomes are unlikely to contain chimeric sequences because the inter-specific distances were significantly larger than the intra-specific distances. In step three, we mapped the mitochondrial reads to the 15 assembled mitogenomes. Approximately 10,000–90,000 reads were mapped to each mitogenome (Table 4). However, an average of 52.07% of reads were mapped to multiple species. Thus, using the total number of mapped reads would lead to the overestimation of the meat content of a particular species. For example, 79.77% of the reads mapped to the B. taurus (cattle) mitogenome were non-specific, and 63.70% of the reads mapped to the O. aries (sheep) mitogenome were non-specific. Using the total number of reads to estimate the beef and lamb content would therefore lead to overestimation. Hence, 3MG determines biological composition according to the number of reads uniquely mapped to a particular mitogenome. In step four, we identified reads uniquely mapped to the mitogenome of each species. The proportion of unique reads among all mitochondrial reads was more than 2% for 12 species in at least one replicate sample (Table 4). The mapped read proportions for B. taurus, O. cuniculus and R. norvegicus were approximately 1%.
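The unique-read filtering at the heart of step four can be sketched with simple set operations. The read IDs below are hypothetical; the actual pipeline works on the per-species mapping results produced by bowtie2 and samtools.

```python
# Sketch of step four: count reads that map to exactly one species'
# mitogenome (hypothetical read IDs; the real pipeline compares the
# 15 per-species mapped-read files).
from collections import Counter

def unique_read_proportions(mapped_ids, total_mito_reads):
    """mapped_ids: dict mapping species -> set of mapped read IDs.
    Returns the fraction of all mitochondrial reads that map uniquely
    to each species' mitogenome."""
    hits = Counter(rid for ids in mapped_ids.values() for rid in ids)
    return {
        sp: sum(1 for rid in ids if hits[rid] == 1) / total_mito_reads
        for sp, ids in mapped_ids.items()
    }

mapped = {
    "cattle": {"r1", "r2", "r3"},        # r2, r3 also map to sheep
    "sheep":  {"r2", "r3", "r4", "r5"},  # only r4, r5 are sheep-specific
}
props = unique_read_proportions(mapped, total_mito_reads=10)
```

With these toy inputs, cattle gets a unique-read proportion of 0.1 (only r1) and sheep 0.2 (r4 and r5), illustrating why non-specific reads must be discarded before quantification.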
In summary, through the analysis of the 15 species mix with the 3MG pipeline, 12 of 15 (80%) species were successfully identified with a confidence level of 95%. Validation of 3MG analysis results for M1 samples by using LAMP experiments LAMP is commonly used in detecting the biological composition of meat products. We used LAMP results to evaluate the accuracy of the 3MG results. Unfortunately, LAMP protocols are available for the meat of only five of the 15 species (pig, sheep, cattle, duck and chicken). Thus, only these five species in the M1 samples were tested. The experiments were conducted separately for each target species (Fig. 3). The results confirmed the presence of meat from pig (Fig. 3A), sheep (Fig. 3B), cattle (Fig. 3C), duck (Fig. 3D) and chicken (Fig. 3E) in our experimental samples, consistent with the results obtained from the 3MG method. Composition determination of the M1 samples with the LAMP method. The X-axis represents time, whereas the Y-axis represents relative fluorescence abundance. Each species was tested in three replicates shown in different colors Quantitative determination of biological composition in mixed samples Sequencing results and characteristics To determine the performance of 3MG in estimating the biological composition in biomass, we prepared a series of samples by mixing meat from S. scrofa domesticus (pig) and G. gallus (chicken) in different proportions. We performed DNA extraction, library construction, DNA sequencing and DNA analyses in the same way as for the M1 samples. The sequencing results are summarised in Table 3. We generated an average of 2.95 GB of data with 19,749,625 raw reads for each M2 sample. Approximately 90% of bases had quality scores greater than 30. Quantitative analysis of M2 samples' biological composition The numbers of reads mapped to the pig and chicken mitogenomes are shown in Table S2.
The proportion of reads uniquely mapped to the pig mitogenome was taken as the meat content estimated with 3MG and was plotted against the known meat content (Fig. 4A). Regression analyses showed that the estimated and known pork content had a correlation coefficient of 0.98. Similarly, based on relative read counts, the estimated meat content for chicken was plotted against the known meat content (Fig. 4B). Regression analyses showed that the estimated and known meat content had a correlation coefficient of 0.98. The high correlation coefficients between the estimated and known content suggested that the 3MG method can use the percentage of uniquely mapped reads to quantitatively determine the biological composition of a meat mix. Results of quantitative analysis using the 3MG and LAMP for two species. A and B Quantitative analysis results of the 3MG for a combination of two species. The X-axis shows the mass proportions of pork (A) and chicken (B) in the mix samples. The Y-axis shows the proportions of reads uniquely mapped to pork (A) and chicken (B) mitogenomes from mixed samples. The R-value represents the correlation coefficient between the proportions of uniquely mapped reads and the mass proportions of the samples. C and D Results of quantitative analysis using LAMP for the mixed samples of two species. The Y-axis shows the peak times for detecting pork (C) and chicken (D) components in the mix samples. The R-value represents the correlation coefficient between peak time and the mass proportion in the mixed samples Validation of 3MG analysis results using LAMP experiments We conducted a LAMP experiment to determine the quantity of pork and chicken mixed in different ratios. We then compared the LAMP results with those obtained from the 3MG method. The peak time for detecting each component was taken as the meat content estimated by LAMP and was plotted against the known meat content of pig (Fig. 4C) and chicken (Fig. 4D).
Regression analyses showed that the estimated and known meat content had correlation coefficients of 0.99 (pork) and 0.96 (chicken). Consequently, the 3MG results were consistent with the LAMP results. However, the variations in the LAMP results for chicken were significantly higher than those in the 3MG results. This observation suggested that the 3MG results were more stable than the LAMP results, at least for chicken meat. Estimation of the relative correction factor for DNA–biomass ratio from different species We mixed the meat of 15 species to construct M1 samples in equal mass ratios as described earlier. However, the number of reads mapped to each mitogenome of the 15 species varied significantly (Table 4). This discrepancy was likely due to the differences in mitogenome DNA content among the 15 species at the same meat biomass. As a result, a correction factor was needed when the meat mass was estimated from uniquely mapped read counts for a particular species. Using the number of reads uniquely mapped to the S. scrofa domesticus mitogenome as the baseline, we calculated the relative correction factors for the other species. The correction factors were 1.00 for A. platyrhynchos, 0.77 for C. bactrianus, 1.46 for C. lupus familiaris, 0.59 for E. caballus, 0.65 for G. gallus, 0.32 for M. musculus, 0.40 for M. putorius voucher, 1.24 for M. coypus, 0.43 for N. procyonoides, 0.31 for O. aries and 1.94 for V. vulpes. This set of relative correction factors might correlate with the relative copy numbers of mitogenome in the muscle tissues of each species. They can be used in estimating meat content for different species. Detailed discussions on the ratios of nuclear DNA to mitochondrial DNA and DNA to biomass are provided in the following text. To determine the presence of unexpected biological composition in the samples, we classified the unmapped reads with the RDP classifier and analysed them using MEGAN6. 
The unmapped reads can be divided into four categories: bacteria, Archaea, Eukaryota and 'not assigned' (Fig. 5). Five genera belonging to Eukaryota were annotated: Myocastor, Canis, Sus, Anas and Gallus. These reads may have been extremely diverse and thus were not mapped to the mitogenomes in the 3MG process. Overall, we detected few contaminants from other mammals and bacteria in our mock mix samples. Analyses of reads unused by the 3MG. The phylogram shows the taxa at the genus level at which reads were mapped. Each circle of the tree represents a taxon labeled by its name and the number of reads assigned to it. The size of the circle represents the proportion of reads that were aligned to the taxon but could not be aligned to a lower-level taxon Meat adulteration and contamination can affect consumers' well-being, disrupt market order and offend religious beliefs. Hence, the development of qualitative, quantitative and unbiased methods for detecting the composition of meat products is of great importance. In the present study, we found that the meat composition of 15 species cannot be identified with the metabarcoding approach because of the lack of universal primers or the needed discrimination power. Therefore, we developed a meat mitochondrial metagenomics (3MG) method to determine the composition of the 15 meats most commonly found in food markets. The evaluation of detection sensitivity for the 3MG method based on simulated datasets indicated that the method can detect a contaminating component at a proportion of 2% with an error rate of < 5%. The method successfully identified the presence of 12 of 15 (80%) species at this detection-sensitivity threshold. This result showed that the method can simultaneously detect the presence of multiple species with high sensitivity. It can detect a wide variety of adulterated meat in the market.
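The correlation coefficients reported for the two-species mixes are ordinary Pearson coefficients between uniquely mapped read proportions and known mass proportions. A minimal sketch, using the M2 design's known pork mass proportions paired with hypothetical read proportions:

```python
import math

def pearson_r(x, y):
    """Ordinary Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Known pork mass proportions from the M2 design (10:0 down to 0:10),
# paired with hypothetical uniquely-mapped read proportions.
mass = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
reads = [0.99, 0.81, 0.57, 0.42, 0.18, 0.01]
r = pearson_r(mass, reads)  # close to 1 for these illustrative values
```

A coefficient near 1, as observed here, indicates that the uniquely mapped read proportion tracks the mass proportion nearly linearly across the dilution series.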
In addition, the analyses of the two-species mixed samples revealed correlation coefficients of 0.98 for pork and 0.98 for chicken between the number of uniquely mapped reads and the mass proportion. The 3MG results were more stable than the LAMP results, at least for chicken meat, indicating that the method can use the percentage of uniquely mapped reads to quantitatively determine the biological composition of a meat mix. To the best of our knowledge, this study is the first to demonstrate the usefulness of the mitochondrial metagenomics method in detecting meat composition and estimating biomass. This method has several advantages over methods based on PCR amplification and particular markers. It is a non-targeted approach and does not need to assume the biological composition of samples. Consequently, it is likely to have a lower false-negative detection rate. Given that PCR-based methods require species-specific primers, they often fail to amplify sequences not matched by the primers. Furthermore, the 3MG method is not affected by problems in PCR reactions, such as the generation of multiple PCR bands resulting from non-specific amplification. The detection of multiple components with PCR-based methods requires simultaneous PCR reactions specific to each component, which can be quite expensive. By contrast, the 3MG method can potentially reduce the cost in such cases. The 3MG method may also facilitate the analysis of high-value products, such as medical and health-promoting products. In general, the 3MG method is suitable for non-targeted biomonitoring and requires meat components to be present above specific abundance levels, whereas PCR-based methods are suitable for targeted biomonitoring and can detect biological components at considerably lower abundance levels. We showed that the mitogenome DNAs of the 15 mammalian and avian species represent 0.28–0.43% of the total DNA.
The generation of 1 GB of data costs around US$ 10, and 1 GB of data can produce sufficient mitochondrial reads for determining biological composition qualitatively and quantitatively. Mitogenomes from animals are relatively small and easy to assemble. By December 2020, more than ten thousand animal mitogenomes had been deposited in the NCBI RefSeq database (https://www.ncbi.nlm.nih.gov/genome/browse#!/organelles/). Thus, it is reasonable to expect that all species used in food and medicine will soon have their mitogenomes available. Owing to the rapid drop in sequencing costs, the fast accumulation of mitogenomes and the integrated bioinformatics software tools, we can expect the broad application of the 3MG method in the near term. One problem encountered in this study was that beef was successfully detected using LAMP but was not detected in the mock sample when the 3MG method was used. One explanation is that the cattle mitogenome sequence has a relatively small percentage of unique sequence. Our data showed that only 17.12–23.93% of the reads mapped to the cattle mitogenome were unique to the cattle mitogenome (Table 4). The LAMP primers were specific enough to amplify the cattle sequence successfully. Hence, the proportion of unique regions on a mitogenome is essential for its successful detection with the 3MG method. In addition, rabbit and rat meat were not successfully detected with the 3MG method. One explanation is that their mitochondrial DNA constitutes a low proportion of total cellular DNA. Our data showed that the total number of reads mapped to the mitochondrial genomes of these two species was significantly lower than those mapped to the other species (Table 4). Additional studies are needed to optimise 3MG for the detection of such species in mixed samples. Several improvements can be made to the 3MG method. Firstly, internal controls can be added to the samples for the accurate determination of the amount of mito-DNA for particular animal species.
As described previously [14, 38], the internal controls can be the commonly used metabarcoding markers, particularly COI and 18S. Although the lack of universal primers prevents the use of these markers in metabarcoding analysis, their perfectly matching sequences can serve as internal controls for 3MG analysis. Secondly, correction factors should be estimated for biomass estimation based on read counts. For meat biomonitoring, biomass is more commonly used than read counts. Thus, an appropriate conversion rate from read count to biomass for each meat type is needed. It should be emphasized that the sampling locations may affect the results of biomass estimation. For example, meat from different locations on the legs might have different ratios of fat and fibres, resulting in variations in the DNA extraction rate and the nuclear-to-mitochondrial DNA ratio. In this study, we tried to extract samples with homogeneous compositions intraspecifically and interspecifically to minimize this effect. The sample heterogeneity problem is difficult to solve, not only for the 3MG method but also for other traditional detection methods in general. Therefore, we need to estimate two types of correction factors. The first is the nuclear-to-mitochondrial DNA ratio (the nuclear–mito ratio); the second is the DNA-to-biomass ratio (the DNA–mass ratio). Given that meat might contain different proportions of fat, a high degree of variation in the nuclear–mito ratio and the DNA–mass ratio is expected among species. Thirdly, the current study has focused on meat from 15 mammalian and avian species used in food safety biomonitoring. Meat from many other animal species is commonly consumed but has not been tested in the current study. For instance, fish represents another large group of animal meat that is widely consumed. The 3MG method developed in the current study can, in theory, be applied to fish meat.
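The two correction factors proposed above (the nuclear–mito ratio and the DNA–mass ratio) can be combined into a simple conversion from per-species mitochondrial read counts to estimated biomass shares. The sketch below is a hedged illustration of that idea only: the function name `biomass_shares` and all numeric factors are hypothetical, not values measured in this study.

```python
# Convert per-species mitochondrial read counts into biomass shares using
# two per-species correction factors, as proposed in the text:
#   reads -> total-DNA proxy (multiply by the nuclear-to-mito DNA ratio)
#   total DNA -> biomass     (divide by the DNA-per-mass ratio)
# All names and numbers below are illustrative assumptions.

def biomass_shares(read_counts, nuc_mito_ratio, dna_mass_ratio):
    raw = {
        sp: read_counts[sp] * nuc_mito_ratio[sp] / dna_mass_ratio[sp]
        for sp in read_counts
    }
    total = sum(raw.values())
    return {sp: mass / total for sp, mass in raw.items()}

reads = {"pork": 12000, "chicken": 8000}      # mito reads per species
nm_ratio = {"pork": 1.0, "chicken": 1.2}      # assumed nuclear-mito ratios
dm_ratio = {"pork": 1.0, "chicken": 1.0}      # assumed DNA-mass ratios
print(biomass_shares(reads, nm_ratio, dm_ratio))
```

With all factors equal the shares reduce to raw read proportions; species-specific factors shift the estimate, which is why the text expects a high degree of variation among species.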
Parameter optimisation and validation of 3MG for the assessment of fish meat are interesting subjects for future work. Lastly, we should build an extensive reference database of unique mitogenomic DNA sequences from different varieties of related animal species, given that many animals, such as chickens, pigs, cattle and sheep, have many local varieties. Building an extensive database containing variety-specific mitochondrial genome sequences will facilitate the identification of the sources of particular animal species. The current research has laid the foundation for developing accurate and standard procedures for determining the composition of meat qualitatively and quantitatively. The method will be valuable for the bioassessment and biomonitoring of meat products worldwide and will contribute significantly to meat safety management. The next-generation sequencing data for this research are available in the SRA database (https://www.ncbi.nlm.nih.gov/sra). The BioSample accession number for the raw sequence reads of the mixture of 15 commonly consumed animal meats is SAMN11812028. The raw data can be downloaded under SRA accession number SRR9107560. Cawthorn D-M, Steinman HA, Hoffman LC. A high incidence of species substitution and mislabelling detected in meat products sold in South Africa. Food Control. 2013;32(2):440–9. Premanandh J. Horse meat scandal – a wake-up call for regulatory authorities. Food Control. 2013;34(2):568–9. Ayaz Y, Ayaz ND, Erol I. Detection of species in meat and meat products using enzyme-linked immunosorbent assay. J Muscle Foods. 2006;17(2):214–20. Zhang W, Xue J. Economically motivated food fraud and adulteration in China: an analysis based on 1553 media reports. Food Control. 2016;67:192–8. Peng GJ, Chang MH, Fang M, Liao CD, Tsai CF, Tseng SH, et al. Incidents of major food adulteration in Taiwan between 2011 and 2015. Food Control. 2016;72:145–52. Bik HM, Porazinska DL, Creer S, Caporaso JG, Knight R, Thomas WK.
Sequencing our way towards understanding global eukaryotic biodiversity. Trends Ecol Evol. 2012;27(4):233–43. Carvalho DC, Palhares RM, Drummond MG, Gadanho M. Food metagenomics: next generation sequencing identifies species mixtures and mislabeling within highly processed cod products. Food Control. 2017;80:183–6. Ripp F, Krombholz CF, Liu Y, Weber M, Schafer A, Schmidt B, et al. All-food-Seq (AFS): a quantifiable screen for species in biological samples by deep DNA sequencing. BMC Genomics. 2014;15:639. Crampton-Platt A, Yu DW, Zhou X, Vogler AP. Mitochondrial metagenomics: letting the genes out of the bottle. GigaScience. 2016;5:15. Rennstam Rubbmark O, Sint D, Horngacher N, Traugott M. A broadly applicable COI primer pair and an efficient single-tube amplicon library preparation protocol for metabarcoding. Ecol Evol. 2018;8(24):12335–50. Deagle BE, Jarman SN, Coissac E, Pompanon F, Taberlet P. DNA metabarcoding and the cytochrome c oxidase subunit I marker: not a perfect match. Biol Lett. 2014;10(9):20140562. Polz MF, Cavanaugh CM. Bias in template-to-product ratios in multitemplate PCR. Appl Environ Microbiol. 1998;64(10):3724–30. Tang M, Tan M, Meng G, Yang S, Su X, Liu S, et al. Multiplex sequencing of pooled mitochondrial genomes-a crucial step toward biodiversity analysis using mito-metagenomics. Nucleic Acids Res. 2014;42(22):e166. Ji Y, Huotari T, Roslin T, Schmidt NM, Wang J, Yu DW, et al. SPIKEPIPE: a metagenomic pipeline for the accurate quantification of eukaryotic species occurrences and intraspecific abundance change using DNA barcodes or mitogenomes. Mol Ecol Resour. 2020;20(1):256–67. Bista I, Carvalho GR, Tang M, Walsh K, Zhou X, Hajibabaei M, et al. Performance of amplicon and shotgun sequencing for accurate biomass estimation in invertebrate community samples. Mol Ecol Resour. 2018;18(5):1020–34. Choo LQ, Crampton-Platt A, Vogler AP. 
Shotgun mitogenomics across body size classes in a local assemblage of tropical Diptera: phylogeny, species diversity and mitochondrial abundance spectrum. Mol Ecol. 2017;26(19):5086–98. Crampton-Platt A, Timmermans MJ, Gimmel ML, Kutty SN, Cockerill TD, Vun Khen C, et al. Soup to tree: the phylogeny of beetles inferred by mitochondrial metagenomics of a Bornean rainforest sample. Mol Biol Evol. 2015;32(9):2302–16. Gillett CP, Crampton-Platt A, Timmermans MJ, Jordal BH, Emerson BC, Vogler AP. Bulk de novo mitogenome assembly from pooled total DNA elucidates the phylogeny of weevils (Coleoptera: Curculionoidea). Mol Biol Evol. 2014;31(8):2223–37. Gómez-Rodríguez C, Crampton-Platt A, Timmermans MJTN, Baselga A, Vogler AP. Validating the power of mitochondrial metagenomics for community ecology and phylogenetics of complex assemblages. Methods Ecol Evol. 2015;6(8):883–94. Tang M, Hardman CJ, Ji Y, Meng G, Liu S, Tan M, et al. High-throughput monitoring of wild bee diversity and abundance via mitogenomics. Methods Ecol Evol. 2015;6(9):1034–43. Liu S, Wang X, Xie L, Tan M, Li Z, Su X, et al. Mitochondrial capture enriches mito-DNA 100 fold, enabling PCR-free mitogenomics biodiversity analysis. Mol Ecol Resour. 2016;16(2):470–9. Xu S, Kong F, Miao L, Lin S. Establishment and application of fluorescent loop-mediated isothermal amplification for detecting chicken or duck derived ingredients. Chin J Anim Quarantine. 2018;35(2):77–81. Xu S, Kong F, Miao L, Cai Z, Lin Z, Zhao R. Establishment of the loop-mediated isothermal amplification for the detection of bovine-derived materials. China Anim Health Inspect. 2017;33(12):94–9. Xu S, Kong F, Miao L, Lin S, Lin Z. Establishment of loop-mediated isothermal amplification for detection of mutton-derived ingredients based on isothermal amplification platform. J Econ Anim. 2016;20(4):200–6. Zhou J, Bruns MA, Tiedje JM. DNA recovery from soils of diverse composition. Appl Environ Microbiol. 1996;62(2):316–22. 
Camacho C, Coulouris G, Avagyan V, Ma N, Papadopoulos J, Bealer K, et al. BLAST+: architecture and applications. BMC Bioinformatics. 2009;10:421. Hahn C, Bachmann L, Chevreux B. Reconstructing mitochondrial genomes directly from genomic next-generation sequencing reads--a baiting and iterative mapping approach. Nucleic Acids Res. 2013;41(13):e129. Rice P, Longden I, Bleasby A. EMBOSS: the European molecular biology open software suite. Trends Genet. 2000;16(6):276–7. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map (SAM) format and SAMtools. Bioinformatics. 2009;25(1 Pt 2):1653–4. Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H, et al. Clustal W and Clustal X version 2.0. Bioinformatics. 2007;23(21):2947–8. Stamatakis A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014;30(9):1312–3. Kumar S, Stecher G, Tamura K. MEGA7: molecular evolutionary genetics analysis version 7.0 for bigger datasets. Mol Biol Evol. 2016;33(7):1870–4. Wang Q, Garrity GM, Tiedje JM, Cole JR. Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl Environ Microbiol. 2007;73(16):5261–7. Porter TM, Hajibabaei M. Automated high throughput animal CO1 metabarcode classification. Sci Rep. 2018;8(1):4226. Huson DH, Auch AF, Qi J, Schuster SC. MEGAN analysis of metagenomic data. Genome Res. 2007;17(3):377–86. Aida AA, Che Man YB, Wong CMVL, Raha AR, Son R. Analysis of raw meats and fats of pigs using polymerase chain reaction for Halal authentication. Meat Sci. 2005;69(1):47–52. Nakyinsige K, Man YBC, Sazili AQ. Halal authenticity issues in meat and meat products. Meat Sci. 2012;91(3):207–14. Harrison JG, John Calder W, Shuman B, Alex Buerkle C. The quest for absolute abundance: the use of internal standards for DNA-based community ecology. Mol Ecol Resour. 2021;21(1):30–43. 
We would like to thank the following scientific institutions for providing meat samples: Jiangsu Entry-exit Inspection and Quarantine Bureau, Nanjing City, Jiangsu Province, China; the Breeding Farm of Camel, Alxa League, Inner Mongolia, China; Xinjiang Entry-exit Inspection and Quarantine Bureau, Urumqi City, Xinjiang, China; Nanjing Medical University, Nanjing City, Jiangsu Province, China; the breeding farm of nutria, Chongqing City, China; and the breeding farm of fox, Suihua City, Heilongjiang Province, China. This work was supported by funds from the CAMS Innovation Fund for Medical Sciences (CIFMS) [2021-I2M-1-022], the National Mega-Project for Innovative Drugs of China [2019ZX09735-002], the National Science & Technology Fundamental Resources Investigation Program of China [2018FY100705], the National Natural Science Foundation of China [81872966], the Guiding Projects of the Bureau of Science and Technology of the Fujian Province (2015Y0031), the Xiamen Science and Technology Program project (3502Z20154079), the Science and Technology Program Project of the General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China (2014IK089, 2014IK234) and the Fifth Phase of the '333 Project' in Jiangsu Province (No. BRA2016498). The funders were not involved in the study design, data collection and analysis, decision to publish or manuscript preparation.
Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences & Peking Union Medical College, 100193, Beijing, PR China: Mei Jiang & Chang Liu. Technology Center of Xiamen Entry-exit Inspection and Quarantine Bureau, Xiamen, Fujian, 361026, PR China: Shu-Fei Xu & Fan-De Kong. Technology Center of Jiangsu Entry-exit Inspection and Quarantine Bureau, Nanjing, Jiangsu, 210009, PR China: Tai-Shan Tang. Technology Center of Henan Entry-exit Inspection and Quarantine Bureau, Zhengzhou, Henan, 450003, PR China: Li Miao. Technology Center of Zhuhai Entry-exit Inspection and Quarantine Bureau, Zhuhai, Guangdong, 519000, PR China: Bao-Zheng Luo. College of Agriculture, Fujian Agriculture and Forestry University, Fuzhou, Fujian Province, 350002, PR China: Yang Ni. CL and FDK conceived the study; MJ extracted DNA for next-generation sequencing and performed data analysis; SFX, LM and BZL conducted LAMP validation; TST collected the meat materials; YN helped in genome assembly; MJ and CL wrote the paper. All authors have read and agreed to the contents of the manuscript. Correspondence to Fan-De Kong or Chang Liu. The study including meat samples complies with relevant institutional, national and international guidelines and legislation. No specific permits were required for meat collection. Additional file 1: Table S1. The primer sequences used for the LAMP experiments. Table S2. Analysis of universal primers for COX1, 16S rRNA, and 18S rRNA genes. Table S3. Summary of reads mapped to the mitogenomes of S. scrofa domesticus and G. gallus. Fig. S1. The distribution of universal primers on 16S rRNA sequences. Fig. S2. The distribution of universal primers on 18S rRNA sequences. Fig. S3. The pairwise p-distance of the 15 mitogenome sequences. Fig. S4. Alignment of reassembled sequences of mitogenomes and those downloaded from GenBank for 15 species.
The prefixes "A" and "R" represent the assembled and reference mitogenomes, respectively. Jiang, M., Xu, SF., Tang, TS. et al. Development and evaluation of a meat mitochondrial metagenomic (3MG) method for composition determination of meat from fifteen mammalian and avian species. BMC Genomics 23, 36 (2022). https://doi.org/10.1186/s12864-021-08263-0 Keywords: Composition determination; Biomass estimation; Meat mix; Mitogenome; Mitochondrial metagenomics; Analysis pipeline
Hölder continuity of weak solution to a nonlinear problem with non-standard growth conditions Zhong Tan1, Jianfeng Zhou1 & Wenxuan Zheng1,2 We study the Hölder continuity of the weak solution u to an equation arising in the stationary motion of electrorheological fluids. To this end, we first obtain higher integrability of Du in our case by establishing a reverse Hölder inequality. Next, based on the result of locally higher integrability of Du and a difference quotient argument, we derive a Nikolskii type inequality; then, in view of the fractional Sobolev embedding theorem and a bootstrap argument, we obtain the main result. Here, the analysis and the existence theory of weak solutions to our equation are similar to those established in the literature for \(\frac{3d}{d+2}\leq p_{\infty}\leq p(x)\leq p_{0}<\infty\); in this paper we confine ourselves to \(p(x)\in(\frac{3d}{d+2},2)\) and space dimension \(d=2,3\). Let \(\Omega\subset\mathbb{R}^{d}\ (d=2,3)\) be a bounded Lipschitz domain. This paper deals with a nonlinear problem (1.6) which arises in the steady motion of a special incompressible non-Newtonian fluid: electrorheological fluids, an example of smart fluids whose viscosity changes rapidly when an electric field is applied. In the field of mechatronics, this fluid is actively being researched, and numerous studies on this fluid have been performed in various engineering applications. Such materials are also intensively investigated in the mathematical community as non-Newtonian fluids [9, 10, 12–14, 34].
Note that one of the first mathematical investigations of non-Newtonian models was carried out by Ladyzhenskaya in 1966 [26–28]; the author considered the modified Navier–Stokes equations $$\begin{aligned} \textstyle\begin{cases} u_{t}-\operatorname{ div} a(Du)+D\phi=-\operatorname{ div}(u\otimes u)+f & \text{in } \Omega , \\ \operatorname{ div} u=0&\text{in } \Omega, \end{cases}\displaystyle \end{aligned}$$ where \(u:\Omega\longrightarrow\mathbb{R}^{d}\) and \(\phi:\Omega \longrightarrow\mathbb{R}\) are the unknown velocity and pressure, respectively, and \(f:\Omega\longrightarrow\mathbb{R}^{d}\) is a given density of the bulk force. Du denotes the symmetric part of the velocity gradient ∇u, namely \(Du=\frac{1}{2}(\nabla u+(\nabla u)^{T})\), and \(a:\mathbb{R}^{d\times d}\longrightarrow\mathbb{R}^{d\times d}\) depends in a nonlinear way on Du. We note that there is abundant literature on the power-law model $$ a(Du)=\nu_{0}\bigl(\nu_{1}+ \vert Du \vert ^{2}\bigr)^{\frac{q-2}{2}}Du+\nu_{2}Du, $$ with \(\nu_{0}>0, \nu_{1}, \nu_{2}\geq0\), where \(q>1\) is a positive constant. For example, the existence of measure-valued solutions to (1.1)–(1.2) was shown for \(q>\frac{2d}{d+2}\) in [29, 36], and for \(q>\frac{2d}{d+2}\) the existence of a weak solution has been studied in [6, 15, 30–32]. In [46], Wolf constructed a weak solution \(u\in L^{q}(0,T;V_{q}(\Omega))\cap C_{w}(0,T;L^{2}(\Omega))\) to (1.1)–(1.2) for exponents \(q>2\frac{d+1}{d+2}\). In 2014, Bae and Jin [5] studied the local in time existence of a weak solution to (1.1)–(1.2) for \(\frac{3d}{d+2}< q<2\) when \(d=2,3\), and the global in time existence of a weak solution for \(q\geq\frac{11}{5}\) when \(d=3\). When the positive constant q is replaced by a variable exponent \(p(x)\), the model (1.2) becomes a variable exponent power-law model.
According to the model proposed by Rajagopal and Růžička [38, 41], the system of electrorheological fluids reads $$ \textstyle\begin{cases} \operatorname{ div }E=0, \qquad \operatorname{ curl }E=0 & \text{in } \Omega, \\ E\cdot n=E_{0}\cdot n& \text{on } \partial\Omega,\\ u_{t}-\operatorname{ div} S(Du,E)+u\cdot\nabla u+\nabla\phi=f+\chi^{E}E\cdot \nabla E & \text{in } Q_{T}, \\ \operatorname{ div} u=0,\qquad u|_{t=0}=u_{0}& \text{in } Q_{T},\\ u=0& \text{on } \partial\Omega, \end{cases} $$ where \(f,E_{0},u_{0}\) are given, \(Q_{T}=[0,T]\times\Omega\), \(E:Q_{T}\longrightarrow\mathbb{R}^{d}\) is the electric field, \(u:Q_{T}\longrightarrow\mathbb{R}^{d}\) is the velocity, \(\phi :Q_{T}\longrightarrow\mathbb{R}\) is the pressure, \(f:Q_{T}\longrightarrow \mathbb{R}^{d}\) is the mechanical force and \(\chi^{E}\) is the positive constant dielectric susceptibility. \(S(D,E):\mathbb{R}^{d\times d}_{\mathrm{sym}}\times\mathbb{R}^{d}\longrightarrow\mathbb{R}^{d\times d}_{\mathrm{sym}}\) denotes the stress tensor, which satisfies the non-standard growth conditions $$\begin{aligned} S(Du,E)= {}& \alpha_{21}\bigl(\bigl(1+ \vert Du \vert ^{2}\bigr)^{\frac{p( \vert E \vert ^{2})-1}{2}}-1\bigr)E\otimes E \\ &{}+\bigl(\alpha_{31}+\alpha_{33} \vert E \vert ^{2}\bigr) \bigl(1+ \vert Du \vert ^{2} \bigr)^{\frac{p( \vert E \vert ^{2})-2}{2}}Du \\ &{} +\alpha_{51}\bigl(1+ \vert Du \vert ^{2} \bigr)^{\frac{p( \vert E \vert ^{2})-2}{2}}\bigl((Du)E\otimes E+E\otimes(Du)E\bigr). \end{aligned}$$ Here \(\alpha_{ij}\) are material constants such that $$ \alpha_{31}>0, \qquad \alpha_{33}>0,\qquad \alpha_{33}+ \frac{4}{3}\alpha_{51}>0, $$ and \(p=p(|E|^{2})>1\) is continuous. We shall note that the system (1.4) separates into the quasi-static Maxwell equations (1.4)1–(1.4)2 (cf. [25]) and the equations of motion and conservation of mass (1.4)3–(1.4)5, where E can be viewed as a parameter.
Note that the higher differentiability of weak solutions to (1.3)–(1.4) was obtained in [40, 41]; this was the first regularity result for the model of electrorheological fluids proposed in [41], and in any case the first in a pointwise sense. As a further step, the Hausdorff dimension of the singular set \(\Omega \backslash\Omega_{0}\) was studied in [3]. Related regularity results in the stationary case can also be found in [1, 2, 10, 45] and the references therein. For the non-stationary case, one can refer to [4, 41]. In this paper, we are interested in the (interior) regularity properties of weak solutions to the stationary case of (1.3)–(1.4): $$ \operatorname{ div} u=0, \qquad {-}\operatorname{ div} S(Du,E)+u\cdot\nabla u+\nabla\phi =f+\chi^{E}E\cdot\nabla E. $$ The issue of the regularity of solutions to (1.5) has been addressed in [41], where the author proves the existence of a \(W^{2,2}\) solution to (1.5). Here we analyze the system arising from (1.5) as $$ \textstyle\begin{cases} -\operatorname{ div} S(Du)+u\cdot\nabla u+\nabla\phi=f& \text{in } \Omega, \\ \operatorname{ div} u=0& \text{in } \Omega,\\ u=0& \text{on } \partial\Omega, \end{cases} $$ where \(f:\Omega\longrightarrow\mathbb{R}^{d}\), and $$ S(Du)=\bigl(\mu_{0}+ \vert Du \vert ^{2} \bigr)^{\frac{p(x)-2}{2}}Du, $$ with \(\mu_{0}> 0\).
\(p(\cdot):\Omega\longrightarrow[1,\infty) \) is a given Hölder (log-Hölder) continuous function that satisfies $$ \frac{ 3d}{d+2}< p_{\infty}\leq p(x)\leq p_{0}< 2, \qquad \bigl\vert p(x)-p(y) \bigr\vert \leq \omega\bigl( \vert x-y \vert \bigr)\leq c_{0} \vert x-y \vert ^{2\theta_{1}}, $$ for all \(x,y\in\bar{\Omega}\), where \(c_{0}\geq1\) is a constant, \(\theta _{1}=\frac{A_{d}(d+2)-3d}{2A_{d}}\in(0,1)\), \(A_{d}\) is a constant defined in (3.29), \(p_{\infty}:=\min_{x\in\bar{\Omega}}p(x)\) and \(p_{0}:=\max_{x\in\bar{\Omega}}p(x)\), and \(\omega:\mathbb {R}^{+}\longrightarrow\mathbb{R}^{+}\) is the modulus of continuity of \(p(\cdot)\), which satisfies $$ \omega(6R)< 1, \qquad\omega(R)\log\frac{1}{R}\leq L, $$ for all \(0< R\leq1\), where \(L>0\) is a constant. In this paper we assume \(\mu_{0}=1\); then, from the definition of \(S(Du)\), $$ \bigl\vert S(Du) \bigr\vert \leq\bigl(1+ \vert Du \vert ^{2}\bigr)^{\frac{p(x)-1}{2}}. $$ The purpose of this paper is to study the Hölder continuity of a weak solution u to (1.6). To this end, various higher integrability results are important to overcome the lack of standard growth conditions of the system. Thus, we first improve the power of integrability of \(D u\in L^{p(x)}_{\mathrm{loc}}\), \(p(x)\in(1,2)\), by establishing a reverse Hölder inequality; at this point, using the Gehring Lemma 2.4, we can deduce that \(Du\in L^{p(x)(1+\delta_{1})}\) for some positive constant \(\delta_{1}>0\). Next, by a difference quotient argument, we proceed to show the fractional differentiability of Du. For this purpose, we construct a Nikolskii type inequality (3.38), from which, by the fractional order Sobolev embedding theorem and a standard bootstrap argument, we obtain \(Du\in L^{\eta}\) with \(\eta\geq d\).
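The growth bound (1.10) follows from the elementary estimate \(|\xi|\leq(1+|\xi|^{2})^{1/2}\). As a quick numerical spot check (scalar argument and a fixed constant exponent, both chosen here purely for illustration, not a proof):

```python
import random

# Spot check of |S(x)| = (1 + x^2)^((p-2)/2) |x| <= (1 + x^2)^((p-1)/2)
# for a fixed exponent p in (1, 2); this follows from |x| <= (1 + x^2)^(1/2).
p = 1.9  # illustrative constant exponent

random.seed(1)
for _ in range(10000):
    x = random.uniform(-100, 100)
    lhs = (1 + x * x) ** ((p - 2) / 2) * abs(x)
    rhs = (1 + x * x) ** ((p - 1) / 2)
    assert lhs <= rhs + 1e-12

print("growth bound verified on all samples")
```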
Note that the self-improving property of a class of elliptic systems was first observed by Elcrat and Meyers in [33] (see also [20] and [43]); their argument is based on the reverse Hölder inequality and a modification of the Gehring lemma. Finally, by the Sobolev embedding theorem, we derive the Hölder continuity of u. Throughout the paper, the key point is that \(p(\cdot)\) satisfies the log-Hölder conditions (1.8)–(1.9), which imply that $$ \frac{1}{r}\sim\frac{1}{r^{p_{2}/p_{1}}}\quad \text{and}\quad \frac {1}{r}\sim \frac{1}{r^{p_{2}^{2}/p_{1}^{2}}}, $$ for all \(r\in(0,1]\) when \(p_{1}:=\inf_{x\in B_{3r}}p(x)\), \(p_{2}:=\sup_{x\in B_{3r}}p(x)\). Moreover, when \(p(x)\) satisfies the log-Hölder continuity conditions, one can use the Korn inequality in the variable exponent case (Lemma 2.3), which is the main tool to prove the local higher integrability of ∇u in terms of Du. We also observe that, for the sake of brevity and in order to highlight the main ideas, we confine ourselves to the homogeneous case of (1.6). For the non-homogeneous case (\(f\in W^{-1,p'(x)}(\Omega )\)), there are some technical difficulties in the proof of the higher integrability of Du (Lemma 3.1), and we will investigate it in future work. The result of this paper reads as follows. Suppose \(f=0\). Let \(\Omega\subset\mathbb{R}^{d}\) \((d=2,3)\) be a bounded domain with Lipschitz boundary ∂Ω, let \(p(\cdot ):\Omega\longrightarrow(\frac{3d}{d+2},2)\) be log-Hölder continuous with \(\frac{3d}{d+2}< p_{\infty }\leq p(x)\leq p_{0}<2\) satisfying (1.8)–(1.9), and let \(S(Du)\) satisfy (1.10). Let \((u,\phi)\in(V_{p(x)},L_{0}^{p'(x)}(\Omega))\) be a solution of (1.6)–(1.7). Then $$ u\in C^{\alpha}(\Omega) \quad\textit{for some } \alpha\in(0,1). $$ The rest of the paper is organized as follows. In Sect.
2, we present some notions of variable exponent spaces, the definition of a weak solution to (1.6) and the properties of difference quotients with variable exponents, and we formulate some lemmas which will be used later. In Sect. 3, we first prove the locally higher integrability of Du (Lemma 3.1). Next, by the known result of the difference quotient argument and the log-Hölder continuity of \(p(\cdot)\), we derive the fractional differentiability of Du; then, by the fractional Sobolev space embedding theorem and a standard bootstrap argument, we obtain the higher integrability of Du (with a power of integrability bigger than that in Lemma 3.1). Finally, we prove the main result, Theorem 1.1. In the present paper we often write p or \(p(\cdot)\) instead of \(p(x)\) if there is no danger of confusion, and the exponent q denotes a constant. c denotes a generic constant which may vary between estimates. If the dependence needs to be stressed explicitly, notations such as \(c', c_{0}, c_{1}, c(k_{0})\) will be used. \(A\sim B\) means that there exist constants \(c_{1}\) and \(c_{2}\) such that \(c_{1}B\leq A\leq c_{2}B\). \(B_{r}(x_{0}):=\{x:\operatorname{dist}(x,x_{0})< r\}\), and we denote the average of u on \(B_{r}\) by \((u)_{r}:=(u)_{B_{r}}=\frac{1}{|B_{r}|}\int_{B_{r}}u\,dx\). We recall in what follows some definitions and basic properties of the generalized Lebesgue–Sobolev spaces \(L^{p(x)}(\Omega)\) and \(W^{1,p(x)}(\Omega)\) (for more details one can refer to [8, 11, 16–18, 22–24, 37, 39, 44] and the references therein). Let \(P(\Omega)\) be the set of Lebesgue measurable functions \(p(\cdot):\Omega\longrightarrow[1,\infty]\), where \(\Omega\subset\mathbb{R}^{d}\) \((d\geq2)\) is nonempty.
Now, for any \(p(x)\in P(\Omega)\), let us introduce the spaces used in this paper, $$\begin{aligned} &V_{p(x)}:= \bigl\{ u: u\in W^{1,p(x)}(\Omega), \operatorname{ div} u=0 \bigr\} , \\ &L_{0}^{p(x)}(\Omega):= \biggl\{ u: u\in L^{p(x)}( \Omega), \int _{\Omega }u\,dx=0 \biggr\} . \end{aligned}$$ Next, let us introduce the embedding properties of the generalized Lebesgue space. Firstly, we know that $$ L^{p(x)}(\Omega) \hookrightarrow L^{q(x)}(\Omega), $$ if and only if $$ q(x) \leq p(x) \quad\text{a.e. in $\Omega$} . $$ Moreover, if \(q\in P(\Omega)\) and \(q(x)< p^{*}(x)\) for all \(x\in \bar {\Omega}\), then the embedding \(W^{1,p(x)}\hookrightarrow L^{q(x)}(\Omega )\) is compact and continuous, where \(p^{*}(x)=dp(x)/(d-p(x))\) if \(p(x) < d\) or \(p^{*}(x)=+\infty\) if \(p(x)=d\). In what follows we denote by \(L^{p'(x)}(\Omega)\) the conjugate space of \(L^{p(x)}(\Omega)\), where \(1/p(x)+1/p'(x)=1\); then for all \(p(x)\in P(\Omega),u\in L^{p(x)}(\Omega),v\in L^{p'(x)}(\Omega)\) we have $$ \int_{\Omega} \bigl\vert u(x)v(x) \bigr\vert \,dx\leq2 \bigl\vert u(x)\bigr\vert _{L^{p(x)}} \bigl\vert v(x)\bigr\vert _{L^{p'(x)}}. $$ From the definition of the variable exponent Lebesgue space above, we now introduce a basic property of \(L^{p(x)}(\Omega)\). Let \(p(x)\in P(\Omega)\) satisfy \(1\leq t_{1}\leq p(x)\leq t_{2}<\infty\); then for every \(u\in L^{p(x)}(\Omega)\) $$ \min \bigl\{ \Vert u \Vert _{L^{p(x)}(\Omega)}^{t_{1}}, \Vert u \Vert _{L^{p(x)}(\Omega )}^{t_{2}} \bigr\} \leq \int_{\Omega} \vert u \vert ^{p(x)}\,dx\leq\max \bigl\{ \Vert u \Vert _{L^{p(x)}(\Omega)}^{t_{1}}, \Vert u \Vert _{L^{p(x)}(\Omega)}^{t_{2}} \bigr\} . $$ A proof can be retrieved, e.g., from Lemma 3.2.5 in [11]. For convenience, we may write inequality (2.2) as $$ \Vert u \Vert _{L^{p(x)}(\Omega)}^{s_{1}}\leq \int_{\Omega} \vert u \vert ^{p(x)}\,dx\leq \Vert u \Vert _{L^{p(x)}(\Omega)}^{s_{2}}, $$ where \(s_{1},s_{2}\) equal \(t_{1}\) or \(t_{2}\).
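Lemma 2.2 can be sanity-checked numerically. In the sketch below, the Luxemburg norm \(\|u\|_{L^{p(x)}}=\inf\{\lambda>0:\int_{\Omega}|u/\lambda|^{p(x)}\,dx\leq1\}\) is approximated by bisection on a discretised interval and compared with the modular \(\int_{\Omega}|u|^{p(x)}\,dx\); the grid, the exponent function and the sample u are all illustrative choices, not objects from the paper.

```python
import math

# Discretised domain [0, 1] with a variable exponent p(x) and a sample u(x).
N = 1000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]
p = [1.5 + 0.4 * x for x in xs]          # p(x) in [1.5, 1.9]: t1 = 1.5, t2 = 1.9
u = [2.0 + math.sin(6 * x) for x in xs]  # some positive sample function

def modular(scale=1.0):
    # Approximates the integral of |u(x)/scale|^p(x) over [0, 1].
    return sum(abs(ui / scale) ** pi for ui, pi in zip(u, p)) * h

# Luxemburg norm: the smallest lambda with modular(lambda) <= 1, by bisection
# (modular is strictly decreasing in the scaling parameter).
lo, hi = 1e-8, 1e8
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if modular(mid) > 1.0:
        lo = mid
    else:
        hi = mid
norm = hi

rho = modular()                 # the modular of u itself
t1, t2 = min(p), max(p)
lower = min(norm ** t1, norm ** t2)
upper = max(norm ** t1, norm ** t2)
assert lower <= rho <= upper    # the two-sided bound of Lemma 2.2
print(f"norm = {norm:.4f}, modular = {rho:.4f}")
```

The same sandwich holds with the roles of min and max swapped depending on whether the norm exceeds 1, which is exactly why the compact notation with \(s_{1},s_{2}\) is convenient.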
The following conclusion from fluid dynamics ensures local bounds of ∇u in terms of Du on the scale of \(L^{q}\) spaces. (Korn inequality) Let \(1<\gamma_{1}\leq q\leq\gamma_{2}\) and assume that \(u\in L^{q}(B_{r}(x_{0}),\mathbb{R}^{d})\) satisfies \(Du\in L^{q}(B_{r}(x_{0}),\mathbb {R}^{d\times d}_{\mathrm{sym}})\). Then \(\nabla u\in L^{q}(B_{r}(x_{0}),\mathbb {R}^{d\times d})\) and for a constant \(c=c(d,\gamma_{1},\gamma_{2})\) we have $$ \Vert \nabla u \Vert _{L^{q}(B_{r}(x_{0}))}\leq c \biggl( \Vert D u \Vert _{L^{q}(B_{r}(x_{0}))}+\frac{1}{r} \Vert u \Vert _{L^{q}(B_{r}(x_{0}))} \biggr). $$ Additionally, if \(u=0\) on \(\partial B_{r}(x_{0})\), then $$ \Vert \nabla u \Vert _{L^{q}(B_{r}(x_{0}))}\leq c \Vert D u \Vert _{L^{q}(B_{r}(x_{0}))}. $$ Its proof may be found, e.g., in [35]. In addition, we shall use the Korn inequality in the variable exponent case, and we formulate it in the form we need (cf. Theorem 14.3.23 in [11]). Let \(B_{r}\subset\mathbb{R}^{d}\) be a bounded ball, let \(p(x)\) satisfy the log-Hölder conditions and \(1< p_{\infty}\leq p(x)\leq p_{0}<\infty \). Then $$\begin{aligned} \begin{aligned} & \bigl\Vert \nabla u-(\nabla u)_{r} \bigr\Vert _{L^{p(x)}(B_{r})} \leq c \bigl\Vert D u-(D u)_{r} \bigr\Vert _{L^{p(x)}(B_{r})}, \\ & \Vert \nabla u \Vert _{L^{p(x)}(B_{r})}\leq c \bigl\Vert D u-(D u)_{r} \bigr\Vert _{L^{p(x)}(B_{r})}+\frac{c}{r} \bigl\Vert u-(u)_{r} \bigr\Vert _{L^{p(x)}(B_{r})}, \end{aligned} \end{aligned}$$ for all \(u\in W^{1,p(x)}(B_{r})\). Next, we introduce the Gehring lemma in a version formulated by Giaquinta and Giusti (see, e.g., Chapter V, Proposition 1.1 in [20] or Theorem 6.6 in [21]). Let \(\Omega\subset \mathbb{R}^{d}\), \(0< m<1\), and let \(f\in L^{1}_{\mathrm{loc}}(\Omega)\), \(g\in L^{\sigma}_{\mathrm{loc}}(\Omega)\) for some \(\sigma>1\) be two nonnegative functions such that for any ball \(B_{\rho}\) with \(B_{3\rho}\subset \subset\Omega\) $$ \frac{1}{ \vert B_{\rho} \vert } \int_{B_{\rho}} f\,dx\leq b_{1} \biggl(\frac{1}{ \vert B_{3\rho} \vert } \int_{B_{3\rho}} f^{m}\,dx \biggr)^{\frac{1}{m}}+b_{2}\frac{1}{ \vert B_{3\rho} \vert } \int_{B_{3\rho}} g\,dx+k\frac{1}{ \vert B_{3\rho} \vert } \int_{B_{3\rho}} f\,dx, $$ where \(b_{1},b_{2}>1\) and \(0< k\leq k_{0}=k_{0}(m,d)\). Then there exists a constant \(\gamma_{0}=\gamma_{0}(d,m,b_{1})>1\) such that $$ f\in L^{\gamma}_{\mathrm{loc}}\quad \textit{for all } 1< \gamma< \min\{ \gamma _{0},\sigma\}.
$$ In order to show the interior higher integrability of Du, we shall use the following lemma, which is a well-known result (Bogovskii theorem), and we restate in the form we need (cf. [7, 19]). Let \(B_{R}\subset\mathbb{R}^{d}\) and let \(f\in L^{q}(B_{R})\) with \(1< q_{1}< q< q_{2}\) be such that \((f)_{R}=0\). Then there exists \(v\in W_{0}^{1,q}(B_{R}; \mathbb{R}^{d})\) satisfying $$ \operatorname{ div} v=f $$ and such that $$ \int_{B_{R}} \vert \nabla v \vert ^{q_{3}}\,dx\leq c \int_{B_{R}} \vert f \vert ^{q_{3}}\,dx, $$ for every \(q_{3}\in[q_{1},q]\), where \(c=c(d,q_{1},q_{2})\) is independent of \(R>0\). Moreover, if the support of f is contained in \(B_{r}\subset B_{R}\), the same result holds for v. Recalling the structure of \(S=S(Du)\) in (1.7) with \(\mu_{0}=1\), then we have $$\begin{aligned} S_{ij}(\xi)= \textstyle\begin{cases} (1+ \vert \xi \vert ^{2})^{\frac{p(x)-2}{2}}\xi_{ij},& \xi\in R^{d\times d}_{\mathrm{sym}}, \xi\neq0, \\ 0, & \xi=0, \end{cases}\displaystyle \end{aligned}$$ with \(\frac{3d}{d+2}< p_{\infty}\leq p(x)\leq p_{0}\leq2\). Define $$ S(\xi):=A(x,\xi)=\bigl(1+ \vert \xi \vert ^{2} \bigr)^{\frac{p(x)-2}{2}}\xi, \quad \xi \in R^{d\times d}_{\mathrm{sym}}. $$ Then \(A:\Omega\times R^{d\times d}_{\mathrm{sym}}\longrightarrow R^{d\times d}_{\mathrm{sym}}\) satisfy $$\begin{aligned} & \bigl\vert A(x,\xi)-A(x_{0},\xi) \bigr\vert \\ &\quad \leq\omega\bigl( \vert x-x_{0} \vert \bigr) \bigl[ \bigl(1+ \vert \xi \vert ^{2}\bigr)^{\frac{p(x)-1}{2}} + \bigl(1+ \vert \xi \vert ^{2}\bigr)^{\frac{p(x_{0})-1}{2}} \bigr]\bigl(1+\log\bigl(1+ \vert \xi \vert \bigr)\bigr), \end{aligned}$$ $$\begin{aligned} & A(x,\xi)\cdot\xi\geq \vert \xi \vert ^{p(x)}-1. 
\end{aligned}$$ From the definition of \(S_{ij}(\xi)\), we can also obtain $$ \frac{\partial S_{ij}(\xi)}{\partial\xi_{kl}}=\bigl(p(x)-2\bigr) \bigl(1+ \vert \xi \vert ^{2}\bigr)^{\frac{p(x)-4}{2}}\xi_{kl}\xi_{ij}+ \bigl(1+ \vert \xi \vert ^{2}\bigr)^{\frac {p(x)-2}{2}} \delta_{ki}\delta_{lj}, $$ where \(\delta_{ij}\) is the Kronecker delta. For all \(\xi,\eta\in R^{d\times d}_{\mathrm{sym}}, |\xi|+|\eta|>0\), $$\begin{aligned} & \bigl(S_{ij}(\xi)-S_{ij}(\eta) \bigr) ( \xi_{ij}-\eta _{ij}) \\ &\quad = \int_{0}^{1}\frac{d}{dt}S_{ij} \bigl(\eta+t(\xi-\eta)\bigr)\,dt(\xi _{ij}-\eta _{ij}) \\ &\quad \geq \int_{0}^{1}\bigl(p(\cdot)-1\bigr) \bigl(1+ \bigl\vert \eta+t(\xi-\eta) \bigr\vert ^{2}\bigr)^{\frac {p(\cdot )-2}{2}} \vert \xi-\eta \vert ^{2}\,dt \\ &\quad \geq \int_{0}^{1}\bigl(p(\cdot)-1\bigr) \bigl(1+ \bigl\vert \eta+t(\xi-\eta) \bigr\vert \bigr)^{p(\cdot )-2} \vert \xi -\eta \vert ^{2}\,dt \\ &\quad \geq\bigl(p(\cdot)-1\bigr) \bigl(1+ \vert \xi \vert + \vert \eta \vert \bigr)^{p(\cdot)-2} \vert \xi-\eta \vert ^{2} \\ &\quad \geq k_{0}\bigl(1+ \vert \xi \vert + \vert \eta \vert \bigr)^{p(\cdot)-2} \vert \xi-\eta \vert ^{2}, \end{aligned}$$ where \(k_{0}=p_{\infty}-1\), and in the third inequality we have taken into account the inequality $$ \bigl(1+ \vert \xi \vert + \vert \eta \vert \bigr)^{-(2-p(\cdot))}\leq \int_{0}^{1}\bigl(1+ \vert \xi+t\eta \vert \bigr)^{-(2-p(\cdot))}\,dt, \quad\text{a.e. in } \Omega. $$ The property of difference quotients In the whole paper, we will employ the difference $$ \triangle_{\lambda,k}u(x):=u(x+\lambda e_{k})-u(x), $$ where \(e_{k}=(0,\ldots,1,\ldots,0)\), with 1 at the kth place \((k=1,\ldots,d)\). Moreover, for simplicity, we will frequently use the parameters \(p_{1}, p_{2}\): $$ p_{1}:=\inf_{x\in B_{3r}}p(x),\qquad p_{2}:=\sup _{x\in B_{3r}}p(x). $$ Let \(B_{r}=B_{r}(x_{0})\subset\Omega\) be a ball such that \(\bar {B}_{6r}\subset \Omega\); in what follows, we will repeatedly use the following fact.
Let \(1\leq p(\cdot)\leq q_{0}<\infty\) (\(q_{0}\) is a constant) and let \(u\in W^{1,p(\cdot)}(\Omega)\). Then, for all \(V\subset\subset\Omega \subset \mathbb{R}^{d}\), there exists a constant \(c>0\) such that $$ \int_{B_{mr}} \vert \triangle_{\lambda,k}u \vert ^{p(x)}\,dx \leq c \vert \lambda \vert ^{p_{1}} \int_{B_{(m+1)r}} \biggl\vert \frac{\partial u}{\partial x_{k}} \biggr\vert ^{p(x)}\,dx, $$ for all \(|\lambda|< r<1\), \(k=1,\ldots,d\), \(m=1,2\). We first assume that \(u(x)\) is smooth. Then, for all \(x\in V\), \(i=1,2,\ldots ,d\), and \(0<|h|<\frac{1}{2}\operatorname{dist} (V,\partial\Omega)\), $$ u(x+he_{i})-u(x)=h \int_{0}^{1}\frac{\partial u(x+the_{i})}{\partial x_{i}}\,dt, $$ from which we have $$ \begin{aligned} \int_{V} \bigl\vert D^{h} u(x) \bigr\vert ^{p(\cdot)}\,dx={}& \int_{V} \Biggl\{ \sum_{i=1}^{d} \biggl\vert \frac{u(x+he_{i})-u(x)}{h} \biggr\vert ^{2} \Biggr\} ^{\frac {p(\cdot)}{2}}\,dx \\ \leq{}& \int_{V} \Biggl\{ \sum_{i=1}^{d} \biggl[ \int_{0}^{1} \bigl\vert \nabla u(x+the_{i}) \bigr\vert \,dt \biggr]^{2} \Biggr\} ^{\frac{p(\cdot)}{2}}\,dx \\ \leq{}& c\sum_{i=1}^{d} \int_{V} \int_{0}^{1} \bigl\vert \nabla u(x+the_{i}) \bigr\vert ^{p(\cdot )}\,dt\,dx \\ \leq{}&c \int_{\Omega} \bigl\vert \nabla u(x) \bigr\vert ^{p(\cdot)}\,dx, \end{aligned} $$ where c is independent of \(p(\cdot)\), and in the second inequality we have taken into account the fact $$ \bigl( \bigl\vert u(x) \bigr\vert + \bigl\vert v(x) \bigr\vert \bigr)^{p(x)}\leq2^{\sup_{x\in V} p(x)-1}\bigl( \bigl\vert u(x) \bigr\vert ^{p(x)}+ \bigl\vert v(x) \bigr\vert ^{p(x)}\bigr),\quad p(x)\geq1. $$ Indeed, when \(|u|\) or \(|v|\) equals zero, the above inequality is obvious; without loss of generality, we may assume \(|u|,|v|\neq0\), and $$ V=V_{1}\cup V_{2},\qquad V_{1}:= \bigl\{ x: \vert u \vert \geq \vert v \vert \bigr\} ,\qquad V_{2}:= \bigl\{ x: \vert v \vert \geq \vert u \vert \bigr\} .
$$ It suffices to consider \(|u|\geq|v|\) in \(V_{1}\). Set \(t:=\frac {|u|}{|v|}\geq1\) and consider the function $$ f(t)=\frac{(1+t)^{p(x)}}{1+t^{p(x)}},\quad t\geq1, p(x)\geq1. $$ Since \(f'(t)=p(x) (1+t)^{p(x)-1}\bigl(1-t^{p(x)-1}\bigr)/\bigl(1+t^{p(x)}\bigr)^{2}\leq0\) for \(t\geq1\), we obtain, for any fixed x, \(\sup_{t\geq1}f(t)=f(1)=2^{p(x)-1}\). Hence, $$ \frac{(1+t)^{p(x)}}{1+t^{p(x)}}\leq2^{p(x)-1}, \quad\mbox{a.e. }x\in V_{1}, $$ which gives (2.10) in \(V_{1}\); setting \(t:=\frac{|v|}{|u|}\) yields the same conclusion in \(V_{2}\), and thus (2.10) follows. From the definition of a generalized Lebesgue space, we obtain (2.9) when u is smooth; by density, (2.9) then holds for all \(u\in W^{1,p(\cdot)}(\Omega)\). □ Definition of the weak solutions to (1.6) Suppose \(\frac{3d}{d+2}\leq p_{\infty}\leq p(x)\leq p_{0}<\infty\) and \(f\in W^{-1,p'(\cdot)}(\Omega)\) is given. Then \((u,\phi)\in (V_{p(\cdot)}, L_{0}^{p(\cdot)'}(\Omega) )\) is said to be a weak solution to (1.6)–(1.10) if and only if $$ \int_{\Omega}S(Du)\cdot D\varphi \,dx- \int_{\Omega}u\otimes u\cdot \nabla \varphi\, dx- \int_{\Omega}\phi\operatorname{ div} \varphi\,dx+(f, \varphi)=0, $$ for all \(\varphi\in W_{0}^{1,p(x)}(\Omega)\), or $$ \int_{\Omega}S(Du)\cdot D\varphi\, dx- \int_{\Omega}u\otimes u\cdot \nabla \varphi\, dx+(f, \varphi)=0, $$ for all \(\varphi\in V_{p(x)}\). For more details, one can refer, for instance, to [11] (Chap. 14, Sect. 4, p. 472) or [41]. Hölder continuity of weak solutions We note that the starting point for any comparison and freezing argument in the setting of variable \(p(x)\)-growth problems is a quantitative higher integrability result. Hence, in order to obtain the interior differentiability of the weak solution to (1.6), we shall first show the local higher integrability of Du. At this point, we fix a global positive constant \(\alpha\in(1, [(d+2)p_{\infty }-d]/2d)\); we then have the following result. Suppose \(f=0\).
Let \(\Omega\subset\mathbb{R}^{d}\) \((d=2,3)\) be a bounded domain with Lipschitz boundary ∂Ω and let \(p(\cdot ):\Omega\longrightarrow(\frac{3d}{d+2},2)\) be log-Hölder continuous with \(\frac{3d}{d+2}< p_{\infty}\leq p\leq p_{0}<2\) satisfying (1.8) and (1.9), and let S satisfy (1.10). Let \((u,\phi)\in(V_{p(x)},L_{0}^{p'(x)}(\Omega))\) be the solution of (1.6)–(1.7). Then there exist constants \(c,\delta_{1}>0\), both depending on \(d,p_{0},p_{\infty}\), and \(r_{0}\in(0,1)\) suitably small, such that if \(B_{2r}\subset\subset\Omega\) for any \(r\in(0,r_{0})\), then the higher integrability estimate (3.1) holds. Without loss of generality, we first set \(r_{0}=\frac{1}{2}\); we will specify it later. Let \(\eta\in C_{0}^{\infty}(B_{2r})\) with \(r\in (0,r_{0})\) be a cut-off function such that $$ \textstyle\begin{cases} \eta=1& \text{in } B_{r},\\ 0\leq\eta\leq1& \text{in } B_{2r},\\ \vert \nabla\eta \vert \leq\frac{c}{r}& \text{in } B_{2r}, \end{cases} $$ where c is a positive constant independent of r. Let $$ \varphi=\eta^{2}\bigl(u-(u)_{2r}\bigr)+w, $$ where the function w is defined according to Lemma 2.5 as a solution to $$ \operatorname{ div} w=-\operatorname{ div}\bigl(\eta^{2} \bigl(u-(u)_{2r}\bigr)\bigr)=-\bigl(u-(u)_{2r}\bigr)\cdot \nabla \bigl(\eta^{2}\bigr). $$ It is obvious that such a function w exists, since $$ \operatorname{ div} u=0,\qquad \int_{B_{2r}}\bigl(u-(u)_{2r}\bigr)\cdot\nabla\bigl( \eta ^{2}\bigr) \,dx=0. $$ We claim that \(w\in W_{0}^{1,p_{2}}(B_{2r})\). In fact, from Lemma 2.5, estimate (3.4) holds for some exponent \(q>p_{2}\) such that the right-hand side is finite.
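With this choice of w, the test function φ in (3.2) is divergence-free, which is what makes it admissible in the pressure-free formulation; a short verification (our own intermediate step, using \(\operatorname{ div} u=0\) and (3.3)): $$\begin{aligned} \operatorname{ div}\varphi&=\operatorname{ div}\bigl(\eta^{2}\bigl(u-(u)_{2r}\bigr)\bigr)+\operatorname{ div} w \\ &=\eta^{2}\operatorname{ div} u+\bigl(u-(u)_{2r}\bigr)\cdot\nabla\bigl(\eta^{2}\bigr)+\operatorname{ div} w=0. \end{aligned}$$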
Taking into account (2.12), we let φ in (3.2) be a test function, thus $$\begin{aligned} 0= {}& \int_{B_{2r}} \eta^{2} S(Du)\cdot Du\,dx+2 \int_{B_{2r}}\eta S(Du)\cdot \bigl(\bigl(u-(u)_{2r} \bigr)\cdot\nabla\eta\bigr)\,dx \\ & {}+ \int_{B_{2r}} S(Du)\cdot Dw\,dx+ \int_{B_{2r}}(u\cdot\nabla u)\cdot \bigl(\bigl(u-(u)_{2r} \bigr)\eta^{2}\bigr)\,dx \\ &{}+ \int_{B_{2r}}(u\cdot\nabla u)\cdot w\,dx \\ :={}&I_{1}+I_{2}+I_{3}+I_{4}+I_{5}. \end{aligned}$$ Now, we estimate the terms \(I_{1}\)–\(I_{5}\). Using (2.7), we first infer that $$ I_{1}\geq \int_{B_{2r}}\eta^{2} \vert Du \vert ^{p(x)}\,dx-cr^{d} . $$ Note that \(2p'(x)\geq2\) and \(\eta\in[0,1]\), thus, by (1.10) and the Young inequality $$ \vert I_{2} \vert \leq\varepsilon \int_{B_{2r}}\eta^{2} \vert Du \vert ^{p(x)}\,dx +c(\varepsilon ) \int_{B_{2r}} \biggl\vert \frac{u-(u)_{2r}}{r} \biggr\vert ^{p(x)}\,dx+cr^{d}, $$ where ε is a positive constant that will be specified later. Likewise, taking into account (3.4), we arrive at $$\begin{aligned} \vert I_{3} \vert \leq{}&\varepsilon \int_{B_{2r}} \vert Du \vert ^{p(x)}\,dx+c( \varepsilon) \int _{B_{2r}} \vert Dw \vert ^{p(x)} \,dx+cr^{d} \\ \leq{}& \varepsilon \int_{B_{2r}} \vert Du \vert ^{p(x)}\,dx+c( \varepsilon) \int _{B_{2r}} \biggl\vert \frac{u-(u)_{2r}}{r} \biggr\vert ^{p_{2}}\,dx+cr^{d}. \end{aligned}$$ Observe that $$ \bigl\Vert u-(u)_{2r} \bigr\Vert _{L^{q}(B_{2r})}\leq c(q) \Vert u \Vert _{L^{q}(B_{2r})} $$ for any \(q\geq1\), with \(c(q)\geq1\). 
Thus, appealing to the Young and Hölder inequalities, we have $$\begin{aligned} \vert I_{4} \vert \leq{}& \Vert \nabla u \Vert _{L^{ (\frac{1}{2} (\frac {p_{\infty }}{\alpha} )^{*} )'}(B_{2r})} \bigl\Vert u-(u)_{2r} \bigr\Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r})} \Vert u \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r}) } \\ \leq{}& c \Vert \nabla u \Vert _{L^{ (\frac{1}{2} (\frac{p_{\infty }}{\alpha } )^{*} )'}(B_{2r})} \Vert u \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r})}^{2} \\ \leq{}&c \int_{B_{2r}} \vert \nabla u \vert ^{\frac{dp_{\infty}}{(d+2)p_{\infty }-2d\alpha}}\,dx+c \int_{B_{2r}} \vert u \vert ^{ (\frac{p_{\infty}}{\alpha } )^{*}}\,dx. \end{aligned}$$ Similarly, using the Hölder and Young inequalities again, from the property of w in (3.4) and noting that \(w\in W_{0}^{1,p_{2}}(B_{2r})\), the term \(I_{5}\) can be estimated as $$\begin{aligned} \vert I_{5} \vert \leq{}& \Vert \nabla u \Vert _{L^{\frac{dp_{\infty}}{(d+2)p_{\infty }-2d\alpha }}(B_{2r})} \Vert u \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r}) } \Vert w \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r})} \\ \leq{}& c \Vert \nabla u \Vert _{L^{\frac{dp_{\infty}}{(d+2)p_{\infty}-2d\alpha }}(B_{2r})} \Vert u \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r})} \Vert \nabla w \Vert _{L^{\frac{p_{\infty}}{\alpha}}(B_{2r})} \\ \leq{}& c \Vert \nabla u \Vert _{L^{\frac{dp_{\infty}}{(d+2)p_{\infty}-2d\alpha }}(B_{2r})} \Vert u \Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r}) } \bigl\Vert u-(u)_{2r} \bigr\Vert _{L^{ (\frac{p_{\infty}}{\alpha} )^{*}}(B_{2r})} \\ \leq{}&c \int_{B_{2r}} \vert \nabla u \vert ^{\frac{dp_{\infty}}{(d+2)p_{\infty }-2d\alpha}}\,dx+c \int_{B_{2r}} \vert u \vert ^{ (\frac{p_{\infty}}{\alpha } )^{*}}\,dx. \end{aligned}$$ Inserting the estimates (3.6)–(3.10) into (3.5) and choosing \(\varepsilon=\frac{1}{4}\), we conclude that (3.11) holds. Next, we shall show a reverse Hölder inequality.
In view of the Sobolev–Poincaré and Korn inequalities in Lemma 2.2, the last term on the right-hand side of (3.11) can be estimated as in (3.12), where \(c=c(d,p_{0},p_{\infty})\); in the fourth inequality we have taken into account that \(p_{2}\alpha/p_{\infty}>1\) and \(\int_{B_{2r}}|\nabla u|^{p_{\infty}/\alpha}\,dx\leq1\) for any \(r\in (0,r_{0}']\) with \(r_{0}'\leq1\) suitably small, by the absolute continuity of the integral. At this stage, we have determined \(r_{0}=\min\{\frac{1}{2},r_{0}'\}\). Now, inserting (3.12) into (3.11), we arrive at the desired reverse Hölder inequality. Taking into account Lemma 2.4, we set $$\begin{aligned} &g:= \vert \nabla u \vert ^{\frac{dp_{\infty}}{(d+2)p_{\infty}-2d\alpha }}+ \vert \nabla u \vert ^{\frac{p_{\infty}}{\alpha}} + \vert u \vert ^{ (\frac{p_{\infty}}{\alpha} )^{*}}+1, \\ &f:= \vert Du \vert ^{p(x)}. \end{aligned}$$ From the definition of α, we can see that $$ \frac{d}{(d+2)p_{\infty}-2d\alpha}\in(0,1), \qquad \biggl(\frac{p_{\infty}}{\alpha} \biggr)^{*}< \frac{p_{\infty}^{*}}{\alpha}< p_{\infty}^{*}. $$ Thus, we infer that \(g\in L^{\sigma}(B_{2r})\) for some \(\sigma=\sigma (d,p_{\infty},\alpha)=\sigma(d,p_{\infty})>1\). At this point, there exists a constant \(\delta_{1}>0\) such that \(\gamma=1+\delta_{1}\) in Lemma 2.4, and then the result (3.1) holds. □ Based on the interior higher integrability of Du, we now turn to the proof of the Hölder continuity of u. The main difficulty arises from the difference quotient of \(S(Du)\) in (3.15); to deal with it, we need the monotonicity (2.8) and the growth condition (2.6) of \(S(\cdot)\). Furthermore, we will repeatedly use the log-Hölder property of \(p(x)\) and the local higher integrability of Du. Proof of Theorem 1.1 Throughout, let \(i,j=1,2,\ldots,d\).
Let \(\xi\in C_{0}^{\infty}\) be a cut-off function for \(B_{2r}\), i.e., $$ \textstyle\begin{cases} \xi=1 &\text{in } B_{r},\\ 0\leq\xi\leq1&\text{in } B_{2r},\\ \vert \frac{\partial\xi}{\partial x_{i}} \vert \leq\frac{c}{r},\qquad \vert \frac{\partial^{2} \xi}{\partial x_{i}\,\partial x_{j}} \vert \leq\frac{c}{r^{2}} & \text{in } B_{2r}, \end{cases} $$ where \(c>0\) is a positive constant independent of r. Define $$ \varphi=\triangle_{-\lambda,k}\bigl(\xi^{2} \triangle_{\lambda,k}u\bigr), $$ where \(|\lambda|< r<1\), \(k=1,\ldots,d\). One can see that φ is an admissible test function in (2.11). Now we divide the proof into several steps. Step 1 (Fractional differentiability of Du). To begin with, we choose φ in (3.14) as a test function, inserting it into (2.11) with \(f=0\), which implies $$\begin{aligned} & \int_{B_{2r}}S_{ij}(Du)D_{ij}\bigl( \triangle_{-\lambda,k}\bigl(\xi ^{2}\triangle _{\lambda,k}u\bigr) \bigr)\,dx+ \int_{B_{2r}}u_{i}\partial_{i} u_{j} \triangle _{-\lambda ,k}\bigl(\xi^{2} \triangle_{\lambda,k}u_{j}\bigr)\,dx \\ &\quad = \int_{B_{2r}}\phi\partial_{i}\bigl( \triangle_{-\lambda,k}\bigl(\xi ^{2}\triangle _{\lambda,k}u_{i} \bigr)\bigr) \,dx. 
\end{aligned}$$ $$\begin{aligned} & \int_{B_{2r}}S_{ij}(Du)D_{ij}\bigl( \triangle_{-\lambda,k}\bigl(\xi ^{2}\triangle _{\lambda,k}u\bigr) \bigr)\,dx \\ &\quad= \int_{B_{2r}} \bigl[\triangle_{\lambda ,k}S_{ij}(Du) \bigr]\xi^{2}\triangle_{\lambda,k}D_{ij} u\,dx \\ &\qquad{} + \int_{B_{2r}}S_{ij}(Du)\triangle_{-\lambda,k} \biggl(\xi \biggl(\frac {\partial\xi}{\partial x_{i}}\triangle_{\lambda,k}u_{j}+ \frac{\partial \xi }{\partial x_{j}}\triangle_{\lambda,k}u_{i} \biggr) \biggr)\,dx \end{aligned}$$ $$\begin{aligned} &\triangle_{\lambda,k}S(Du) \\ &\quad = \bigl(1+ \bigl\vert Du(x+ \lambda e_{k}) \bigr\vert ^{2}\bigr)^{\frac{p(x+\lambda e_{k})-2}{2}}Du(x+ \lambda e_{k}) \\ &\qquad{}-\bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2}\bigr)^{\frac{p(x+\lambda e_{k})-2}{2}}Du(x) \\ &\qquad{}+ \bigl[ \bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2} \bigr)^{\frac{p(x+\lambda e_{k})-2}{2}}Du(x)-\bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2} \bigr)^{\frac{p(x)-2}{2}}Du(x) \bigr]. \end{aligned}$$ Then from (2.8) we arrive at $$\begin{aligned} &k_{0} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(x+\lambda e_{k})-2} \cdot \vert \triangle_{\lambda,k}Du \vert ^{2}\xi ^{2}\,dx \\ &\quad \leq - \int_{B_{2r}}S_{ij}(Du)\triangle_{-\lambda,k} \biggl(\xi \biggl(\frac{\partial\xi}{\partial x_{i}}\triangle_{\lambda,k}u_{j}+ \frac {\partial \xi}{\partial x_{j}}\triangle_{\lambda,k}u_{i} \biggr) \biggr)\,dx \\ &\qquad{}- \int_{B_{2r}}u_{i}\partial_{i}u_{j} \bigl(\triangle_{-\lambda,k} \bigl(\xi ^{2}\triangle_{\lambda,k}u_{j} \bigr) \bigr)\,dx + \int_{B_{2r}}\phi\partial_{i} \bigl( \triangle_{-\lambda,k} \bigl(\xi ^{2}\triangle_{\lambda,k}u_{i} \bigr) \bigr)\,dx \\ &\qquad{}- \int_{B_{2r}} \bigl[ \bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2}\bigr)^{\frac{p(x+\lambda e_{k})-2}{2}}D_{ij}u(x)-\bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2}\bigr)^{\frac{p(x)-2}{2}}D_{ij}u(x) \bigr] \\ &\qquad{}\times\triangle_{\lambda,k}D_{ij}u\,dx \\ &\quad =:H_{1}+H_{2}+H_{3}+H_{4}. 
\end{aligned}$$ Since \(p(x)\) is log-Hölder continuous, we can choose \(0< r<1\) suitably small such that $$ p_{2}\leq p_{1}(1+\delta_{1})\leq p(x) (1+ \delta_{1}). $$ By Lemma 3.1, we can see that $$\begin{aligned} &k_{0} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(x+\lambda e_{k})-2} \cdot \vert \triangle_{\lambda,k}Du \vert ^{2}\xi ^{2}\,dx \\ &\quad\leq k_{0} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}-2} \cdot \vert \triangle_{\lambda,k}Du \vert ^{2}\xi^{2}\,dx \\ &\quad\leq ck_{0} \int_{B_{3r}} \bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}} \,dx\leq c. \end{aligned}$$ From the previous inequality, one can see that, for suitably small \(r\in (0,1]\) and \(|\lambda|< r\), $$ \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(x+\lambda e_{k})-2} \cdot \vert \triangle_{\lambda,k}Du \vert ^{2}\xi^{2}\,dx< 1; $$ this conclusion will be used repeatedly in the following estimates. Furthermore, taking into account the Korn inequality (2.4) and (3.1), we are in a position to obtain $$ \int_{B_{2r}} \vert \nabla u \vert ^{p(x+\lambda e_{k})}\,dx< \int_{B_{3r}}\bigl( \vert \nabla u \vert ^{p_{2}}+1 \bigr)\,dx\leq c $$ for all \(|\lambda|< r\), where c is a fixed positive constant independent of λ; from now on, \(r\in(0,1]\) is a fixed constant. For simplicity of notation, in what follows, we denote \(p(x+\lambda e_{k})\) by \(p(\bar{x})\). Now, we turn to the estimates of \(H_{1}\) and \(H_{3}\)–\(H_{4}\). Estimation of \(H_{1}\).
By Hölder inequality with variable exponent, from (1.10), (2.3) and (2.9), we find that $$\begin{aligned} H_{1}\leq{}& 2\sum_{i,j=1} ^{d} \bigl\Vert S_{ij}(Du) \bigr\Vert _{L^{p'(\bar {x})}(B_{2r})} \\ &{}\times \biggl\Vert \triangle_{-\lambda,k} \biggl(\xi \biggl( \frac {\partial\xi }{\partial x_{i}}\triangle_{\lambda,k}u_{j}+\frac{\partial\xi }{\partial x_{j}} \triangle_{\lambda,k}u_{i} \biggr) \biggr) \biggr\Vert _{L^{p(\bar {x})}(B_{2r})} \\ \leq{}& c \vert \lambda \vert ^{p_{1}/p_{2}} \biggl( \int_{B_{2r}} \bigl(1+ \vert D u \vert ^{p(\bar{x})} \bigr) \,dx \biggr)^{\frac{1}{q_{1}}} \\ &{}\times\sum_{i,j=1} ^{d} \biggl\Vert \frac{\partial}{\partial x_{k}} \biggl(\xi \biggl(\frac{\partial\xi}{\partial x_{i}}\triangle _{\lambda ,k}u_{j}+\frac{\partial\xi}{\partial x_{j}}\triangle_{\lambda ,k}u_{i} \biggr) \biggr) \biggr\Vert _{L^{p(\bar{x})}(B_{2r})} \\ \leq{}& c \frac{ \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{q_{1}}} \\ &{} \times\biggl[ \frac{ \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{s_{1}}} + \bigl\Vert \xi\nabla(\triangle_{\lambda,k}u) \bigr\Vert _{L^{p(\bar {x})}(B_{2r})} \biggr], \end{aligned}$$ for all \(0<|\lambda|<r<1\), and \(s_{1}\) equal to \(p_{1}\) or \(p_{2}\), \(q_{1}\) equal to \(p_{1}'\) or \(p_{2}'\), and in the last inequality we have used the fact \(|Du|\leq|\nabla u|\). Note that $$ \xi\nabla(\triangle_{\lambda,k}u)=\nabla(\xi\triangle_{\lambda,k}u )- \nabla\xi\triangle_{\lambda,k}u \quad\text{and}\quad \bigl(\nabla(\xi \triangle _{\lambda,k}u)\bigr)_{2r}=0= \bigl(D(\xi\triangle_{\lambda,k}u) \bigr)_{2r}. 
$$ Thus, by the Korn inequality (2.5), we infer that $$\begin{aligned} & \bigl\Vert \xi\nabla(\triangle_{\lambda,k}u) \bigr\Vert _{L^{p(\bar {x})}(B_{2r})} \\ &\quad\leq \bigl\Vert \nabla(\xi\triangle_{\lambda,k}u) \bigr\Vert _{L^{p(\bar{x})}(B_{2r})} + \Vert \nabla\xi\triangle_{\lambda,k}u \Vert _{L^{p(\bar {x})}(B_{2r})} \\ &\quad\leq \bigl\Vert D(\xi\triangle_{\lambda,k}u) \bigr\Vert _{L^{p(\bar{x})}(B_{2r})} + \Vert \nabla\xi\triangle_{\lambda,k}u \Vert _{L^{p(\bar {x})}(B_{2r})} \\ &\quad\leq c \biggl( \int_{B_{2r}} \bigl\vert \xi D(\triangle_{\lambda,k}u) \bigr\vert ^{p(\bar {x})}\,dx \biggr)^{\frac{1}{s_{1}}} +c\frac{ \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int_{B_{2r}} \vert \nabla u \vert ^{p(\bar{x})}\,dx \biggr)^{\frac{1}{s_{1}}}, \end{aligned}$$ where in the last inequality, for simplicity, we just denote the exponent as \(1/s_{1}\), and it cannot add any confusion when the exponent is replaced by one of another character. Now, inserting (3.20) into (3.19), we obtain $$\begin{aligned} H_{1} \leq{}&\frac{c \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{q_{1}}+\frac{1}{s_{1}}} \\ &{}+ \frac{c \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac{1}{q_{1}}}\cdot \biggl( \int_{B_{2r}} \bigl\vert \xi D (\triangle_{\lambda,k}u) \bigr\vert ^{p(\bar{x})} \,dx \biggr)^{\frac{1}{s_{1}}} \\ :={}& A+B. \end{aligned}$$ Taking into account (3.18), we can see that the term A in the previous inequality is bounded from above for fixed r suitable small. 
On the other hand, observe that, for any suitable function \(f,g\) and \(q\leq s<2\), by the Hölder inequality, $$\begin{aligned} \int \vert f \vert ^{s}\,dx&= \int\bigl( \vert g \vert ^{\frac{s(q-2)}{2}} \vert f \vert ^{s}\bigr) \vert g \vert ^{\frac {s(2-q)}{2}}\,dx \\ &\leq \bigl\Vert \vert g \vert ^{\frac{s(q-2)}{2}} \vert f \vert ^{s} \bigr\Vert _{L^{\frac{2}{s}}}\bigl\Vert |g|^{\frac{s(2-q)}{2}}\bigr\Vert _{L^{\frac{2}{2-s}}}. \end{aligned}$$ Thus, in order to estimate the term B in (3.21), take $$ g=\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr),\qquad f=\xi D (\triangle _{\lambda,k}u), $$ \(s=q=p(\bar{x})\) in previous inequality. If \(\frac{t_{1}}{s_{1}}\geq1\), by the Hölder inequality, the Young inequality with \(k_{0}/4\) and (2.3), we obtain $$\begin{aligned} B\leq{}& \frac{c \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda,k}u) \bigr\vert ^{2} \xi ^{2}\,dx \biggr)^{\frac{t_{1}}{2s_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac{2-t_{2}}{2s_{1}}} \\ \leq{}&\frac{k_{0}}{4} \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx \biggr)^{\frac{t_{1}}{s_{1}}} \\ &{}+c(k_{0})\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {2}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})} \,dx 
\biggr)^{\frac{2-t_{2}}{s_{1}}} \\ \leq{}&\frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx \\ &{}+c(k_{0})\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {2}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})} \,dx \biggr)^{\frac{2-t_{2}}{s_{1}}}, \end{aligned}$$ where \(t_{1}, t_{2}\) are equal to \(p_{1}\) or \(p_{2}\), and in the last inequality we have used the fact \(\frac{t_{1}}{s_{1}}\geq1\) and (3.17). Likewise, if \(0<\frac{t_{1}}{s_{1}}<1\), that is, \(t_{1}=p_{1}, s_{1}=p_{2}\), by the Hölder inequality, the Young inequality with \(k_{0}/4\) and (2.3), we arrive at $$\begin{aligned} B\leq{}& \frac{c \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda,k}u) \bigr\vert ^{2} \xi ^{2}\,dx \biggr)^{\frac{p_{1}}{2p_{2}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac{2-t_{2}}{2p_{2}}} \\ \leq{}&\frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx \\ &{}+c(k_{0}) \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac{2p_{2}}{q_{1}(2p_{2}-p_{1})}} \\ &{}\times\frac{ \vert \lambda \vert ^{2p_{1}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}(2p_{2}-p_{1})}} 
\biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})} \,dx \biggr)^{\frac{2-t_{2}}{2p_{2}-p_{1}}} \\ \leq{}&\frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx \\ &{}+c(k_{0}) \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac{2p_{2}}{q_{1}(2p_{2}-p_{1})}} \\ &{}\times\frac{ \vert \lambda \vert ^{2p_{1}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})} \,dx \biggr)^{\frac{2-t_{2}}{2p_{2}-p_{1}}}. \end{aligned}$$ Combining (3.22) with (3.23), we obtain $$\begin{aligned} B \leq{}& \frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx \\ &{}+c(k_{0})\frac{ \vert \lambda \vert ^{2p_{1}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {2p_{2}}{q_{1}(2p_{2}-p_{1})}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})} \,dx \biggr)^{\frac{2-t_{2}}{2p_{2}-p_{1}}} \\ &{} +c(k_{0})\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {2}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})} \,dx \biggr)^{\frac{2-t_{2}}{s_{1}}}. 
\end{aligned}$$ Inserting (3.24) into (3.21), we finally obtain $$\begin{aligned} H_{1}\leq{}& \frac{c \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}}\biggl[ \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac {1}{q_{1}}+\frac {1}{s_{1}}} \\ &{}+ \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac{2}{q_{1}}} \\ &{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac {2-t_{2}}{s_{1}}} \biggr] \\ &{}+c\frac{ \vert \lambda \vert ^{2p_{1}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \biggl( \int_{B_{2r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar{x})} \bigr)\,dx \biggr)^{\frac{2p_{2}}{q_{1}(2p_{2}-p_{1})}} \\ & {}\times\biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})} \,dx \biggr)^{\frac{2-t_{2}}{2p_{2}-p_{1}}} \\ &{} +\frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda ,k}u) \bigr\vert ^{2} \xi^{2} \,dx, \end{aligned}$$ for all \(0<|\lambda|<r<1\). Estimation of \(H_{3}\).
To begin with, we note that $$ \frac{\partial}{\partial x_{i}} \bigl( \triangle_{-\lambda,k} \bigl(\xi^{2} \triangle_{\lambda,k}u_{i} \bigr) \bigr)=2 \biggl( \triangle _{-\lambda ,k} \biggl(\xi\frac{\partial\xi}{\partial x_{i}} \triangle_{\lambda ,k}u_{i} \biggr) \biggr), $$ then from (2.9), (2.3), similar to (3.24), we can see that $$\begin{aligned} H_{3}= {}& 2 \int_{B_{2r}}\phi \biggl(\triangle_{-\lambda,k} \biggl(\xi \frac {\partial\xi}{\partial x_{i}} \triangle_{\lambda,k}u_{i} \biggr) \biggr)\,dx \\ \leq{}& c \vert \lambda \vert ^{p_{1}/p_{2}} \Vert \phi \Vert _{L^{p'(\bar{x})}(B_{2r})} \biggl( \int_{B_{2r}} \biggl\vert \nabla \biggl(\xi\frac{\partial\xi}{\partial x_{i}} \triangle_{\lambda,k}u_{i} \biggr) \biggr\vert ^{p(\bar{x})} \,dx \biggr)^{\frac {1}{t_{3}}} \\ \leq{}&c \vert \lambda \vert ^{p_{1}/p_{2}} \Vert \phi \Vert _{L^{p'(x)}(B_{3r})} \\ &{}\times \biggl[\frac{ \vert \lambda \vert ^{p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl( \int _{B_{2r}} \vert \nabla u \vert ^{p(\bar{x})}\,dx \biggr)^{\frac{1}{t_{3}}} + \frac{1}{r^{p_{2}/p_{1}}} \biggl( \int_{B_{2r}} \bigl\vert \xi D(\triangle _{\lambda ,k }u) \bigr\vert ^{p(\bar{x})}\,dx \biggr)^{\frac{1}{s_{1}}} \biggr] \\ \leq{}&c\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \Vert \phi \Vert _{L^{p'(x)}(B_{3r})} \biggl( \int_{B_{3r}} \bigl(1+ \vert \nabla u \vert ^{p(\bar {x})} \bigr)\,dx \biggr)^{\frac{1}{t_{3}}} \\ &{}+c\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \Vert \phi \Vert ^{2}_{L^{p'(x)}(B_{3r})} \biggl( \int _{B_{2r}}\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac{2-t_{2}}{s_{1}}} \\ &{}+c(k_{0})\frac{ \vert \lambda \vert ^{2p_{2}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \Vert \phi \Vert _{L^{p'(x)}(B_{3r})}^{2p_{2}/(2p_{2}-p_{1})} \\ &{}\times \biggl( \int_{B_{2r}}\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})}\,dx 
\biggr)^{\frac{2-t_{2}}{2p_{2}-p_{1}}} \\ &{}+\frac{k_{0}}{4} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda,k}u) \bigr\vert ^{2} \xi^{2} \,dx, \end{aligned}$$ where \(t_{3}\) is equal to \(p_{1}\) or \(p_{2}\); in the second inequality we replace \(t_{3}\) by \(s_{1}\), which causes no confusion, since from (3.19) and (3.21) we infer that \((\int_{B_{2r}}|\xi D(\triangle_{\lambda,k }u)|^{p}\,dx )^{\frac {1}{s_{1}}} \) is the largest term for every possible value of \(t_{3}\). Estimation of \(H_{4}\). Now, since \(0< r<1\) is suitably small, we have $$ p_{2}\biggl(1+\frac{\delta_{1}}{4}\biggr)\leq p_{1}(1+ \delta_{1})\leq p(x) (1+\delta_{1}). $$ From (2.6) and Lemma 3.1, we can find that $$\begin{aligned} H_{4} \leq{}& c\omega\bigl( \vert \lambda \vert \bigr) \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x) \bigr\vert ^{2}\bigr)^{\frac {p_{2}-1}{2}} \\ &{}\times\bigl(\log\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)+1\bigr) \bigl( \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr) \,dx \\ \leq{}&c\omega\bigl( \vert \lambda \vert \bigr) \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}} \bigl(\log ^{\frac{p_{2}}{p_{2}-1}}\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)+1 \bigr) \\ &{}+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert ^{p_{2}} \,dx \\ \leq{}& c \vert \lambda \vert ^{2\theta_{1}} \int_{B_{3r}} \bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}(1+\frac {\delta_{1}}{4})} \,dx \\ \leq{}& c \vert \lambda \vert ^{2\theta_{1}} \int_{B_{3r}} \bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p(x)(1+\delta _{1})} \,dx\leq c\frac{\lambda^{2\theta_{1}}}{r^{\delta_{1}\,d}}, \end{aligned}$$ for all \(0<|\lambda|\leq r\).
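The second inequality in the estimate of \(H_{4}\) above is an instance of the Young inequality with the conjugate exponents \(\frac{p_{2}}{p_{2}-1}\) and \(p_{2}\); spelled out (our own intermediate step): $$\begin{aligned} &\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}-1}\bigl(\log\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)+1\bigr) \bigl\vert Du(x+\lambda e_{k}) \bigr\vert \\ &\quad\leq c\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)^{p_{2}} \bigl(\log^{\frac{p_{2}}{p_{2}-1}}\bigl(1+ \bigl\vert Du(x) \bigr\vert \bigr)+1 \bigr)+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert ^{p_{2}}. \end{aligned}$$ In the third inequality, the logarithmic factor is then absorbed into the slightly larger power \(p_{2}(1+\frac{\delta_{1}}{4})\), since \(\log^{\frac{p_{2}}{p_{2}-1}}(1+t)\leq c(\delta_{1},p_{2}) (1+t)^{\frac{p_{2}\delta_{1}}{4}}\) for all \(t\geq0\), together with \(\omega(|\lambda|)\leq c|\lambda|^{2\theta_{1}}\).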
Inserting (3.25)–(3.27) into (3.16), we finally obtain $$\begin{aligned} &\frac{k_{0}}{2} \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2}\cdot \bigl\vert D(\triangle_{\lambda,k}u) \bigr\vert ^{2} \xi^{2} \,dx \\ &\quad \leq H_{2}+ c\frac{ \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2p_{2}/p_{1}}} \biggl[ \biggl( \int_{B_{3r}} \bigl(1+ \vert \nabla u \vert ^{p(x)} \bigr) \,dx \biggr)^{\frac {1}{q_{1}}+\frac{1}{s_{1}}} \\ &\qquad{}+ \Vert \phi \Vert _{L^{p'(x)}(B_{3r})} \biggl( \int_{B_{3r}} \bigl(1+ \vert \nabla u \vert ^{p(x)} \bigr)\,dx \biggr)^{\frac{1}{t_{3}}} \\ &\qquad{} + \biggl( \Vert \phi \Vert ^{2}_{L^{p'(x)}(B_{3r})} + \biggl( \int _{B_{3r}} \bigl( 1+ \vert \nabla u \vert ^{p(x)} \bigr)\,dx \biggr)^{\frac {2}{q_{1}}} \biggr) \\ &\qquad{}\times \biggl( \int_{B_{2r}}\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})}\,dx \biggr)^{\frac{2-t_{2}}{s_{1}}} \biggr] \\ &\qquad{} +\frac{ \vert \lambda \vert ^{2p_{2}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \biggl( \int _{B_{2r}}\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac {2-t_{2}}{2p_{2}-p_{1}}} \\ &\qquad{}\times \biggl( \int_{B_{3r}} \bigl(1+ \vert \nabla u \vert ^{p(x)} \bigr)\,dx \biggr)^{\frac{2p_{2}}{q_{1}(2p_{2}-p_{1})}} \\ &\qquad{} +\frac{ \vert \lambda \vert ^{2p_{2}/(2p_{2}-p_{1})}}{r^{2p_{2}^{2}/p_{1}^{2}}} \biggl( \int _{B_{2r}}\bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})}\,dx \biggr)^{\frac {2-t_{2}}{2p_{2}-p_{1}}} \\ &\qquad{}\times \Vert \phi \Vert _{L^{p'(x)}(B_{3r})}^{2p_{2}/(2p_{2}-p_{1})}+c \frac{\lambda ^{2\theta_{1}}}{r^{\delta_{1}d}}. \end{aligned}$$ Estimation of \(H_{2}\). 
To estimate \(H_{2}\), we first note that $$\begin{aligned} H_{2}= {}& \int_{B_{2r}} (\triangle_{\lambda,k }u_{i} ) \frac {\partial u_{j}(x+\lambda e_{k})}{\partial x_{i}}\xi^{2}\triangle_{\lambda,k} u_{j} \,dx \\ &{}+ \int_{B_{2r}}u_{i} \biggl(\triangle_{\lambda,k} \frac{\partial u_{j}}{\partial x_{i}} \biggr)\xi^{2}\triangle_{\lambda,k} u_{j}\,dx \\ := {}& C+D, \end{aligned}$$ where \(i,j=1,\ldots,d\). From now on, we assume for the moment that we have proved $$ u\in W^{1,s}_{\mathrm{loc}}(\Omega), $$ with \(s\in[p_{\infty},A_{d}]\), \(A_{d}\leq d\), and $$\begin{aligned} A_{d}:= \textstyle\begin{cases} d, & d=2, \\ \frac{3p_{0}d}{(d+2)p_{0}-2\bar{p}_{\infty}}, & d=3, \end{cases}\displaystyle \end{aligned}$$ where we denote \(3d/(d+2)\) by \(\bar{p}_{\infty}\); the validity of assumption (3.29) will be verified later. From the Hölder inequality, we have $$ C\leq \int_{B_{2r}} \vert \triangle_{\lambda,k}u \vert ^{2} \bigl\vert \nabla u(x+\lambda e_{k}) \bigr\vert \,dx\leq c \Vert \triangle_{\lambda,k}u \Vert _{L^{2s'}(B_{2r})}^{2} \Vert u \Vert _{W^{1,s}(B_{3r})}. $$ We let \(s^{*}=\frac{ds}{d-s}\); since \(s<2s'<s^{*}\), by interpolation it follows that $$ \Vert \triangle_{\lambda,k}u \Vert _{L^{2s'}(B_{2r})}^{2} \leq \Vert \triangle _{\lambda ,k}u \Vert _{L^{s^{*}}(B_{2r})}^{2(1-\theta)} \cdot \Vert \triangle_{\lambda ,k}u \Vert _{L^{s}(B_{2r})}^{2\theta} \leq c \vert \lambda \vert ^{2\theta} \Vert u \Vert ^{2}_{W^{1,s}(B_{3r})}, $$ with \(\theta=\frac{s(d+2)-3d}{2s}\). Combining (3.30) with (3.31), we arrive at $$ C\leq c \vert \lambda \vert ^{2\theta} \Vert u \Vert ^{3}_{W^{1,s}(B_{3r})}. $$ For the term D, by integration by parts, we get $$ D=- \int_{B_{2r}}u_{i}(\triangle_{\lambda,k}u_{j})^{2} \xi\frac{\partial \xi }{\partial x_{i}}\,dx.
$$ Similarly to the estimate of the term C, we obtain $$\begin{aligned} D \leq& \frac{c}{r} \Vert \triangle_{\lambda,k}u \Vert _{L^{2s'}(B_{2r})}^{2} \Vert u \Vert _{W^{1,s}(B_{3r})} \leq \frac{c}{r} \vert \lambda \vert ^{2\theta} \Vert u \Vert ^{3}_{W^{1,s}(B_{3r})}. \end{aligned}$$ Combining (3.32) with (3.33), we finally obtain $$ H_{2}\leq c \biggl(1+\frac{1}{r} \biggr) \lambda^{2\theta} \Vert u \Vert ^{3}_{W^{1,s}(B_{3r})}\leq \frac{c\lambda^{2\theta}}{r^{2}} \Vert u \Vert ^{3}_{W^{1,s}(B_{3r})}. $$ Since \(s\in[p_{\infty},A_{d}]\), we can see that $$ \textstyle\begin{cases} \frac{p_{1}}{p_{2}}\geq\frac{p_{\infty}}{p_{0}}\geq\theta,\\ \frac{2p_{1}}{2p_{2}-p_{1}}\geq\frac{2p_{\infty}}{2p_{0}-p_{\infty}}\geq \theta, \end{cases} $$ and from (1.8), (1.9), we have $$ r^{-(p_{2}-p_{1})}=2^{p_{2}-p_{1}}e^{(p_{2}-p_{1})\log\frac{1}{2r}}\leq 2^{p_{2}-p_{1}}e^{\omega(2r)\log\frac{1}{2r}}\leq c(L), $$ for all \(0< r<1\). Moreover, since \(\delta_{1}\) is independent of r, we can choose a constant \(\delta\in(0,\delta_{1}]\), replacing \(\delta_{1}\) by \(\delta\) in (3.27), such that \(\delta d\leq2\).
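The value of the interpolation exponent \(\theta\) can be checked directly from the standard \(L^{p}\)-interpolation relation between the exponents \(s\), \(2s'\), and \(s^{*}=\frac{ds}{d-s}\) (an elementary verification using only the definitions already introduced):

```latex
\frac{1}{2s'} = \frac{1-\theta}{s^{*}} + \frac{\theta}{s}
\quad\Longleftrightarrow\quad
\frac{s-1}{2s} = \frac{(1-\theta)(d-s)}{ds} + \frac{\theta}{s}.
```

Multiplying through by \(ds\) gives \(\frac{d(s-1)}{2} = (1-\theta)(d-s) + \theta d = d - s + \theta s\), hence \(\theta = \frac{s(d+2)-3d}{2s}\), which is exactly the exponent appearing in the interpolation estimate above.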
Now, taking into account (3.35), (3.36), inserting (3.34) into (3.28), we finally obtain $$\begin{aligned} & \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar {x})-2}\cdot \bigl\vert \triangle_{\lambda,k}D(u) \bigr\vert ^{2} \xi ^{2}\,dx \\ &\quad\leq\frac{c \vert \lambda \vert ^{2p_{1}/p_{2}}}{r^{2(1+(p_{2}-p_{1})/p_{1})}}+\frac {c \vert \lambda \vert ^{2p_{1}/(2p_{2}-p_{1})}}{r^{2(1+(p_{2}-p_{1})(p_{2}+p_{1})/p_{1}^{2})}} +\frac{c}{r^{2}} \vert \lambda \vert ^{2\theta}+c\frac{\lambda^{2\theta _{1}}}{r^{\delta d}} \\ &\quad \leq\frac{c}{r^{2}} \vert \lambda \vert ^{2\theta}, \end{aligned}$$ for all \(0<|\lambda|<r<1\), where c depends on \(\Vert u \Vert _{W_{\mathrm{loc}}^{p(x)}(\Omega)}\), \(\Vert u \Vert _{W^{1,s}_{\mathrm{loc}}(\Omega)}\), L, and \(\Vert \phi \Vert _{L^{p'(x)}_{\mathrm{loc}}(\Omega)}\); in the last inequality we have used the fact that \(\theta_{1}\geq\theta\). In what follows, we set $$ \gamma:=\frac{2s}{s+2-\bar{p}_{\infty}}\in \biggl[\frac{2p_{\infty }}{2p_{\infty}+2-\bar{p}_{\infty}},\frac{2A_{d}}{2A_{d}+2-\bar {p}_{\infty }} \biggr], $$ so that $$ \frac{2s(\gamma-2)}{2\gamma}=\bar{p}_{\infty}-2 \quad\text{for all } s \in[p_{\infty},A_{d}].
$$ Taking into account (3.19), with the aid of the Hölder inequality, we infer that $$\begin{aligned} & \int_{B_{r}} \bigl\vert \triangle_{\lambda,k} D(u) \bigr\vert ^{\gamma}\,dx \\ &\quad\leq \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{\bar{p}_{\infty}-2} \bigl\vert \triangle_{\lambda,k}D(u) \bigr\vert ^{2} \xi ^{2}\,dx \biggr)^{\frac{\gamma}{2}} \\ &\qquad{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{s}\,dx \biggr)^{\frac{2-\gamma}{2}} \\ &\quad\leq \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{p(\bar{x})-2} \bigl\vert \triangle_{\lambda,k}D(u) \bigr\vert ^{2} \xi^{2}\,dx \biggr)^{\frac{\gamma}{2}} \\ &\qquad{}\times \biggl( \int_{B_{2r}} \bigl(1+ \bigl\vert Du(x+\lambda e_{k}) \bigr\vert + \bigl\vert Du(x) \bigr\vert \bigr)^{s}\,dx \biggr)^{\frac{2-\gamma}{2}} \leq\frac{c \vert \lambda \vert ^{\gamma\theta}}{r^{2}}, \end{aligned}$$ where in the first inequality we have used the identity \(\frac {\gamma }{2-\gamma}(2-\bar{p}_{\infty})=s\). Appealing to (3.38) and the equivalence of the Nikolskii fractional space with the fractional Sobolev space [42], we have $$ Du\in W^{t,\gamma}_{\mathrm{loc}}(\Omega), \quad\text{for all } t\in [0,\theta], $$ and then the fractional-order Sobolev embedding theorem implies that $$ Du \in L^{\frac{d\gamma}{d-t\gamma}}\quad \text{for all } t\in [0,\theta]. $$ Step 2 (Higher integrability of Du). We set $$ T(s):=\frac{d\gamma}{d-\theta\gamma}=\frac{4sd}{10d-2\bar {p}_{\infty}d-4s}. $$ Then, from the definition of \(\bar{p}_{\infty}\), a direct calculation shows that $$ T(s)-s\geq\sigma:=\frac{6d+2\bar{p}_{\infty}d+4p_{\infty }}{10d-2\bar {p}_{\infty}d-4p_{\infty}}>0. $$ Taking into account (3.39), (3.40), we obtain $$ Du\in L^{\eta}(B_{\frac{r}{2}}), \quad\text{for all } \eta\in \bigl[1,T(s)\bigr].
$$ Recall that \(u\in V_{p(x)}\) is a weak solution to (1.6) and \(p(x)\geq p_{\infty}\); hence (3.29) holds with \(s=p_{\infty}\). We set $$ s_{0}:=p_{\infty},\qquad s_{1}:=s_{0}+ \frac{\sigma}{2}. $$ In virtue of (3.40), we see that $$ T(s_{0})>s_{1}>s_{0}=p_{\infty}. $$ Taking into account (3.41), $$ Du\in L^{s_{1}}(B_{\frac{r}{2}}). $$ By the Korn inequality, we have $$ u\in W^{1,s_{1}}_{\mathrm{loc}}(\Omega), $$ and it follows that (3.29) holds with \(s=s_{1}\). Now, if \(s_{1}> A_{d}\), we have proved the higher integrability of Du, and we can derive the Hölder continuity of u by the Sobolev imbedding theorem, since \(Du\in L^{\eta}(B_{\frac{ r}{2}})\) for all \(\eta\in [1,T(A_{d})]\) and \(T(A_{d})> d\). If \(s_{1}\leq A_{d}\), we continue the process above until some \(s_{i}> A_{d}\) \((i>1)\) is reached. Without loss of generality, we assume that \(s_{1}\leq A_{d}\); then from (3.29), (3.39)–(3.42), $$ Du\in L^{\eta_{1}}(B_{\frac{r}{2}}),\quad \eta_{1}\in \bigl[1,T(s_{1})\bigr]. $$ Note that \(T(s_{1})>T(s_{0})\), so we can increase the power of integrability of Du by a standard bootstrap argument. We set $$ \tilde{s}:=\sup \bigl\{ s\in[p_{\infty},A_{d}]: u \in W^{1,s}_{\mathrm{loc}}(\Omega ) \bigr\} . $$ We claim that \(\tilde{s}=A_{d}\); suppose, to the contrary, that \(\tilde{s}< A_{d}\). Define $$ \bar{s}:=\tilde{s}-\frac{\sigma}{4}; $$ by the definition of \(\tilde{s}\) we obtain \(u\in W^{1,\bar {s}}_{\mathrm{loc}}(\Omega)\), since \(\tilde{s}>p_{\infty}+\sigma\). From (3.41), it follows that $$ Du\in L^{\eta}(B_{\frac{r}{2}}), \quad\text{for all } \eta\in \bigl[1,T( \bar{s})\bigr]. $$ Then, choosing \(\eta:=T(\bar{s})-\frac{\sigma}{4}\) and taking into account (3.40), we find that $$ \eta\geq\bar{s}+\sigma-\frac{\sigma}{4}\geq\tilde{s}+\frac {\sigma}{2}, $$ which contradicts the definition (3.43) of \(\tilde{s}\).
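The algebra behind Step 2 can be spot-checked numerically. The script below is a sanity check only: the sample exponents are arbitrary, and the final iteration reapplies \(T\) directly rather than stepping by \(\sigma/2\) as in the text. It verifies the closed form of \(T(s)\) against the definition \(T(s)=d\gamma/(d-\theta\gamma)\), the two identities involving \(\gamma\), and the fact that in \(d=3\) iterating the map pushes the integrability exponent past \(d\) in a few steps:

```python
import math

def pbar(d):                      # \bar{p}_infty = 3d/(d+2)
    return 3 * d / (d + 2)

def theta(s, d):                  # interpolation exponent (s(d+2) - 3d)/(2s)
    return (s * (d + 2) - 3 * d) / (2 * s)

def gamma_(s, d):                 # gamma = 2s/(s + 2 - pbar)
    return 2 * s / (s + 2 - pbar(d))

def T_def(s, d):                  # definition: d*gamma/(d - theta*gamma)
    g = gamma_(s, d)
    return d * g / (d - theta(s, d) * g)

def T_closed(s, d):               # closed form: 4sd/(10d - 2*pbar*d - 4s)
    return 4 * s * d / (10 * d - 2 * pbar(d) * d - 4 * s)

for d, samples in ((2, (1.6, 1.8, 2.0)), (3, (2.0, 2.5, 3.0))):
    for s in samples:
        g = gamma_(s, d)
        # identity s*(gamma - 2)/gamma = pbar - 2 used before the Hoelder step
        assert math.isclose(s * (g - 2) / g, pbar(d) - 2, rel_tol=1e-12)
        # identity gamma/(2 - gamma) * (2 - pbar) = s used inside the Hoelder step
        assert math.isclose(g / (2 - g) * (2 - pbar(d)), s, rel_tol=1e-12)
        # the two expressions for T(s) agree
        assert math.isclose(T_def(s, d), T_closed(s, d), rel_tol=1e-12)

# bootstrap illustration in d = 3: T(s) > s for s > pbar(3) = 1.8, so iterating
# s -> T(s) from a hypothetical starting exponent s0 = 2 quickly exceeds d = 3
steps, s = [2.0], 2.0
while s <= 3.0:
    s = T_closed(s, 3)
    steps.append(s)
assert all(b > a for a, b in zip(steps, steps[1:])) and steps[-1] > 3.0
```

The strictly increasing sequence of exponents is exactly what delivers \(T(A_{d})>d\) and hence, via the Sobolev embedding, the Hölder continuity of u.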
By the conclusion above, from now on we have $$ u\in W^{1,s}_{\mathrm{loc}}(\Omega), \quad\text{for all } s \in[p_{\infty},A_{d}], $$ and then, from (3.41), we have $$ u\in W^{1,T(s)}_{\mathrm{loc}}(\Omega),\quad \text{for all } s\in [p_{\infty},A_{d}]. $$ Note that \(T(A_{d})>T(s)\), thus $$ u\in W^{1,\eta}_{\mathrm{loc}}(\Omega), \quad\text{for all } \eta\in \bigl[p_{\infty},T(A_{d})\bigr], $$ with \(T(A_{d})> d\). Now, making use of the Sobolev embedding theorem, from (3.44) we conclude that \(u\in C^{\alpha}(\Omega)\) for some \(\alpha\in(0,1)\). □

References

1. Acerbi, E., Mingione, G.: Regularity results for a class of functionals with nonstandard growth. Arch. Ration. Mech. Anal. 156, 121–140 (2001)
2. Acerbi, E., Mingione, G.: Regularity results for electrorheological fluids: the stationary case. C. R. Acad. Sci. Paris, Ser. I 334, 817–822 (2002)
3. Acerbi, E., Mingione, G.: Regularity results for stationary electro-rheological fluids. Arch. Ration. Mech. Anal. 164, 213–259 (2002)
4. Acerbi, E., Mingione, G., Seregin, G.: Regularity results for parabolic systems related to a class of non-Newtonian fluids. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 21(1), 25–60 (2004)
5. Bae, H.O., Jin, B.J.: Regularity of non-Newtonian fluids. J. Math. Fluid Mech. 16, 225–241 (2014)
6. Bellout, H., Bloom, F., Nečas, J.: Young measure-valued solutions for non-Newtonian incompressible fluids. Commun. Partial Differ. Equ. 19, 1763–1803 (1994)
7. Bogovskii, M.E.: Solutions of some problems of vector analysis, associated with the operators div and grad. In: Theory of Cubature Formulas and the Application of Functional Analysis to Problems of Mathematical Physics. Trudy Sem. S. L. Soboleva, vol. 1, pp. 5–40. Akad. Nauk SSSR Sibirsk. Otdel., Inst. Mat., Novosibirsk (1980)
8. Bonanno, G., Molica Bisci, G., Rădulescu, V.: Infinitely many solutions for a class of nonlinear eigenvalue problem in Orlicz–Sobolev spaces. C. R. Math. Acad. Sci. Paris 349(5–6), 263–268 (2011)
9. Diening, L.: Theoretical and numerical results for electrorheological fluids. PhD thesis, University of Freiburg, Germany (2002)
10. Diening, L., Ettwein, F., Růžička, M.: \(C^{1,\alpha}\)-Regularity for electrorheological fluids in two dimensions. Nonlinear Differ. Equ. Appl. 14(1–2), 207–217 (2007)
11. Diening, L., Harjulehto, P., Hästö, P., Růžička, M.: Lebesgue and Sobolev Spaces with Variable Exponents. Springer, Berlin (2011)
12. Diening, L., Málek, J., Steinhauer, M.: On Lipschitz truncations of Sobolev functions (with variable exponent) and their selected applications. ESAIM Control Optim. Calc. Var. 14(2), 211–232 (2008)
13. Diening, L., Růžička, M.: Strong solutions for generalized Newtonian fluids. J. Math. Fluid Mech. 7, 413–450 (2005)
14. Diening, L., Růžička, M.: An existence result for non-Newtonian fluids in nonregular domains. Discrete Contin. Dyn. Syst., Ser. S 3, 255–268 (2010)
15. Diening, L., Růžička, M., Wolf, J.: Existence of weak solutions for unsteady motions of generalized Newtonian fluids. Ann. Sc. Norm. Super. Pisa, Cl. Sci. 9(1), 1–46 (2010)
16. Edmunds, D., Rákosník, J.: Sobolev embeddings with variable exponent. Stud. Math. 143(3), 267–293 (2000)
17. Edmunds, D., Rákosník, J.: Sobolev embeddings with variable exponent II. Math. Nachr. 246/247, 53–67 (2002)
18. Fu, Y., Shan, Y.: On the removability of isolated singular points for elliptic equations involving variable exponent. Adv. Nonlinear Anal. 5(2), 121–132 (2016)
19. Galdi, G.P.: An Introduction to the Mathematical Theory of the Navier–Stokes Equations. Vol. 1. Linearized Steady Problems. Springer Tracts in Natural Philosophy, vol. 38. Springer, Berlin (1994)
20. Giaquinta, M.: Multiple Integrals in the Calculus of Variations and Nonlinear Elliptic Systems. Princeton Univ. Press, Princeton (1983)
21. Giusti, E.: Direct Methods in the Calculus of Variations. World Scientific, Singapore (2003)
22. Ho, K., Sim, I.: A-priori bounds and existence for solutions of weighted elliptic equations with a convection term. Adv. Nonlinear Anal. 6(4), 427–445 (2017)
23. Hudzik, H.: The problems of separability, duality, reflexivity and of comparison for generalized Orlicz–Sobolev spaces \(W^{k}_{M}(\Omega)\). Comment. Math. Prace Mat. 21, 315–324 (1979)
24. Kovácik, O., Rákosník, J.: On spaces \(L^{p(x)}\) and \(W^{k,p(x)}\). Czechoslov. Math. J. 41(116), 592–618 (1991)
25. Kristály, A., Repovš, D.: On the Schrödinger–Maxwell system involving sublinear terms. Nonlinear Anal., Real World Appl. 13(1), 213–223 (2012)
26. Ladyzhenskaya, O.A.: New equations for description of motion of viscous incompressible fluids and global solvability of boundary value problems for them. Proc. Steklov Inst. Math. 102, 95–118 (1967)
27. Ladyzhenskaya, O.A.: On nonlinear problems of continuum mechanics. In: Proc. Internat. Congr. Math. (Moscow 1966), pp. 560–573. Nauka, Moscow (1968). English translation in: Amer. Math. Soc. Translation (2) 70 (1968)
28. Ladyzhenskaya, O.A.: On some modifications of the Navier–Stokes equations for large gradients of velocity. Zap. Nauchn. Sem. LOMI 7, 126–154 (1968). English translation in: Sem. Math. V.A. Steklov Math. Inst. Leningrad 7 (1968)
29. Málek, J., Nečas, J., Novotný, A.: Measure-valued solutions and asymptotic behavior of a multipolar model of a boundary layer. Czechoslov. Math. J. 42(117), 549–576 (1992)
30. Málek, J., Nečas, J., Rokyta, M., Růžička, M.: Weak and Measure-Valued Solutions to Evolutionary Partial Differential Equations. Applied Mathematics and Mathematical Computation, vol. 13. Chapman & Hall, London (1996)
31. Málek, J., Nečas, J., Růžička, M.: On the non-Newtonian incompressible fluids. Math. Models Methods Appl. Sci. 3(1), 35–63 (1993)
32. Málek, J., Nečas, J., Růžička, M.: On weak solutions to a class of non-Newtonian incompressible fluids in bounded three-dimensional domains: the case \(p \geq 2\). Adv. Differ. Equ. 6(3), 257–302 (2001)
33. Meyers, N.G., Elcart, A.: Some results on regularity of weak solutions of nonlinear elliptic systems. J. Reine Angew. Math. 311/312, 145–169 (1979)
34. Mihăilescu, M., Rădulescu, V.: A multiplicity result for a nonlinear degenerate problem arising in the theory of electrorheological fluids. Proc. R. Soc. Lond., Ser. A, Math. Phys. Eng. Sci. 462(2073), 2625–2641 (2006)
35. Mosolov, P., Mjasnikov, V.: On the correctness of boundary value problems in the mechanics of continuous media. Mat. Sb. 88(130), 256–267 (1972)
36. Nečas, J.: Theory of multipolar viscous fluids. In: The Mathematics of Finite Elements and Applications, VII (Uxbridge, 1990), pp. 233–244. Academic Press, London (1991)
37. Rădulescu, V.D., Repovš, D.D.: Partial Differential Equations with Variable Exponents. Variational Methods and Qualitative Analysis. Monographs and Research Notes in Mathematics. CRC Press, Boca Raton (2015)
38. Rajagopal, K., Růžička, M.: On the modeling of electrorheological materials. Mech. Res. Commun. 23, 401–407 (1996)
39. Repovš, D.: Stationary waves of Schrödinger-type equations with variable exponent. Anal. Appl. (Singap.) 13(6), 645–661 (2015)
40. Růžička, M.: Flow of shear dependent electrorheological fluids: unsteady space periodic case. In: Sequeira, A. (ed.) Appl. Nonlinear Anal., pp. 485–504. Plenum, New York (1999)
41. Růžička, M.: Electrorheological Fluids: Modeling and Mathematical Theory. Lecture Notes in Mathematics, vol. 1748. Springer, Berlin (2000)
42. Simon, J.: Sobolev, Besov and Nikolskii fractional spaces: imbeddings and comparisons for vector valued spaces on an interval. Ann. Mat. Pura Appl. (4) CLVII, 117–148 (1990)
43. Stredulinsky, E.W.: Higher integrability from reverse Hölder inequalities. Indiana Univ. Math. J. 29, 407–413 (1980)
44. Tan, Z., Zhou, J.: Higher integrability of weak solution of a nonlinear problem arising in the electrorheological fluids. Commun. Pure Appl. Anal. 15(4), 1335–1350 (2016)
45. Tan, Z., Zhou, J.: Partial regularity of a certain class of non-Newtonian fluids. J. Math. Anal. Appl. 455(2), 1529–1558 (2017)
46. Wolf, J.: Existence of weak solutions to the equations of non-stationary motion of non-Newtonian fluids with shear rate dependent viscosity. J. Math. Fluid Mech. 9(1), 104–138 (2007)

Acknowledgements. The authors wish to thank the referees and the editor for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (No. 11271305, 11531010). The second author was partially supported by the China Scholarship Council (No. 201706310012) as an exchange Ph.D. student at Purdue University.

Author information. Zhong Tan, Jianfeng Zhou and Wenxuan Zheng: School of Mathematical Sciences, Xiamen University, Xiamen, P.R. China. Wenxuan Zheng: School of Mechanical and Electronic Engineering, Tarim University, Alar, P.R. China. Each of the authors contributed to each part of this study equally; all authors read and approved the final manuscript. Correspondence to Wenxuan Zheng.

Citation. Tan, Z., Zhou, J., Zheng, W.: Hölder continuity of weak solution to a nonlinear problem with non-standard growth conditions. Bound. Value Probl. 2018, 131 (2018). https://doi.org/10.1186/s13661-018-1051-6

Keywords: higher integrability; Hölder continuity; nonlinear problem; fractional Sobolev space; electrorheological fluids
Chemistry - How to rationalise the resonance structures and hybridisation of the nitrogen in a conjugated amine?
Solution 1: First of all, are they correct? ChemBioDraw had some complaints, but as far as I can see there's the same number of electrons, and no valence orbitals exceeding capacity. Yes, these are the six most important resonance structures for this compound. The reason ChemDraw complains is that it is trying to act smarter than you, and it most certainly is not. It interprets that negative formal charge on the carbon atom as implying a lone pair, since carbon can only have the negative charge if it also has a lone pair. When you add the lone pair and the charge, ChemDraw is suddenly stupid and thinks you have exceeded the octet on carbon (3 bonds + 1 explicit LP + 1 implied LP from the charge). In the first structure, nitrogen is $\mathrm{sp^3}$ hybridised, but in all others it's clearly $\mathrm{sp^2}$ hybridised. So what does that mean? It's somewhere in between, but closer to $\mathrm{sp^3}$-hybridised? If the lone pair on a nitrogen (or on any atom) participates in resonance, then that nitrogen (or whatever atom) must be $\mathrm{sp^2}$-hybridized so that the lone pair is in a $\mathrm{p}$-like orbital to ensure appropriate symmetry for $\pi$-overlap. Put another way, if an atom is $\mathrm{sp^2}$-hybridized in one resonance structure, then it is $\mathrm{sp^2}$-hybridized in all of them. Atoms that are $\mathrm{sp^2}$-hybridized and $\mathrm{sp^3}$-hybridized have differing geometries, which is not permitted in the resonance phenomenon. In truth, hybridization is an approximation we make to make quantum mechanics jibe with molecular geometry. Geometry is real (empirically determinable) and hybridization is fictitious (not empirically determinable), but hybridization makes QM behave better conceptually and mathematically.
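The energetic payoff of letting a lone pair join a π system can be illustrated with a toy Hückel calculation. This is a deliberately crude sketch: the C=C–N fragment is modelled as a three-centre chain with identical Coulomb (α) and resonance (β) integrals at every centre, which is an assumption (real nitrogen parameters differ), so only the qualitative conclusion matters:

```python
import math

# Hueckel treatment of a 3-centre, 4-electron allyl-type system (C=C-N with
# identical alpha and beta at all centres -- a crude, illustrative assumption).
# Orbital energies are alpha + x*beta, where x runs over the eigenvalues of the
# 3-site path-graph adjacency matrix, known in closed form: x_k = 2*cos(k*pi/4).
xs = sorted((2 * math.cos(k * math.pi / 4) for k in (1, 2, 3)), reverse=True)

# four pi electrons fill the two lowest orbitals (largest x, since beta < 0)
E_deloc = 2 * xs[0] + 2 * xs[1]   # = 2*sqrt(2), in units of beta
E_local = 2 * 1 + 2 * 0           # isolated C=C (x = 1) plus a lone pair (x = 0)
assert E_deloc > E_local          # conjugation lowers the pi energy
```

The delocalized system gains roughly 0.83β of π energy, which is the Hückel-level restatement of "the lone pair participating in resonance requires (and rewards) the p-like orbital".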
Hybridization is also a useful predictor of chemical reactivity of various bonds and functional groups in organic chemistry. Experimentally, I would guess that the nitrogen atom is trigonal planar (or very close to it). Trigonal planar implies $\mathrm{sp^2}$-hybridized (not the other way around). In molecular orbital theory, we do away with the need for both resonance and hybridization. This molecule would have 7 $\pi$ orbitals formed from linear combinations of the p orbitals on the 6 participating carbon atoms and the nitrogen atom. The probability density function plots of these 7 orbitals will be more complex than you might be used to, but they will suggest the same electron density and charge density as a resonance hybrid assembled from your six resonance structures. See the Wikipedia article on conjugated systems, which is not very good but will give you the idea. My reasoning is that the first structure contributes more. The five charge-separated resonance structures are more important than you think, since they imply that the five-membered ring is aromatic like the surprisingly stable cyclopentadienyl carbanion. However, you might not yet be this far along in your studies of organic chemistry. At the introductory level of understanding of resonance in organic molecules, the first structure is most important for the reasons you list.
Solution 2: I wanted to add a perspective to this since this can be a contentious issue, and I would still say knowing whether a nitrogen is $sp^2$ or $sp^3$ hybridized is not straightforward. I was originally hoping to add this under this question specifically about aniline, but that question is now closed and marked as duplicate, with a link to this question. In a simple molecule like aniline, one might think that the hydrogens of the amine would be planar with the ring. However, we know from crystal structures of aniline that it is indeed pyramidal.
If you have access to the Cambridge Structural Database (CSD), you can see this in structure BAZGOY. The improper dihedral angle of the hydrogens is ~26 degrees in that structure. I was also curious to see what QM had to say about this. I performed some optimizations of aniline with the improper dihedral angle of the hydrogens constrained at various angles. Here is a drawing of the improper dihedral I constrained. I used the MP2 method and cc-pVTZ(-F) basis set, in implicit solvent (PBF) using the Jaguar software from Schrodinger. You can see in this plot and gif that the lowest energy angle was at ~35 degrees. That's pretty much completely pyramidal (in ethane, an equivalent angle is 33.4). However, that being said, I also wouldn't say this is dogmatic. This is an implicit solvent calculation, and in real water with real hydrogens and oxygens to hydrogen bond to, things may still be different. NMR might be able to help figure this out, or it could further confuse matters if the timescale of flipping smears the signal to give a flat angle when it is not. I should also point out that there are other crystal structures that contradict this. If you look at the crystal structure VOMFOS, which is of an adenosine analog, the amine hydrogens are planar with the ring. Is that just an artifact of the crystallization packing network, or a true representation of the hybridization of that nitrogen? Hard to say. I think my main point here is to be careful what you assume. If you asked me on a test in undergrad whether the amine on aniline is $sp^2$ or $sp^3$, I would have said $sp^2$ with confidence. Experimental and computational evidence suggests that might not be the case.
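For anyone who wants to reproduce this kind of measurement, the torsion (dihedral) angle itself is simple to compute from Cartesian coordinates. A minimal pure-Python sketch (no chemistry toolkit; the exact four-atom definition of the improper dihedral used in the original calculation is not specified, so the test geometries below are generic):

```python
import math

def dihedral(p0, p1, p2, p3):
    """Torsion angle (degrees) of the p0-p1-p2-p3 chain of 3D points."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n0, n1 = cross(b0, b1), cross(b1, b2)       # normals of the two planes
    m = cross(n0, tuple(x / norm(b1) for x in b1))
    return math.degrees(math.atan2(dot(m, n1), dot(n0, n1)))

# sanity checks on planar geometries: a trans zigzag gives 180 degrees,
# a cis arrangement gives 0; a pyramidalized amine would give something between
trans_planar = [(0, 0, 0), (1, 0, 0), (1.5, 1, 0), (2.5, 1, 0)]
print(round(abs(dihedral(*trans_planar)), 6))   # -> 180.0
```

For the amine case one would feed in the N, ring-carbon, and hydrogen coordinates from the crystal structure and read off the deviation from planarity.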
Chemistry - Regioselectivity of acid-catalyzed ring-opening of epoxides Chemistry - Easily removable material that sticks to skin Chemistry - Is there a point at which Ethanol (E10) fuel becomes harmful to gas tanks or engines if not used? Chemistry - Significant Figures Interpretation Chemistry - Why are these molecular orbitals invalid for hexatriene? Chemistry - Is there any substance that's a 4-4-4 on the NFPA diamond? Chemistry - Why are all the phenyl protons in benzyl alcohol equivalent in the ¹H-NMR spectrum? Chemistry - Saturated vs unsaturated fats - Structure in relation to room temperature state? Chemistry - Carbon with 5 bonds? Chemistry - Dissolving Organic Tissues Chemistry - How do you create primary amines from alcohols?
Existence and convergence results of meromorphic solutions to the equilibrium system with angular velocity
Bo Meng
Boundary Value Problems, volume 2019, Article number: 88 (2019)
We study the equilibrium system with angular velocity for the prey. This system is a generalization of the two-species equilibrium model with Neumann-type boundary condition. First, we consider the asymptotic stability of equilibrium points for the system of ordinary differential equation type. Then the existence of meromorphic solutions and the stability of equilibrium points for the system of weakly coupled meromorphic type are discussed. Finally, the existence of nonnegative meromorphic solutions for the system of strongly coupled meromorphic type is investigated, and the asymptotic stability of the unique positive equilibrium point of the system is proved by constructing meromorphic functions. The equilibrium system with angular velocity is noted for its pattern-forming behavior and has been widely used as a model for the study of obstacle problems involving reservoir simulation. These include the effects of noise on bifurcations, pattern selection, spatiotemporal chaos, and the dynamics of defects; see, for example, [1,2,3,4,5,6,7,8] and the references therein for details. It has also been used to model patterns in simple fluids and in a variety of complex fluids and biological materials, such as neural tissue [3, 7]. These problems are widely studied and much used in many areas of mathematics and physics; see [3, 5, 6, 9]. Since it was introduced by Paul Dirac in order to obtain a form of quantum theory compatible with special relativity, the Dirac equation has played a critical role in several fields of mathematics and physics, such as quantum mechanics, Clifford analysis, and partial differential equations.
As one of the universal equilibrium systems used in the description of pattern formation in spatially extended dissipative systems, the general equilibrium differential equation can also be found in the study of convective hydrodynamics, plasma confinement in toroidal devices, viscous film flow, and bifurcating solutions of the modified equilibrium differential equation [6, 10, 11]. In recent years, references such as Sheng et al. [12], Zhai et al. [13], Zhang [14], Wu et al. [15], Sun et al. [16], Li et al. [17], Bai and Sun [18], and Wang et al. [19] have introduced many elegant techniques to meet the practical requirements of modern multi-processor computing systems. The linearization characteristics of systems of equilibrium boundary value problems also merit further development. Ardila [20] studied the existence and stability of standing wave solutions of a three-coupled nonlinear Schrödinger system related to the Raman amplification in a plasma. Hu and Yin [21] considered the dynamics of the compressible Navier–Stokes equation in one spatial dimension for a viscous fluid with vanishing thermal conductivity. In the case of ideal polytropic gases, they showed that the rarefaction waves in this medium are stable with regard to sufficiently weak perturbations of the velocity and pressure fields. Motivated and inspired by the references [18,19,20,21], in this paper we further consider the following universal equilibrium equations with nondifferentiable boundary conditions: $$ \begin{gathered} -\Delta u + u= a\bigl( \vert x \vert \bigr) \vert u \vert ^{p-2}u, \quad x\in B_{1} \\ u>0, \quad x\in B_{1}, \\ \frac{\partial u}{\partial \nu }= 0, \quad x\in \partial B_{1}, \end{gathered} $$ where \(B_{1}\) is the unit ball centered at the origin in \(\mathbb{R} ^{n}\), \(n\geq 3\), \(p>2\), and \(a\in L^{1}(0,1)\) is increasing, not constant, and satisfies \(a(r)>0\) a.e. in \([0,1]\).
Meanwhile, we are also interested in the equilibrium elliptic system given by $$ \begin{gathered} -\Delta u + u= f_{u}\bigl( \vert x \vert , u, v\bigr), \quad x\in B_{1} \\ -\Delta v + v= f_{v}\bigl( \vert x \vert , u, v\bigr), \quad x\in B_{1} \\ \frac{\partial u}{\partial \nu }=\frac{\partial v}{\partial \nu }= 0, \quad x\in \partial B_{1}, \end{gathered} $$ under suitable assumptions on f. Our assumptions do allow some supercritical nonlinearities. Problems in this abstract form are often referred to as equilibrium problems. More details on this problem class can be found in [9, 22]. Let \(I=\{1,2,\ldots,n\}\) be an index set, and let \(H_{i}\) be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle _{i}\) and norm \(\|\cdot \|_{i}\), respectively. Let \(A: H_{1}\rightarrow H_{1}\), \(B: H_{2}\rightarrow H_{2}\), \(F_{1}: H_{1}\times H_{2}\rightarrow H _{1}\), and \(\eta _{1}: H_{1}\times H_{1}\rightarrow H_{1}\) be mappings. Let \(a_{i}: H_{i}\times H_{i}\rightarrow \mathbb{R}\) be a coercive continuous map such that (C1) \(a_{i}(\sigma _{i},\sigma _{i})\geq c_{i} \|\sigma _{i}\| _{i}^{2}\) and \(|a_{i}(\varrho _{i},\sigma _{i})|\leq d_{i} \|\varrho _{i} \|_{i}\cdot \|\sigma _{i}\|_{i}\) for any \(\varrho _{i},\sigma _{i}\in H_{i}\). Let \(b_{i}: H_{i}\times H_{i}\rightarrow \mathbb{R}\) be a map with nondifferentiable terms such that: (i) \(b_{i}\) is linear in its first variable; (ii) \(b_{i}\) is a convex function; (iii) there exists a positive constant \(\gamma _{i}\) satisfying $$ \gamma _{i} \Vert \varrho _{i} \Vert _{i} \cdot \Vert \sigma _{i} \Vert _{i}\geq a_{i}( \varrho _{i},\sigma _{i}) $$ and $$ b_{i}(\varrho _{i},\sigma _{i}-w_{i}) \geq b_{i}(\varrho _{i},\sigma _{i})-b _{i}(\varrho _{i},w_{i}) $$ for any \(\varrho _{i},\sigma _{i},w_{i}\in H_{i}\).
Based on the above notation, we then define the proposed system of generalized nonlinear variational inequality problems as follows: Find \((x,y)\in H_{1}\times H_{2}\) such that $$ \textstyle\begin{cases} \langle F_{1}(Ax,By)-f_{1},\eta _{1}(\sigma _{1},x)\rangle _{1}+a_{1}(x, \sigma _{1}-x)+b_{1}(x,\sigma _{1})-b_{1}(x,x)\geq 0\\ \quad \forall \sigma _{1}\in H_{1}, \\ \langle F_{2}(Ax,By)-f_{2},\eta _{2}(\sigma _{2},y)\rangle _{2}+a_{2}(y, \sigma _{2}-y)+b_{2}(y,\sigma _{2})-b_{2}(y,y)\geq 0\\ \quad \forall \sigma _{2}\in H_{2}, \end{cases} $$ where \(f_{i}\in H_{i}\) is given for each \(i\in I\). Remark 1 There are some special cases of the model problem (3) (see [23]): If \(A=B=I\), \(f_{i}=0\), and \(a_{i}(\varrho _{i},\sigma _{i})=0\), then (3) is equivalent to $$ \bigl\langle F_{1}(x,y),\eta _{1}(\sigma _{1},x)\bigr\rangle _{1}+b_{1}(x,\sigma _{1})-b_{1}(x,x)\geq 0, $$ where \(\sigma _{1}\in H_{1}\), together with $$ \bigl\langle F_{2}(x,y),\eta _{2}(\sigma _{2},y)\bigr\rangle _{2}+b_{2}(y,\sigma _{2})-b_{2}(y,y)\geq 0, $$ where \(\sigma _{2}\in H_{2}\). If \(H_{1}=H_{2}=H\), \(f_{1}=f_{2}=f\), \(\eta _{1}=\eta _{2}= \eta \), \(a_{1}=a_{2}=a\), \(b_{1}=b_{2}=b\), then (3) is reduced to $$ \bigl\langle F(Ax,Bx)-f,\eta (v,x)\bigr\rangle +a(x,v-x)+b(x,v)-b(x,x) \geq 0, $$ where \(v\in H\). Let us recall some basic definitions and lemmas that we need in the forthcoming analysis. Definition 1 We say that the functional Φ satisfies the Palais–Smale condition (see [15, 24,25,26]) if any sequence \(\{u_{n}\} _{n \in \mathbb{N}} \subset X\) has a convergent subsequence provided \(\{\varPhi (u_{n}) \}_{n \in \mathbb{N}}\) is bounded and \(\varPhi ' (u_{n}) \to 0\) as \(n \to + \infty \).
For any \(\lambda >0\), we define the subfunctions associated with the universal equilibrium operator (1) by (see [11]) $$ \mathbb{I}_{+}^{\tau , \lambda } f (x) = \frac{1}{\varGamma (\tau )} \int _{- \infty }^{x} f (\xi ) (x - \xi )^{\tau - 1} e^{- \lambda (x - \xi )} \,d \xi $$ and the subfunctions associated with the universal equilibrium operator (2) by $$ \mathbb{I}_{-}^{\tau , \lambda } f (x) = \frac{1}{\varGamma (\tau )} \int _{x}^{+ \infty } f (\xi ) (\xi - x)^{\tau - 1} e^{- \lambda (\xi - x)} \,d \xi . $$ The positive and negative tempered equilibrium derivatives of order \(0 < \tau < 1\) are defined as follows (see [27]): $$\begin{aligned}& \mathbb{D}_{+}^{\tau , \lambda } f (x) = \lambda ^{\tau }f (x) + \frac{ \tau }{\varGamma (1 - \tau )} \int _{-\infty }^{x} \frac{f (x) - f(\xi )}{(x -\xi )^{\tau + 1}} e^{- \lambda (x - \xi )} \,d \xi , \\& \mathbb{D}_{-}^{\tau , \lambda } f (x) = \lambda ^{\tau }f (x) + \frac{ \tau }{\varGamma (1 - \tau )} \int _{x}^{+\infty } \frac{f (x) - f( \xi )}{(\xi -x)^{\tau + 1}} e^{- \lambda (\xi - x)} \,d \xi , \end{aligned}$$ for any \(\lambda >0\), respectively, where \(f : \mathbb{R} \to \mathbb{R}\). Define the Banach space $$ W_{\lambda }^{\tau , 2} (\mathbb{R}) = \biggl\{ f \in L^{2}(\mathbb{R}): \int _{\mathbb{R}}\bigl(\lambda ^{2} + \omega ^{2} \bigr)^{\tau } \bigl\vert \widehat{f} ( \omega ) \bigr\vert ^{2} \,d \omega < \infty \biggr\} $$ with the norm $$ \Vert f \Vert _{\tau , \lambda }= \biggl( \int _{\mathbb{R}}\bigl(\lambda ^{2} + \omega ^{2} \bigr)^{\tau } \bigl\vert \widehat{f} (\omega ) \bigr\vert ^{2} \,d \omega \biggr)^{1/2}. 
$$ For any \(f \in W_{\lambda }^{\tau , 2} (\mathbb{R})\), the derivatives \(\mathbb{D}_{\pm }^{\tau , \lambda } f\) defined above act in Fourier space as multiplication: \(\mathbb{D}_{\pm }^{\tau , \lambda } f\) has Fourier transform \((\lambda \pm i \omega )^{\tau }\widehat{f} (\omega )\) (see [23]), where the Fourier transform of \(u(x)\) is defined as follows: $$\mathcal{F} (u) (\xi ) = \int _{-\infty }^{\infty }e^{- i x \cdot \xi } u(x) \,dx. $$ Now we state the following known results. Lemma 1 (see [23]) $$ \mathbb{D}_{\pm }^{\tau , \lambda } \mathbb{I}_{\pm }^{\tau , \lambda } f (x) = f (x) $$ for any \(\tau , \lambda >0\) and \(f \in L^{2}(\mathbb{R})\), and $$ \mathbb{I}_{\pm }^{\tau , \lambda } \mathbb{D}_{\pm }^{\tau , \lambda } f (x) = f (x) $$ for any \(f \in W_{\lambda }^{\tau , 2} (\mathbb{R})\). Moreover, $$ \bigl\langle f, \mathbb{D}_{+}^{\tau , \lambda } g \bigr\rangle _{L^{2}( \mathbb{R})} = \bigl\langle \mathbb{D}_{-}^{\tau , \lambda } f, g \bigr\rangle _{L^{2}(\mathbb{R})} $$ for any \(\tau , \lambda >0\) and \(f, g \in W_{\lambda }^{\tau , 2} ( \mathbb{R})\). Lemma 2 (see [2]) For any \(\tau , \lambda >0\) and \(p \geq 1\), \(\mathbb{I}_{\pm }^{\tau , \lambda }: L^{p}(\mathbb{R}) \to L^{p}(\mathbb{R})\) are bounded equilibrium operators with $$ \bigl\Vert \mathbb{I}_{\pm }^{\tau , \lambda } f \bigr\Vert _{L^{p}(\mathbb{R})} \leq \lambda ^{- \tau } \Vert f \Vert _{L^{p}(\mathbb{R})}. $$ Moreover, $$ \mathbb{I}_{\pm }^{\tau , \lambda } \mathbb{I}_{\pm }^{\beta , \lambda } f (x) = \mathbb{I}_{\pm }^{\tau + \beta , \lambda } f (x) $$ for any \(\tau , \beta , \lambda >0\) and \(f \in L^{p} (\mathbb{R})\), and $$ \bigl\langle f, \mathbb{I}_{+}^{\tau , \lambda } g \bigr\rangle _{L^{2}( \mathbb{R})} = \bigl\langle \mathbb{I}_{-}^{\tau , \lambda } f, g \bigr\rangle _{L^{2}(\mathbb{R})} $$ for any \(\tau , \lambda >0\) and \(f, g \in L^{2} (\mathbb{R})\).
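The \(L^{p}\) bound \(\|\mathbb{I}_{\pm }^{\tau ,\lambda } f\|_{L^{p}} \leq \lambda ^{-\tau }\|f\|_{L^{p}}\) stated above can be seen directly from Young's convolution inequality (a standard argument, sketched here for completeness): \(\mathbb{I}_{+}^{\tau ,\lambda }\) is convolution with the kernel \(k(u) = \frac{1}{\Gamma (\tau )} u^{\tau -1} e^{-\lambda u} \mathbf{1}_{(0,\infty )}(u)\), whose \(L^{1}\) norm is

```latex
\| k \|_{L^{1}(\mathbb{R})}
  = \frac{1}{\Gamma(\tau)} \int_{0}^{\infty} u^{\tau-1} e^{-\lambda u} \, du
  = \frac{1}{\Gamma(\tau)} \cdot \frac{\Gamma(\tau)}{\lambda^{\tau}}
  = \lambda^{-\tau},
```

so \(\|\mathbb{I}_{\pm }^{\tau ,\lambda } f\|_{L^{p}} \leq \|k\|_{L^{1}} \|f\|_{L^{p}} = \lambda ^{-\tau } \|f\|_{L^{p}}\) for every \(p \geq 1\).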
Next, for \(0 < \tau < 1\), we define a fractional Sobolev space \(H^{\tau }(\mathbb{R})\) as follows: $$\begin{aligned} H^{\tau }(\mathbb{R}) = \overline{C_{0}^{\infty }( \mathbb{R})}^{\| \cdot \|_{\tau }} \end{aligned}$$ endowed with $$ \Vert u \Vert _{\tau } = \biggl( \int _{\mathbb{R}} \bigl\vert u (t) \bigr\vert ^{2} \,dt + \int _{\mathbb{R}} \vert \omega \vert ^{2 \tau } \bigl\vert \widehat{u} (\omega ) \bigr\vert ^{2} \,d \omega \biggr)^{1/2}. $$ Moreover, $$\begin{aligned}& 2^{\frac{\tau - 1}{2}} \Vert u \Vert _{\tau } \leq \Vert u \Vert _{\tau , 1} \leq \Vert u \Vert _{\tau }, \end{aligned}$$ $$\begin{aligned}& \Vert u \Vert _{\tau , 1} \leq \Vert u \Vert _{\tau , \lambda } \leq \lambda ^{\tau } \Vert u \Vert _{\tau ,1} \quad \text{for } \lambda \geq 1, \end{aligned}$$ $$\begin{aligned}& \Vert u \Vert _{\tau , \lambda } < \Vert u \Vert _{\tau , 1} < \lambda ^{-\tau } \Vert u \Vert _{\tau ,\lambda } \quad \text{for } 0 < \lambda < 1 \end{aligned}$$ for \(0 < \tau < 1\), where \(\|u\|_{\tau , 1}\) is the norm on \(W_{1}^{\tau , 2} (\mathbb{R})\), and so \(W_{1}^{\tau , 2} (\mathbb{R}) = H^{\tau }(\mathbb{R})\) with equivalent norms. Lemma 3 (see [5, 18]) Let \(\tau > 1/2\). Then any \(u \in W _{\lambda }^{\tau , 2} (\mathbb{R})\) is uniformly continuous and bounded, and there exists a constant \(C= C_{\tau }\) such that $$ \sup_{t \in \mathbb{R}} \bigl\vert u (t) \bigr\vert \leq C \Vert u \Vert _{\tau ,\lambda }. $$ From Lemma 3 and (5)–(7), we have the following implication: if \(u \in W_{\lambda }^{\tau , 2}\) with \(\frac{1}{2} < \tau < 1\), then \(u \in L^{q} (\mathbb{R})\) for all \(q \in [2, \infty )\) as $$\begin{aligned} \int _{\mathbb{R}} \bigl\vert u (t) \bigr\vert ^{q} \,dt \leq \Vert u \Vert _{\infty }^{q -2} \Vert u \Vert _{L^{2} (\mathbb{R})}^{2} \leq 2^{1 - \tau } C^{q-2} \Vert u \Vert _{\tau , \lambda }^{q}. \end{aligned}$$ The imbedding of \(W_{\lambda }^{\tau , 2}\) in \(L^{q} (-T, T)\) is compact for \(q \in (2, \infty )\) and any \(T > 0\) (see [3]).
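These norm equivalences reduce to pointwise bounds on the Fourier weights: for \(0 < \tau < 1\), \(2^{\tau -1}(1+\omega ^{2\tau }) \leq (1+\omega ^{2})^{\tau } \leq 1+\omega ^{2\tau }\) (concavity and subadditivity of \(t \mapsto t^{\tau }\)), and \((1+\omega ^{2})^{\tau } \leq (\lambda ^{2}+\omega ^{2})^{\tau } \leq \lambda ^{2\tau }(1+\omega ^{2})^{\tau }\) when \(\lambda \geq 1\). A quick grid check of both chains (parameter values are illustrative only):

```python
tau, lam = 0.6, 3.0
ws = [i * 0.1 for i in range(2001)]   # omega in [0, 200]; the weights are even in omega
eps = 1e-12                           # floating-point slack at the equality cases

# 2^(tau-1) (1 + w^(2 tau)) <= (1 + w^2)^tau <= 1 + w^(2 tau)
ok_tau = all(
    2 ** (tau - 1) * (1 + w ** (2 * tau)) <= (1 + w * w) ** tau + eps
    and (1 + w * w) ** tau <= 1 + w ** (2 * tau) + eps
    for w in ws)

# (1 + w^2)^tau <= (lam^2 + w^2)^tau <= lam^(2 tau) (1 + w^2)^tau   for lam >= 1
ok_lam = all(
    (1 + w * w) ** tau <= (lam * lam + w * w) ** tau + eps
    and (lam * lam + w * w) ** tau <= lam ** (2 * tau) * (1 + w * w) ** tau + eps
    for w in ws)
print(ok_tau, ok_lam)
```

Integrating each pointwise chain against \(|\widehat{u}(\omega )|^{2}\) recovers the displayed norm inequalities.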
Existence and convergence In this section, we prove the existence of a solution of (3) and discuss the convergence of the iterative sequence. Theorem 1 Assume that u is a critical point of $$ I(w):= \psi (w)-\frac{1}{p} \int _{B_{1}} a\bigl( \vert x \vert \bigr) \vert w \vert ^{p} \,dx. $$ If there exists \(v \in \operatorname{Dom}(\psi )\) satisfying the equilibrium equation $$ \begin{gathered} -\Delta v + v= a\bigl( \vert x \vert \bigr) \vert u \vert ^{p-2}u, \quad x\in B_{1}, \\ \frac{\partial v}{\partial \nu }= 0, \quad x\in \partial B_{1}, \end{gathered} $$ then u is a meromorphic solution of (3). Assume that \(\{\varrho _{n}\}_{n \in \mathbb{N}} \subset W_{\lambda }^{\tau ,2} (\mathbb{R})\) is a sequence such that \(\{\varPhi (\varrho _{n})\}_{n \in \mathbb{N}}\) is bounded and \(\varPhi '( \varrho _{n}) \to 0\) as \(n \to \infty \). Then there exists a positive constant D such that $$ \bigl\vert \varPhi (\varrho _{n}) \bigr\vert \leq D \quad \text{and} \quad \bigl\Vert \varPhi '(\varrho _{n}) \bigr\Vert _{(W_{\lambda }^{\tau ,2} (\mathbb{R}))^{*}} \leq D $$ for any \(n \in \mathbb{N}\), where \((W_{\lambda }^{\tau ,2} ( \mathbb{R}))^{*}\) is the dual space of \(W_{\lambda }^{\tau ,2} ( \mathbb{R})\). Firstly, we show that \(\{\varrho _{n}\}_{n \in \mathbb{N}}\) is bounded. Without loss of generality, we assume that $$ \inf_{n} \Vert \varrho _{n} \Vert _{\tau ,\lambda } = \eta > 0, $$ and denote by \(\varrho = \varrho (\eta )\) the number corresponding to \(\delta = \eta ^{2}\) in (C1) such that $$ M \bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}\bigr) \geq \varrho $$ for all n.
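Conditions (C1)–(C2) are not restated in this excerpt, but a standard Kirchhoff-type example satisfying this kind of lower bound is \(M(t) = a + bt\) with \(a, b > 0\): its primitive \(\widehat{M}(t) = at + bt^{2}/2\) obeys \(\widehat{M}(t) \geq \frac{1}{\varUpsilon } M(t)t\) with \(\varUpsilon = 2\), which is the inequality invoked in the estimate that follows. A quick numerical check (illustrative coefficients only):

```python
a, b, Upsilon = 1.0, 2.0, 2.0

def M(t):
    return a + b * t                # Kirchhoff-type coefficient, bounded below by a > 0

def M_hat(t):
    return a * t + b * t * t / 2.0  # M_hat(t) = integral of M(s) over [0, t]

# Check M_hat(t) >= (1/Upsilon) * M(t) * t on a grid of t >= 0.
grid = [i * 0.05 for i in range(400)]
ok = all(M_hat(t) >= M(t) * t / Upsilon - 1e-12 for t in grid)
print(ok)
```

Algebraically, \(\widehat{M}(t) - \frac{1}{2}M(t)t = \frac{a t}{2} \geq 0\) for \(t \geq 0\), so the grid check merely illustrates the identity.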
In view of (C2) and (12), one gets $$\begin{aligned} D + D \Vert \varrho _{n} \Vert _{\tau ,\lambda } \geq& \varPhi ( \varrho _{n})- \frac{1}{ \mu } \varPhi ' (\varrho _{n}) \varrho _{n} \\ = &\frac{1}{2} \widehat{M} \bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}\bigr) - \frac{1}{\mu } M \bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}\bigr) \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2} \\ &{} -\frac{1}{\mu } \int _{\mathbb{R}} \bigl(\mu F \bigl(t, \varrho _{n}(t)\bigr) - f \bigl(t, \varrho _{n}(t)\bigr) \varrho _{n} (t)\bigr) \,dt \\ \geq &\biggl(\frac{1}{2 \varUpsilon } - \frac{1}{\mu } \biggr) M\bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}\bigr) \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2} \\ \geq &\varrho \biggl(\frac{1}{2 \varUpsilon } - \frac{1}{\mu } \biggr) \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}. \end{aligned}$$ Since \(\mu > 2 \varUpsilon \), the boundedness of \(\{\varrho _{n}\}_{n \in \mathbb{N}}\) follows directly. So there exists a subsequence \(\{\varrho _{n}\}_{n \in \mathbb{N}}\) (not relabeled) and \(u \in W_{\lambda }^{\tau ,2}\) such that $$ \varrho _{n} \rightharpoonup u \quad \text{weakly in } W_{\lambda } ^{\tau ,2}(\mathbb{R}), $$ which yields $$ \begin{aligned}[b] \varPhi ' (\varrho _{n}) (\varrho _{n} - u ) ={}& M \bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2}\bigr) \int _{\mathbb{R}} \bigl( \mathbb{D}_{+}^{\tau , \lambda } \varrho _{n} \mathbb{D}_{+}^{\tau , \lambda } (\varrho _{n} - u ) \bigr) \,dt \\ &{} - \int _{\mathbb{R}} f (t, \varrho _{n}) ( \varrho _{n} - u ) \,dt\to 0 \end{aligned} $$ as \(n \to \infty \). Now we show that $$ \lim_{n \to \infty } \int _{\mathbb{R}} f (t, \varrho _{n}) (\varrho _{n} - u ) \,dt = 0.
$$ To this end, by (13), there exists some positive constant d such that $$\begin{aligned}& \Vert \varrho _{n} \Vert _{\tau ,\lambda } < d \quad \text{and} \quad \Vert u \Vert _{ \tau ,\lambda } < d, \quad \text{for } n \in \mathbb{N}, \\ & \varrho _{n} \to u \quad \text{strongly in } L^{q} ( \mathbb{R}) \quad \text{and a.e. in } \mathbb{R}. \end{aligned}$$ Moreover, by (C4), for any \(\varepsilon > 0\) there exists a positive constant T such that $$ \bigl\vert f (t, \varrho _{n}) \bigr\vert \leq \varepsilon \vert \varrho _{n} \vert ^{q-1} $$ for \(|t| > T\). Then, by using Remark 2 and Young's inequality, we obtain $$\begin{aligned} & \biggl\vert \int _{\mathbb{R}} f (t, \varrho _{n}) (\varrho _{n} - u ) \,dt \biggr\vert \\ &\quad \leq \int _{\mathbb{R}} \bigl\vert f (t, \varrho _{n}) \bigr\vert \vert \varrho _{n} - u \vert \,dt \\ &\quad \leq \int _{-T}^{T} \bigl\vert f (t, \varrho _{n}) \bigr\vert \vert \varrho _{n} - u \vert \,dt + \int _{ \vert t \vert > T} \bigl\vert f (t, \varrho _{n}) \bigr\vert \vert \varrho _{n} - u \vert \,dt \\ &\quad \leq \varepsilon \Vert \varrho _{n} \Vert _{\infty } + \varepsilon \int _{ \vert t \vert > T} \vert \varrho _{n} \vert ^{q-1} \vert \varrho _{n} - u \vert \,dt \\ &\quad \leq \varepsilon C \Vert \varrho _{n} \Vert _{\tau ,\lambda } + \varepsilon \int _{ \vert t \vert > T} \biggl(\frac{q - 1}{q} \vert \varrho _{n} \vert ^{q} + \frac{1}{q} \vert \varrho _{n} - u \vert ^{q} \biggr) \,dt \\ &\quad \leq \varepsilon C \Vert \varrho _{n} \Vert _{\tau ,\lambda } + \frac{q - 1}{q} \varepsilon 2^{1 - \tau } C^{q-2} \Vert \varrho _{n} \Vert _{ \tau , \lambda }^{q} + \varepsilon \frac{1}{q}2^{1 - \tau } C^{q-2} \Vert \varrho _{n} -u \Vert _{\tau , \lambda }^{q} \\ &\quad \leq \varepsilon C d + \frac{q - 1}{q} \varepsilon 2^{1 - \tau } C ^{q-2} d^{q} + \varepsilon \frac{1}{q}2^{1 - \tau } C^{q-2} (2d)^{q} \end{aligned}$$ for large enough n, by (8).
Therefore, we have $$M \bigl( \Vert \varrho _{n} \Vert _{\tau ,\lambda }^{2} \bigr) \int _{\mathbb{R}} \bigl( \mathbb{D}_{+}^{\tau , \lambda } \varrho _{n} \mathbb{D}_{+}^{\tau , \lambda } (\varrho _{n} - u ) \bigr) \,dt \to 0 $$ from (14) as \(n \to \infty \). Thus, since \(M(\|\varrho _{n}\|_{\tau ,\lambda }^{2}) \geq \varrho > 0\) by (12), one can get $$ \int _{\mathbb{R}} \bigl( \mathbb{D}_{+}^{\tau , \lambda } \varrho _{n} \mathbb{D}_{+}^{\tau , \lambda } (\varrho _{n} - u ) \bigr) \,dt \to 0 $$ as \(n \to \infty \). In a similar manner, we can get $$ \int _{\mathbb{R}} \bigl( \mathbb{D}_{+}^{\tau , \lambda } u \mathbb{D} _{+}^{\tau , \lambda } (\varrho _{n} - u ) \bigr) \,dt \to 0 $$ as \(n \to \infty \). Combining (16) and (17), we obtain that $$\int _{\mathbb{R}} \bigl( \mathbb{D}_{+}^{\tau , \lambda } ( \varrho _{n} - u ) \mathbb{D}_{+}^{\tau , \lambda } ( \varrho _{n} - u ) \bigr) \,dt \to 0 $$ and hence $$ \Vert \varrho _{n} - u \Vert _{\tau ,\lambda } \to 0 $$ as \(n \to \infty \), and then Φ satisfies the Palais–Smale condition. □ Theorem 2 Let the following two conditions hold: \(\forall \sigma _{i},\varrho _{i}\in H_{i}\), \(\eta _{i}(\sigma _{i},\varrho _{i})=-\eta _{i}(\varrho _{i},\sigma _{i})\); \(a_{i}: H_{i}\times H_{i}\rightarrow \mathbb{R}\) satisfies (C1) and (C2), \(b_{i}: H_{i}\times H_{i}\rightarrow \mathbb{R}\) with (C3)–(C6). Moreover, assume the following conditions hold: $$\begin{aligned}& 0< \frac{1}{\sigma _{1}+\rho _{1}c_{1}}\bigl[\delta _{1}\sqrt{1-2\rho _{1} \tau _{1}+\rho ^{2}_{1}\beta _{1}^{2}} +\rho _{1}\gamma _{1}\bigr]+\frac{\rho _{2} \delta _{2}\xi _{2}}{\sigma _{2}+\rho _{2}c_{2}}< 1, \end{aligned}$$ $$\begin{aligned}& 0< \frac{1}{\sigma _{2}+\rho _{2}c_{2}}\bigl[\delta _{2}\sqrt{1-2\rho _{2} \tau _{2}+\rho ^{2}_{2}\beta _{2}^{2}} +\rho _{2}\gamma _{2}\bigr]+\frac{\rho _{1} \delta _{1}\xi _{1}}{\sigma _{1}+\rho _{1}c_{1}}< 1.
\end{aligned}$$ Then the sequence \(\{(\varrho _{n},\sigma _{n})\}_{n\geq 0}\) converges to \((\varrho ^{*},\sigma ^{*})\), where \((\varrho ^{*},\sigma ^{*})\) is a meromorphic solution of (3). By the definition of the iterative algorithm, we have $$\begin{aligned}& \begin{aligned}[b] \bigl\langle \varrho _{n}-\varrho _{n-1} ,\eta _{1}(\sigma _{1},\varrho _{n}) \bigr\rangle _{1} + \rho _{1}\bigl\langle F_{1}(A\varrho _{n-1},B\sigma _{n-1})-f _{1},\eta _{1}(\sigma _{1},\varrho _{n}) \bigr\rangle _{1} \\ \quad{} + \rho _{1} \bigl[a_{1}(\varrho _{n}, \sigma _{1}-\varrho _{n})\bigr]+ \rho _{1} \bigl[b_{1}(\varrho _{n-1},\sigma _{1})-b_{1}( \varrho _{n-1},\varrho _{n})\bigr]\geq 0, \end{aligned} \end{aligned}$$ $$\begin{aligned}& \begin{aligned}[b] \bigl\langle \sigma _{n}-\sigma _{n-1} ,\eta _{2}(\sigma _{2},\sigma _{n}) \bigr\rangle _{2} + \rho _{2}\bigl\langle F_{2}(A \varrho _{n-1},B\sigma _{n-1})-f _{2},\eta _{2}(\sigma _{2},\sigma _{n})\bigr\rangle _{2} \\ \quad{} + \rho _{2} \bigl[a_{2}(\sigma _{n},\sigma _{2}-\sigma _{n})\bigr]+\rho _{2}\bigl[ b_{2}(\sigma _{n-1},\sigma _{2})-b_{2}( \sigma _{n-1},\sigma _{n})\bigr] \geq 0 \end{aligned} \end{aligned}$$ for any \((\sigma _{1},\sigma _{2})\in H_{1}\times H_{2}\).
If we take \(\sigma _{1}=\varrho _{n+1}\) in (20) and \(\sigma _{1}= \varrho _{n}\) in (20) with n replaced by \(n+1\), respectively, then $$\begin{aligned} &\begin{aligned}[b] \bigl\langle \varrho _{n}-\varrho _{n-1} ,\eta _{1}(\varrho _{n+1},\varrho _{n}) \bigr\rangle _{1} + \rho _{1}\bigl\langle F_{1}(A\varrho _{n-1},B\sigma _{n-1})-f _{1},\eta _{1}(\varrho _{n+1},\varrho _{n})\bigr\rangle _{1} \\ \quad {}+ \rho _{1} \bigl[a_{1}(\varrho _{n}, \varrho _{n+1}-\varrho _{n})\bigr]+ \rho _{1} \bigl[b_{1}(\varrho _{n-1},\varrho _{n+1})-b_{1}( \varrho _{n-1}, \varrho _{n})\bigr]\geq 0, \end{aligned} \end{aligned}$$ $$\begin{aligned} &\begin{aligned}[b] &\bigl\langle \varrho _{n+1}-\varrho _{n} ,\eta _{1}(\varrho _{n},\varrho _{n+1}) \bigr\rangle _{1} + \rho _{1}\bigl\langle F_{1}(A\varrho _{n},B\sigma _{n})-f_{1}, \eta _{1}(\varrho _{n},\varrho _{n+1})\bigr\rangle _{1} \\ &\quad{} + \rho _{1} \bigl[a_{1}(\varrho _{n+1}, \varrho _{n}-\varrho _{n+1})\bigr]+ \rho _{1} \bigl[b_{1}(\varrho _{n},\varrho _{n})-b_{1}( \varrho _{n},\varrho _{n+1})\bigr]\geq 0.
\end{aligned} \end{aligned}$$ Adding (22) and (23), we obtain that $$ \begin{aligned} &\bigl\langle \varrho _{n}-\varrho _{n+1} ,\eta _{1}(\varrho _{n},\varrho _{n+1}) \bigr\rangle _{1} \\ &\quad \leq \bigl\langle \varrho _{n-1}-\varrho _{n} ,\eta _{1}(\varrho _{n}, \varrho _{n+1})\bigr\rangle _{1} - \rho _{1}\bigl\langle F_{1}(A\varrho _{n-1},B \sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n}),\eta _{1}(\varrho _{n}, \varrho _{n+1})\bigr\rangle _{1} \\ &\qquad {}- \rho _{1} \bigl[a_{1}(\varrho _{n}- \varrho _{n+1},\varrho _{n}-\varrho _{n+1})\bigr]+ \rho _{1} \bigl[b_{1}(\varrho _{n-1}-\varrho _{n},\varrho _{n+1})+b_{1}(\varrho _{n}- \varrho _{n-1},\varrho _{n})\bigr] \\ &\quad \leq \bigl\langle \varrho _{n-1}-\varrho _{n} - \rho _{1} \bigl[ F_{1}(A \varrho _{n-1},B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n}) \bigr],\eta _{1}(\varrho _{n},\varrho _{n+1})\bigr\rangle _{1} \\ &\qquad {}- \rho _{1} \bigl[a_{1}(\varrho _{n}- \varrho _{n+1},\varrho _{n}-\varrho _{n+1})\bigr]+ \rho _{1} \bigl[b_{1}(\varrho _{n}-\varrho _{n-1},\varrho _{n}-\varrho _{n+1})\bigr]. 
\end{aligned} $$ Since \(\eta _{1}\) is \(\sigma _{1}\)-strongly monotone and \(\delta _{1}\)-Lipschitz continuous and \(a_{1}\) satisfies (C1), we have $$ \begin{aligned} &\sigma _{1} \Vert \varrho _{n}-\varrho _{n+1} \Vert ^{2}_{1} \\ &\quad \leq \bigl\Vert \varrho _{n-1}-\varrho _{n}-\rho _{1}\bigl[F_{1}(A\varrho _{n-1},B \sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n}) \bigr] \bigr\Vert _{1} \bigl\Vert \eta _{1}(\varrho _{n},\varrho _{n+1}) \bigr\Vert _{1} \\ &\qquad {}-\rho _{1}c_{1} \Vert \varrho _{n}- \varrho _{n+1} \Vert _{1}^{2}+\rho _{1} \gamma _{1} \Vert \varrho _{n}-\varrho _{n-1} \Vert _{1} \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1} \\ &\quad \leq \delta _{1} \bigl\Vert \varrho _{n-1}-\varrho _{n}-\rho _{1}\bigl[F_{1}(A \varrho _{n-1},B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n})\bigr] \bigr\Vert _{1} \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1} \\ &\qquad {}-\rho _{1}c_{1} \Vert \varrho _{n}- \varrho _{n+1} \Vert _{1}^{2}+\rho _{1} \gamma _{1} \Vert \varrho _{n}-\varrho _{n-1} \Vert _{1} \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1}, \end{aligned} $$ which implies that $$\begin{aligned} & \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1} \\ &\quad \leq \frac{1}{\sigma _{1}+\rho _{1}c_{1}} \bigl( \delta _{1} \bigl\Vert \varrho _{n-1}-\varrho _{n}-\rho _{1}\bigl[F_{1}(A \varrho _{n-1}, B\sigma _{n-1})-F _{1}(A\varrho _{n},B\sigma _{n-1})\bigr] \bigr\Vert _{1} \\ &\qquad {}+\rho _{1}\delta _{1} \bigl\Vert F_{1}(A\varrho _{n},B\sigma _{n-1})-F_{1}(A \varrho _{n},B\sigma _{n}) \bigr\Vert _{1}+\rho _{1}\gamma _{1} \Vert \varrho _{n}- \varrho _{n-1} \Vert _{1} \bigr). 
\end{aligned}$$ Since \(F_{1}\) is strongly monotone and Lipschitz continuous with respect to its first argument, we have $$ \begin{aligned}[b] & \bigl\Vert \varrho _{n-1}-\varrho _{n}-\rho _{1}\bigl[F_{1}(A\varrho _{n-1}, B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n-1})\bigr] \bigr\Vert ^{2}_{1} \\ &\quad = \Vert \varrho _{n-1}-\varrho _{n} \Vert ^{2}_{1}-2\rho _{1}\bigl\langle F_{1}(A \varrho _{n-1}, B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n-1}), \varrho _{n-1}-\varrho _{n}\bigr\rangle _{1} \\ &\qquad {}+\rho _{1}^{2} \bigl\Vert F_{1}(A \varrho _{n-1}, B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n-1}) \bigr\Vert _{1}^{2} \\ &\quad \leq \bigl(1-2\rho _{1}\tau _{1}+\rho ^{2}_{1}\beta _{1}^{2}\bigr) \Vert \varrho _{n-1}- \varrho _{n} \Vert _{1}^{2} \end{aligned} $$ and, by the Lipschitz continuity of \(F_{1}\) with respect to its second argument, $$ \bigl\Vert F_{1}(A\varrho _{n}, B\sigma _{n-1})-F_{1}(A\varrho _{n},B\sigma _{n}) \bigr\Vert _{1}\leq \xi _{1} \Vert \sigma _{n-1}- \sigma _{n} \Vert _{2}. $$ It follows from (24), (25), and (26) that $$\begin{aligned} \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1} \leq& \frac{1}{\sigma _{1}+\rho _{1}c _{1}}\bigl[\delta _{1} \sqrt{1-2\rho _{1} \tau _{1}+\rho ^{2}_{1}\beta _{1}^{2}} +\rho _{1}\gamma _{1}\bigr] \Vert \varrho _{n-1}- \varrho _{n} \Vert _{1} \\ &{}+\frac{\rho _{1}\delta _{1}\xi _{1}}{\sigma _{1}+\rho _{1}c_{1}} \Vert \sigma _{n-1}-\sigma _{n} \Vert _{2}. \end{aligned}$$ Similarly, taking \(\sigma _{2}=\sigma _{n+1}\) and \(\sigma _{2}=\sigma _{n}\) in the corresponding inequalities for the second component, we have $$\begin{aligned} \Vert \sigma _{n}-\sigma _{n+1} \Vert _{2} \leq& \frac{1}{\sigma _{2}+\rho _{2}c_{2}}\bigl[\delta _{2}\sqrt{1-2\rho _{2} \tau _{2}+\rho ^{2}_{2}\beta _{2}^{2}} +\rho _{2}\gamma _{2}\bigr] \Vert \sigma _{n-1}- \sigma _{n} \Vert _{2} \\ &{}+\frac{\rho _{2}\delta _{2}\xi _{2}}{\sigma _{2}+\rho _{2}c_{2}} \Vert \varrho _{n-1}-\varrho _{n} \Vert _{1}.
\end{aligned}$$ From (27) and (28), we obtain that $$\begin{aligned} & \Vert \varrho _{n}-\varrho _{n+1} \Vert _{1}+ \Vert \sigma _{n}-\sigma _{n+1} \Vert _{2} \\ &\quad \leq \biggl(\frac{1}{\sigma _{1}+\rho _{1}c_{1}}\bigl[\delta _{1}\sqrt{1-2 \rho _{1}\tau _{1}+\rho ^{2}_{1}\beta _{1}^{2}} +\rho _{1}\gamma _{1}\bigr]+ \frac{ \rho _{2}\delta _{2}\xi _{2}}{\sigma _{2}+\rho _{2}c_{2}} \biggr) \Vert \varrho _{n-1}-\varrho _{n} \Vert _{1} \\ &\qquad {}+ \biggl( \frac{1}{\sigma _{2}+\rho _{2}c_{2}}\bigl[\delta _{2}\sqrt{1-2\rho _{2}\tau _{2}+\rho ^{2}_{2}\beta _{2}^{2}} +\rho _{2}\gamma _{2}\bigr]+ \frac{ \rho _{1}\delta _{1}\xi _{1}}{\sigma _{1}+\rho _{1}c_{1}} \biggr) \Vert \sigma _{n-1}- \sigma _{n} \Vert _{2} \\ &\quad \leq \max \{\theta _{1},\theta _{2}\}\bigl( \Vert \varrho _{n-1}-\varrho _{n} \Vert _{1}+ \Vert \sigma _{n-1}-\sigma _{n} \Vert _{2}\bigr), \end{aligned}$$ where $$\begin{aligned}& \theta _{1}:= \frac{1}{\sigma _{1}+\rho _{1}c_{1}}\bigl[\delta _{1}\sqrt {1-2 \rho _{1}\tau _{1}+\rho ^{2}_{1}\beta _{1}^{2}} +\rho _{1}\gamma _{1}\bigr]+ \frac{ \rho _{2}\delta _{2}\xi _{2}}{\sigma _{2}+\rho _{2}c_{2}}, \\& \theta _{2}:= \frac{1}{\sigma _{2}+\rho _{2}c_{2}}\bigl[\delta _{2}\sqrt {1-2 \rho _{2}\tau _{2}+\rho ^{2}_{2}\beta _{2}^{2}} +\rho _{2}\gamma _{2}\bigr]+ \frac{ \rho _{1}\delta _{1}\xi _{1}}{\sigma _{1}+\rho _{1}c_{1}}. \end{aligned}$$ Now, if we define the norm \(\|\cdot \|_{*}\) on \(H_{1}\times H_{2}\) by $$ \bigl\Vert (u,v) \bigr\Vert _{*}= \Vert u \Vert _{1}+ \Vert v \Vert _{2} $$ for any \((u,v)\in H_{1}\times H_{2}\), then we have $$ \bigl\Vert (\varrho _{n},\sigma _{n})-(\varrho _{n+1},\sigma _{n+1}) \bigr\Vert _{*}\leq \max \{ \theta _{1},\theta _{2}\} \bigl\Vert (\varrho _{n-1},\sigma _{n-1})-(\varrho _{n},\sigma _{n}) \bigr\Vert _{*}. $$ By using (18) and (19), it follows that \(\theta _{1}, \theta _{2}\in (0,1)\). Hence, (30) implies that \(\{(\varrho _{n},\sigma _{n})\}\) is a Cauchy sequence in \(H_{1}\times H_{2}\).
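Conditions (18)–(19) are exactly what make the recursion (30) a contraction. The sketch below (constants invented purely for illustration, chosen so that (18)–(19) hold) evaluates \(\theta _{1}\), \(\theta _{2}\) and iterates the resulting geometric error bound:

```python
import math

# Illustrative constants; list index 0 <-> subscript 1, index 1 <-> subscript 2.
sigma = [2.0, 2.2]; rho = [0.15, 0.12]; c = [1.0, 1.1]
delta = [1.2, 1.1]; tau = [1.5, 1.4]; beta = [2.0, 1.8]
gamma = [0.3, 0.25]; xi = [0.5, 0.6]

def theta(i, j):
    # theta_i = [delta_i sqrt(1 - 2 rho_i tau_i + rho_i^2 beta_i^2) + rho_i gamma_i]
    #           / (sigma_i + rho_i c_i)  +  rho_j delta_j xi_j / (sigma_j + rho_j c_j)
    main = (delta[i] * math.sqrt(1 - 2 * rho[i] * tau[i] + rho[i] ** 2 * beta[i] ** 2)
            + rho[i] * gamma[i]) / (sigma[i] + rho[i] * c[i])
    cross = rho[j] * delta[j] * xi[j] / (sigma[j] + rho[j] * c[j])
    return main + cross

th1, th2 = theta(0, 1), theta(1, 0)
q = max(th1, th2)

# Iterate ||(rho_n, sigma_n) - (rho_{n+1}, sigma_{n+1})||_* <= q^n * (initial gap)
gap = 1.0
for n in range(50):
    gap *= q
print(th1, th2, gap)
```

With any constants satisfying (18)–(19) the successive gaps decay geometrically, which is the Cauchy-sequence mechanism used in the proof.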
Let \((\varrho _{n},\sigma _{n})\rightarrow (\varrho ^{*},\sigma ^{*})\) in \(H_{1}\times H_{2}\) as \(n\rightarrow \infty \). Therefore, $$\begin{aligned}& \begin{aligned} & \bigl\langle F_{1}\bigl(A\varrho ^{*},B\sigma ^{*}\bigr)-f_{1},\eta _{1}\bigl(\sigma _{1}, \varrho ^{*}\bigr)\bigr\rangle _{1}+a_{1} \bigl(\varrho ^{*},\sigma _{1}-\varrho ^{*} \bigr)+b _{1}\bigl(\varrho ^{*},\sigma _{1} \bigr)-b_{1}\bigl(\varrho ^{*},\varrho ^{*}\bigr) \geq 0 \\ &\quad \forall \sigma _{1}\in H_{1}, \end{aligned} \\& \begin{aligned} &\bigl\langle F_{2}\bigl(A\varrho ^{*},B\sigma ^{*}\bigr)-f_{2},\eta _{2} \bigl(\sigma _{2}, \sigma ^{*}\bigr)\bigr\rangle _{2}+a_{2}\bigl(\sigma ^{*},\sigma _{2}- \sigma ^{*}\bigr)+b_{2}\bigl( \sigma ^{*},\sigma _{2}\bigr)-b_{2}\bigl(\sigma ^{*},\sigma ^{*}\bigr)\geq 0 \\ &\quad \forall \sigma _{2}\in H_{2}. \end{aligned} \end{aligned}$$ Thus, \((\varrho ^{*},\sigma ^{*})\) is a meromorphic solution of the model problem (3), which implies the required conclusion. □ Conclusions In this paper, we studied the equilibrium system with angular velocity for the prey. This system was a generalization of the two-species equilibrium model with Neumann type boundary condition. Firstly, we considered the asymptotical stability of equilibrium points to the system of ordinary differential equations type. Then, the existence of meromorphic solutions and the stability of equilibrium points to the system of weakly coupled meromorphic type were discussed. Finally, the existence of nonnegative meromorphic solutions to the system of strongly coupled meromorphic type was investigated, and the asymptotic stability of unique positive equilibrium point of the system was proved by constructing meromorphic functions. References Cui, Y.: Uniqueness of solution for boundary value problems for fractional differential equations. Appl. Math. Lett. 51, 48–54 (2016) Cui, Y., Zou, Y.: Existence of solutions for second-order integral boundary value problems. Nonlinear Anal., Model.
Control 21(6), 828–838 (2016) Glowinski, R., Lions, J., Trémolières, R.: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981) Hartman, P., Stampacchia, G.: On some nonlinear elliptic differential functional equations. Acta Math. 115, 153–188 (1966) Barta, T.: Convergence to equilibrium of relatively compact solutions to equilibrium equations. Asymptot. Anal. 4(81), 1–9 (2011) Noor, M.: Mixed variational-like inequalities. Commun. Appl. Nonlinear Anal. 1, 63–75 (1994) Panagiotopoulos, P.: Inequality Problems in Mechanics and Applications. Birkhäuser, Boston (1985) Lions, J., Stampacchia, G.: Variational inequalities. Commun. Pure Appl. Math. 20, 493–512 (1967) Kassay, G., Radulescu, V.D.: Equilibrium Problems and Applications. Mathematics in Science and Engineering. Elsevier/Academic Press, London (2018) Chergui, L.: An existence and uniqueness theorem for a second order nonlinear system with coupled integral boundary value conditions. Appl. Math. Comput. 256, 438–444 (2015) Cui, Y., Zou, Y.: Monotone iterative method for differential systems with coupled integral boundary value problems. Bound. Value Probl. 2013, 245 (2013) Sheng, K., Zhang, W., Bai, Z.: Positive solutions to fractional boundary-value problems with p-Laplacian on time scales. Bound. Value Probl. 2018, Article ID 70 (2018) Zhai, C., Wang, W., Li, H.: A uniqueness method to a new Hadamard fractional differential system with four-point boundary conditions. J. Inequal. Appl. 2018, Article ID 207 (2018) Zhang, Y.: Existence results for a coupled system of nonlinear fractional multi-point boundary value problems at resonance. J. Inequal. Appl. 2018, Article ID 198 (2018) Wu, J., Zhang, X., Liu, L., Wu, Y., Cui, Y.: The convergence analysis and error estimation for unique solution of a p-Laplacian fractional differential equation with singular decreasing nonlinearity. Bound. Value Probl.
2018, Article ID 82 (2018) Sun, Q., Ji, H., Cui, Y.: Positive solutions for boundary value problems of fractional differential equation with integral boundary conditions. J. Funct. Spaces 2018, Article ID 6461930 (2018) Li, K., Li, J., Wang, W.: Epidemic reaction-diffusion systems with two types of boundary conditions. Electron. J. Differ. Equ. 2018(170), 1 (2018) Bai, Z., Sun, W.: Existence and multiplicity of positive solutions for singular fractional boundary value problems. Comput. Math. Appl. 63(9), 1369–1381 (2012) Wang, Y., Liu, Y., Cui, Y.: Multiple sign-changing solutions for nonlinear fractional Kirchhoff equations. Bound. Value Probl. 2018, Article ID 193 (2018) Ardila, A.: Orbital stability of standing waves for a system of nonlinear Schrödinger equations with three wave interaction. Nonlinear Anal. 167, 1–20 (2018) Hu, J., Yin, H.: Nonlinear stability of rarefaction waves for the compressible Navier–Stokes equations with zero heat conductivity. Nonlinear Anal. 174, 242–277 (2018) Alleche, B., Radulescu, V.D.: Further on set-valued equilibrium problems in the pseudo-monotone case and applications to Browder variational inclusions. Optim. Lett. 12(8), 1789–1810 (2018) Bai, Z.: Eigenvalue intervals for a class of fractional boundary value problem. Comput. Math. Appl. 64(10), 3253–3257 (2012) Cui, Y., Sun, J.: On existence of positive solutions of coupled integral boundary value problems for a nonlinear singular superlinear differential system. Electron. J. Qual. Theory Differ. Equ. 2012, Article ID 41 (2012) Cui, Y.: Existence of solutions for coupled integral boundary value problem at resonance. Publ. Math. (Debr.) 89(1–2), 73–88 (2016) Song, Q., Bai, Z.: Positive solutions of fractional differential equations involving the Riemann–Stieltjes integral boundary condition. Adv. Differ. Equ. 2018, Article ID 183 (2018) Djafari, B., Khatibzadeh, H.: A strong convergence theorem for solutions to a nonhomogeneous second order equilibrium equation. J. Math. 
Anal. Appl. 363(2), 648–654 (2008) This work was supported by the Post-Doctoral Applied Research Projects of Qingdao (no. 2015122) and the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents (no. 2014RCJJ032). College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao, China Bo Meng The author read and approved the final manuscript. Correspondence to Bo Meng. The author declares that he has no competing interests. Meng, B. Existence and convergence results of meromorphic solutions to the equilibrium system with angular velocity. Bound Value Probl 2019, 88 (2019). DOI: https://doi.org/10.1186/s13661-019-1197-x Keywords: Asymptotic stability; Equilibrium system; Meromorphic function
Joint channel and phase noise estimation for mmWave full-duplex communication systems Abbas Koohian1 (ORCID: orcid.org/0000-0001-9578-0035), Hani Mehrpouyan2, Ali A. Nasir3 & Salman Durrani1 EURASIP Journal on Advances in Signal Processing volume 2019, Article number: 18 (2019) Full-duplex (FD) communication at millimeter-wave (mmWave) frequencies suffers from a strong self-interference (SI) signal, which can only be partially canceled using conventional RF cancelation techniques. This is because current digital SI cancelation techniques, designed for microwave frequencies, ignore the rapid phase noise (PN) variation at mmWave frequencies, which can lead to large estimation errors. In this work, we consider a multiple-input multiple-output mmWave FD communication system. We propose an extended Kalman filter-based estimation algorithm to track the rapid variation of PN at mmWave frequencies. We derive a lower bound for the estimation error of PN at mmWave and numerically show that the mean square error performance of the proposed estimator approaches the lower bound. We also simulate the bit error rate performance of the proposed system and show the effectiveness of a digital canceler, which uses the proposed estimator to estimate the SI channel. The results show that for a 2×2 FD system with 64-QAM modulation and PN variance of 10^{-4}, the residual SI power can be reduced to -25 dB and -40 dB, respectively, for signal-to-interference ratios of 0 dB and 15 dB. The next generation of wireless communication technologies, known as 5G, is expected to offer multi-gigabit data rates to mobile users [1, 2]. This has prompted wireless service providers to seek higher bandwidth at less crowded millimeter-wave (mmWave) frequencies.
The short wavelengths of mmWave frequencies also allow for practical implementations of base stations with a large number of antennas, known as massive multiple-input multiple-output (MIMO) systems, which is another promising technology for 5G networks [3]. Given these capabilities offered by mmWave communication, these systems have become increasingly popular in academia and industry. While MIMO systems can fully benefit from the capabilities offered by communication at mmWave frequencies [3], due to its large peak-to-average power ratio, orthogonal frequency division multiplexing (OFDM) is not popular for mmWave communication [4]. Since there is still an open debate about the modulation type at mmWave frequencies [5], we do not consider OFDM in this work. Full-duplex (FD) communication has also emerged recently as a promising wireless technology, which allows for efficient use of bandwidth by enabling in-band transmission and reception [6–9]. The major obstacle in exploiting the full potential of FD communication is the self-interference (SI) signal, which is significantly stronger than the desired communication signal [10, 11]. The power of the SI signal can be reduced via two different suppression techniques: (i) passive suppression, where transmit and receive antennas are physically isolated to reduce the leakage of the transmit signal into the RF front end of the receiver chain, and (ii) active suppression, where the SI signal is suppressed via subtracting the analog replica of the SI signal from the received signal [6]. The experimental results at microwave frequencies show that the successive combination of passive and active suppression can reduce the SI signal power to the receiver noise floor [12]. For this reason, in the majority of the radio architectures proposed for FD communication at microwave frequencies, the residual SI signal at baseband is treated as noise [6, 13–15].
The cancelation techniques that treat the SI signal as noise suffer from two fundamental problems: (i) they assume a Gaussian distribution of the SI signal. However, as explained in [16, 17], the SI signal has a strong line of sight (LoS) component, and hence, it is not Gaussian, and (ii) treating SI as noise requires statistical knowledge of the SI channel, which might not be available. Recently, SI channel measurements have been carried out for FD communication at mmWave frequencies [17, 18]. The measurements indicate that, as opposed to the microwave frequency band, the SI channel at mmWave has a non-line-of-sight (NLoS) component, which cannot be canceled using passive and active suppression techniques. This partial suppression of the SI signal results in a large residual SI signal at baseband, which is still significantly higher than the receiver noise floor [17]. Another challenge for mmWave FD communication systems is that the oscillator phase noise (PN) is large and rapidly changing [19]. Thus, the majority of the existing techniques for residual SI signal cancelation at baseband, which assume a very steady oscillator PN [20–23], cannot be used for FD mmWave communication. We note that the key aspect of mmWave communication considered in this paper is the estimation of fast-varying PN, which varies on the order of the symbol time at mmWave [24]. In this work, we consider the problem of joint channel and PN estimation for a mmWave FD MIMO communication system. The main contributions of this work are as follows: We construct a state vector for the joint estimation of the channel and PN and propose an algorithm based on the extended Kalman filtering technique to track the fast PN variation in the mmWave band. We derive the lower bound on the estimation error of the proposed estimator and numerically show that the proposed estimator reaches the performance of the lower bound.
We also show the effectiveness of a digital SI canceler, which uses the proposed estimation technique to estimate the SI channel. We present simulation results to show the mean square error (MSE) and bit error rate (BER) performance of a mmWave FD MIMO system with different PN variances and signal-to-interference ratios (SIR). The results show that for a 2×2 FD system with 64-QAM modulation and PN variance of 10^{-4}, the residual SI power can be reduced to -25 dB and -40 dB, respectively, for signal-to-interference ratios of 0 dB and 15 dB. Notation: The following notation is used in this paper. Superscripts \((\cdot )^{\dagger }\) and \((\cdot )^{T}\) are the conjugate and the transpose operators, respectively. Bold face small letters, e.g., x, are used for vectors, and bold face capital letters, e.g., X, are used for matrices. \(e^{j\theta }\) is the multivariate complex exponential function. |·| is the absolute value operator, and ∠x is the phase of the complex variable x. ⊙ is the Hadamard (element-wise) product. diag(x) creates a matrix with the elements of vector x on the main diagonal. trace(·) is the trace of a matrix, which sums the diagonal elements of a given matrix. \(x \in \mathcal {A}\) means x is an element of set \(\mathcal {A}\). \(\mathbf{0}_{N}\) is the \(N \times 1\) vector of all zeros, and \(\mathbf{I}_{N}\) is the \(N \times N\) identity matrix. \(\mathbf{1}_{N\times N}\) is the \(N \times N\) matrix of all ones. \(\mathbb {E}[\cdot ]\) is the expectation operator. \(\mathfrak {R}\{\cdot \}\) returns the real part of a complex quantity. Finally, in Table 1, we present the important symbols used in the mathematical representation of the system model. In general, if x is used in the mathematical representation of the system model, then \(\bar {x}\) denotes the corresponding quantity in the representation needed for joint channel and PN estimation, and \(\hat {x}\) is an estimate of x. Table 1 Important symbols used in this paper We consider the MIMO communication system between two mmWave FD nodes a and b, each with \(N_{t}\) transmit and \(N_{r}\) receive antennas, as illustrated in Fig. 1.
The considered communication system can be a model for backhaul communication in cellular systems [24]. In this work, we make the following assumptions: The same number of transmit and receive antennas for both nodes: We assume both nodes in the considered FD communication system have the same number of transmit and receive antennas. System model block diagram of FD communication, where AC stands for analog SI cancelation, DC stands for digital SI cancelation, \(e^{j\theta ^{[r/r_{b}/t/\text {SI}]}_{l}}\) represents the PN at the lth antenna Modeling of RF impairments: RF impairments due to imperfect transmitter and receiver chain electronics have been shown to significantly degrade the performance of analog cancelation techniques [25, 26]. Since the focus of this work is residual SI cancelation, we only include PN in our model and assume that the other hardware impairments are dealt with by an RF canceler. Such an assumption is also made in [20, 21, 27, 28]. Assumptions on oscillators: We make two assumptions about the oscillators. First, we assume that free-running oscillators are used. The assumption of using free-running oscillators for mmWave communications has also been made in [24, 29]. Second, we assume each transmit and receive antenna is equipped with an independent oscillator. Quasi-static flat-fading channel assumption: The SI measurement results of [17] show that even with omnidirectional dipole antennas, the delay spread of the channel does not exceed 800 ns. This delay is significantly smaller than the proposed symbol durations for 5G communication [30, 31], which are on the order of μs. Hence, not only can the channel be assumed flat, but it can also be assumed to remain constant over the transmission of one block of data (quasi-static). Similarly, measurement results for the desired communication channel show that the channel delays are small relative to the symbol duration [32].
Synchronized transmission and reception: Although synchronizing the transmission and reception of the analog desired communication signal with the reception of the analog SI signal is an important practical problem that requires its own detailed investigation, the synchronized FD communication assumption is widely used in the literature on channel and PN estimation for digital SI cancelation (DC) [20, 21, 33]. Mathematical representation of received vector In this subsection, we present a mathematical model for the received vector of a FD MIMO communication system at mmWave frequencies. The received vector at node a and at time instant n is y(n) and is given by $$\begin{array}{*{20}l} \mathbf{y}(n)=\mathbf{H}(n)\mathbf{x}(n)+\mathbf{H}^{\text{SI}}(n)\mathbf{x}^{\text{SI}}(n)+\mathbf{w}(n), \end{array} $$ where \(\mathbf {y}(n)\triangleq [y_{1}(n),\cdots,y_{N_{r}}(n)]^{T}\), and yi(n) is the received symbol at the ith antenna. For i∈{1,⋯,Nr} and k∈{1,⋯,Nt}, the element in the ith row and kth column of the Nr×Nt channel matrix H(n) is given by \(h_{i,k}e^{j\left (\theta ^{[r]}_{i}(n)+\theta ^{[t]}_{k}(n)\right)}\), where hi,k is the communication channel from the kth transmit antenna of node b to the ith receive antenna of node a. For m∈{r,t,SI}, \(\theta ^{[m]}_{j}(n)\) is the oscillator PN at the jth antenna, where m indicates the antenna type: m=r a receive antenna, m=t a transmit antenna, and m=SI an interfering antenna. Furthermore, the PN variation of a free-running oscillator follows a Wiener process [34], i.e., $$\begin{array}{*{20}l} \theta^{[m]}_{j}(n)= \theta^{[m]}_{j}(n-1)+\delta(n), \end{array} $$ where δ(n) is Gaussian noise with mean 0 and variance \(\sigma ^{2}_{[m]}\), i.e., \(\delta (n)\sim \mathcal {N}\left (0,\sigma ^{2}_{[m]}\right)\).
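The Wiener PN model above is straightforward to simulate by accumulating i.i.d. Gaussian increments of the stated variance. The sketch below is a minimal illustration of that process; the helper name `wiener_phase_noise` is ours, not the paper's.

```python
import numpy as np

def wiener_phase_noise(n_symbols, variance, seed=None):
    """Phase-noise trajectory of a free-running oscillator modeled as a
    Wiener process: theta[n] = theta[n-1] + delta[n], delta ~ N(0, variance),
    starting from a zero initial phase."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(0.0, np.sqrt(variance), size=n_symbols)
    return np.cumsum(increments)

# Example: a 40-symbol trajectory for the PN variance 1e-4 used later in the paper
theta = wiener_phase_noise(40, 1e-4, seed=0)
```

The successive differences of the returned trajectory are independent Gaussians with the requested variance, which is exactly the increment process δ(n) above.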
Similarly, the element in the ith row and kth column of the Nr×Nt SI channel matrix HSI(n) is given by \(h^{\text {SI}}_{i,k} e^{j\left (\theta ^{[r]}_{i}(n)+\theta ^{[\text {SI}]}_{k}(n)\right)}\), where \(h^{\text {SI}}_{i,k}\) is the interference channel between the kth transmit antenna and the ith receive antenna of node a. In addition, the kth elements of the Nt×1 vectors x(n) and xSI(n) are given by xk(n) and \(x^{\text {SI}}_{k}(n)\), respectively, which are the transmitted symbols from the kth transmit antenna of nodes b and a, respectively. Finally, \(\mathbf {w}(n) \triangleq [w_{1}(n),\cdots,w_{N_{r}}(n)]^{T}\), where wi(n) is complex Gaussian noise, i.e., \(w_{i}(n)\sim \mathcal {CN}(0,\sigma ^{2})\). Mathematical representation for joint channel and PN estimation For the received vector y(n) and noise vector w(n) in (1), a useful mathematical model for joint channel and PN estimation is of the form ([35], Ch. 13, p. 450, Eq. 13.66) $$\begin{array}{*{20}l} \mathbf{y}(n)=\bar{\mathbf{H}}(n)f\left(\boldsymbol{\beta}(n)\right)+\mathbf{w}(n), \end{array} $$ where \(\bar {\mathbf {H}}(n)\) is the state transition model matrix, f is a nonlinear function, and β(n) is the state vector to be estimated. A fundamental step in the problem of joint channel and PN estimation is the construction of the state vector and the state transition matrix based on the system model given by (1). The state vector and the state transition matrix for the joint PN and channel estimation in the presence of the SI signal are given by (4) and (5), respectively.
The state vector: $$\begin{array}{*{20}l} \boldsymbol{\beta}(n)\triangleq[\boldsymbol{\bar{\beta}}_{1}(n),\cdots,\boldsymbol{\bar{\beta}}_{N_{r}}(n)]^{T} \end{array} $$ The state transition matrix: $$\begin{array}{*{20}l} \bar{\mathbf{H}}(n)\triangleq \left[ \begin{array}{ccc} \bar{\mathbf{h}}_{1} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \bar{\mathbf{h}}_{N_{r}} \\ \end{array} \right] \end{array} $$ $$\begin{array}{*{20}l}&\boldsymbol{\bar{\beta}}_{i}(n)\triangleq [\bar{\beta}_{i,1},\cdots, \bar{\beta}_{i,{2N_{t}}}], \end{array} $$ $$\begin{array}{*{20}l} &\bar{\beta}_{i,\bar{k}}(n)\triangleq \left\{ \begin{array}{ll} \theta^{[r]}_{i}(n)+\theta^{[t]}_{k}(n), & \bar{k} \text{ is odd;} \\ \theta^{[r]}_{i}(n)+\theta^{[\text{SI}]}_{k}(n), & \bar{k} \text{ is even} \end{array}\right., \end{array} $$ (6b) $$\begin{array}{*{20}l} &\bar{\mathbf{h}}_{i}\triangleq [\bar{h}_{i,1},\cdots,\bar{h}_{i,2N_{t}}]\odot [\bar{x}_{1}(n),\cdots,\bar{x}_{2N_{t}}(n)], \end{array} $$ $$\begin{array}{*{20}l} &\bar{h}_{i,\bar{k}} \triangleq \left\{ \begin{array}{ll} h_{i,k}, & \bar{k} \text{ is odd;} \\ h^{\text{SI}}_{i,k}, & \bar{k} \text{ is even} \end{array}\right., \end{array} $$ (6d) $$\begin{array}{*{20}l} &\bar{x}_{\bar{k}}(n)\triangleq \left\{ \begin{array}{ll} x_{k}(n), & \bar{k} \text{ is odd;} \\ x^{\text{SI}}_{k}(n), & \bar{k} \text{ is even} \end{array}\right., \end{array} $$ (6e) $$\begin{array}{*{20}l} &\bar{k}=\{1,\cdots,2N_{t}\} \end{array} $$ $$\begin{array}{*{20}l} & k=\left\lceil \bar{k}/2 \right\rceil, \end{array} $$ (6g) so that each consecutive odd/even pair of entries shares the same antenna index k. The principal idea behind the design of the state vector β(n) and the state transition matrix \(\bar {\mathbf {H}}(n)\), as given by (4) and (5), respectively, is the fact that the PN is the only random quantity that varies from one symbol to another and needs to be tracked.
On the other hand, because of the quasi-static nature of the communication and SI channels, they remain constant over transmission of a single data packet. Therefore, these channels need to be estimated only once at the beginning of data transmission. This initial channel estimation for the constant channels can be done using pilot transmission. Furthermore, we note that at each receive antenna, there are 2Nt parameters that need to be estimated, Nt parameters for the communication channel, and Nt parameters for the SI channel. This explains the existence of index \(\bar {k}\in \{1,\cdots,2N_{t}\}\). Finally, with the state vector β(n) and the state transition matrix \(\bar {\mathbf {H}}(n)\) given by (4) and (5), the discrete-time received vector at time instant n and at the baseband of node a is given by $$\begin{array}{*{20}l} \mathbf{y}(n)=\bar{\mathbf{H}}(n)e^{j\boldsymbol{\beta}(n)}+\mathbf{w}(n). \end{array} $$ Joint channel and PN estimation In this section, we use the state vector (4) and the state transition matrix (5) and present a joint channel and PN estimator based on the concept of extended Kalman filtering (EKF) [35]. The observation vector of EKF is given by y(n) in (7), which is a nonlinear function of the states β(n). The EKF state equation is given by $$\begin{array}{*{20}l} \boldsymbol{\beta}(n)=\boldsymbol{\beta}(n-1)+\mathbf{u}(n), \end{array} $$ where u(n) is Gaussian with mean zero and covariance \(\mathbf {Q}\triangleq \mathbb {E}[\boldsymbol {\beta }(n)\boldsymbol {\beta }^{T}(n)] \), i.e., \(\mathbf {u}(n)\sim \mathcal {N}(\boldsymbol {0}_{2N_{r}N_{t}}, \mathbf {Q})\). 
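The interleaved construction of (4)-(7) can be sanity-checked numerically. The sketch below (Python; the helper names are ours, not the paper's) builds β(n) and the block-diagonal transition matrix for one time instant, so that multiplying the transition matrix by the element-wise complex exponential of β(n) reproduces the noiseless part of (1).

```python
import numpy as np

def interleave(a, b):
    """[a1, b1, a2, b2, ...]: odd positions (k-bar odd) hold communication
    entries and even positions hold SI entries, as in (6b)-(6e)."""
    out = np.empty(2 * len(a), dtype=np.result_type(a, b))
    out[0::2], out[1::2] = a, b
    return out

def build_state_and_transition(H, H_si, x, x_si, th_r, th_t, th_si):
    """Sketch of the state vector beta(n) of (4) and the block-diagonal
    transition matrix of (5) for one observation time.
    H, H_si: Nr x Nt channel matrices; x, x_si: length-Nt symbol vectors;
    th_r, th_t, th_si: per-antenna phase-noise samples."""
    Nr, Nt = H.shape
    xbar = interleave(x, x_si)                       # (6e)
    beta = np.concatenate(
        [th_r[i] + interleave(th_t, th_si) for i in range(Nr)])
    Hbar = np.zeros((Nr, 2 * Nt * Nr), dtype=complex)
    for i in range(Nr):                              # row i: h-bar_i ⊙ x-bar
        Hbar[i, 2 * Nt * i:2 * Nt * (i + 1)] = interleave(H[i], H_si[i]) * xbar
    return beta, Hbar
```

A quick check: `Hbar @ np.exp(1j * beta)` equals H(n)x(n) + HSI(n)xSI(n) evaluated directly from (1), confirming that the odd/even interleaving is consistent.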
The 2NtNr×2NtNr covariance matrix Q is given by $$\begin{array}{*{20}l} \mathbf{Q}\triangleq\mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\beta}^{T}(n)\right]=\left[ \begin{array}{ccc} \mathbf{R}_{1,1}&\cdots & \mathbf{R}_{1,N_{r}} \\ \vdots & \vdots &\vdots\\ \mathbf{R}_{N_{r},1}&\cdots & \mathbf{R}_{N_{r},N_{r}} \\ \end{array} \right], \end{array} $$ where, for m,n∈{1,⋯,Nr}, Rm,n is a 2Nt×2Nt matrix given by (10), where \(\sigma ^{2}_{r}\), \(\sigma ^{2}_{t}\), and \(\sigma ^{2}_{\text {SI}}\) are the PN variances due to receive, transmit, and SI antennas, respectively. $$ {\begin{aligned} \mathbf{R}_{m,n}=\left\{ \begin{array}{ll} \sigma^{2}_{r}\boldsymbol{1}_{2N_{t}\times2N_{t}}+ \text{diag}\left(\underbrace{\sigma^{2}_{t},\cdots,\sigma^{2}_{t}}_{2N_{t}}\right),& m=n \text{ and\ is\ odd}\\ \sigma^{2}_{r}\boldsymbol{1}_{2N_{t}\times2N_{t}}+\text{diag}\left(\underbrace{\sigma^{2}_{\text{SI}},\cdots,\sigma^{2}_{\text{SI}}}_{2N_{t}}\right),& m=n \text{ and is even}\\ \text{diag}\left(\underbrace{\sigma^{2}_{t},\cdots,\sigma^{2}_{t}}_{2N_{t}}\right),& m \neq n \text{ and is odd}\\ \text{diag}\left(\underbrace{\sigma^{2}_{\text{SI}},\cdots,\sigma^{2}_{\text{SI}}}_{2N_{t}}\right),& m \neq n \text{ and is even}\\ \end{array}\right.
\end{aligned}} $$ The EKF state update equations are given by [35] $$\begin{array}{*{20}l} \boldsymbol{\hat{\beta}}(n|n)&=\boldsymbol{\hat{\beta}}(n|n-1) \\ &\quad+ \mathfrak{R}\left\{\mathbf{K}(n)\left(\mathbf{y}(n)-\bar{\mathbf{H}}(n)e^{j\boldsymbol{\hat{\beta}}(n|n-1)}\right)\right\}, \end{array} $$ $$\begin{array}{*{20}l} \boldsymbol{\hat{\beta}}(n|n-1)&=\boldsymbol{\hat{\beta}}(n-1|n-1), \end{array} $$ $$\begin{array}{*{20}l} \mathbf{K}(n)&=\mathbf{M}(n|n-1)\mathbf{D}^{\dag}(n) \\&\quad\times\left(\sigma^{2}\mathbf{I}_{N_{r}}+\mathbf{D}(n)\mathbf{M}(n|n-1)\mathbf{D}^{\dag}(n)\right)^{-1}, \end{array} $$ $$\begin{array}{*{20}l} \mathbf{M}(n|n-1)&=\mathbf{M}(n-1|n-1)+\mathbf{Q}, \end{array} $$ $$\begin{array}{*{20}l} \mathbf{M}(n|n)&=\mathfrak{R}\left\{\left(\mathbf{I}_{2N_{t}N_{r}}-\mathbf{K}(n)\mathbf{D}(n)\right)\mathbf{M}(n|n-1)\right\}, \end{array} $$ where, for \(\bar{k}\in \{1,\cdots,2N_{t}\}\) and \(k=\lceil \bar{k}/2 \rceil\), $$\begin{array}{*{20}l} &\mathbf{D}(n)=\frac{\partial\bar{\mathbf{H}}(n)e^{j\boldsymbol{\beta}(n)}}{\partial\boldsymbol{\beta}^{T}(n)}=\left(\begin{array}{ccc} \mathbf{z}_{1} & \boldsymbol{0}^{T}_{2N_{t}} & \boldsymbol{0}^{T}_{2N_{t}} \\ \boldsymbol{0}^{T}_{2N_{t}} & \ddots & \vdots \\ \boldsymbol{0}^{T}_{2N_{t}} & \boldsymbol{0}^{T}_{2N_{t}} & \mathbf{z}_{N_{r}} \\ \end{array} \right), \end{array} $$ $$\begin{array}{*{20}l} &[\mathbf{z}_{i}]_{\bar{k}}=\left\{\begin{array}{ll} h_{i,k}x_{k}(n)e^{j\hat{\beta}_{i,\bar{k}}(n|n-1)} & {\bar{k}} \text{ is odd;} \\ h^{\text{SI}}_{i,k}x^{\text{SI}}_{k}(n)e^{j\hat{\beta}_{i,\bar{k}}(n|n-1)} & {\bar{k}} \text{ is even} \end{array}\right., \end{array} $$ and \(\hat {\beta }_{i,\bar{k}}(n|n-1)\) is the \(2(i-1)N_{t}+\bar{k}\)th element of vector \(\boldsymbol {\hat {\beta }}(n|n-1)\). We note that the state vector, as given by (8), is a real vector. This is because the state vector only contains the phases, which are real numbers. The complex channel coefficients are estimated using this estimated real vector and the complex exponential function, as given by (7).
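A single iteration of the updates (11)-(15) can be sketched as follows. This is a minimal illustration, not the authors' implementation; note that the Jacobian of the observation with respect to β carries a factor j from differentiating the complex exponential, which we write out explicitly here.

```python
import numpy as np

def ekf_step(y, Hbar, beta_prev, M_prev, Q, sigma2):
    """One EKF iteration for the phase-noise state, following (11)-(15).
    beta_prev, M_prev: filtered mean and covariance from time n-1;
    Q: state covariance as in (9); sigma2: measurement-noise variance."""
    beta_pred = beta_prev                       # (12): random-walk prediction
    M_pred = M_prev + Q                         # (14)
    s = np.exp(1j * beta_pred)
    D = 1j * Hbar * s[None, :]                  # Jacobian of Hbar e^{j beta} w.r.t. beta
    S = sigma2 * np.eye(len(y)) + D @ M_pred @ D.conj().T   # innovation covariance
    K = M_pred @ D.conj().T @ np.linalg.inv(S)  # (13): Kalman gain
    beta_filt = beta_pred + np.real(K @ (y - Hbar @ s))     # (11): keep real part
    M_filt = np.real((np.eye(len(beta_pred)) - K @ D) @ M_pred)  # (15)
    return beta_filt, M_filt
```

A basic consistency property: if the observation is noiseless and the predicted state equals the true state, the innovation is zero and the estimate is unchanged.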
Since the states are all real, when updating the mean of the states in the EKF, we can safely discard the imaginary part of the updated mean, as in (11). Symbol detection The EKF Eq. (17) shows that zi requires knowledge of the constant channels hi,k, \(h_{i,k}^{\text {SI}}\) and the transmitted symbols. Note that the SI symbol \(x^{\text {SI}}_{k}\) is perfectly known at the receiver. Knowledge of the constant channels can be obtained using pilot-based estimation during the initial half-duplex (HD) phase of the communication. In addition, the transmitted symbols at time n are detected using the initial channel estimates and the estimates of the state vector β at time n−1. This is possible because, at time n of the EKF algorithm, β(n−1) has already been estimated. This procedure is shown in Fig. 2. Time diagram of modified EKF Lower bound of estimation error In this section, we derive a lower bound on the estimation error of the estimator proposed in the previous subsection. We first note that the mean square error (MSE) for estimating the state vector β(n) is given by $$\begin{array}{*{20}l} \text{MSE}=\text{trace}\left(\mathbb{E}\left[\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)^{T}\right]\right) \end{array} $$ With the above definition of the MSE, we present the following proposition. The MSE of the EKF is lower bounded by trace(Q), i.e., $$\begin{array}{*{20}l} \text{MSE}\geq \text{trace}\left(\mathbf{Q}\right), \end{array} $$ where Q is the state covariance matrix given by (9). See Appendix A. □ We note that (19) shows that the lower bound on the estimation error increases as the sum of the diagonal elements of the covariance matrix of the states increases. Furthermore, (9) indicates that the diagonal elements of the state covariance matrix are functions of the PN variances. Consequently, increasing the PN variance results in a larger estimation error.
Since the residual SI cancelation is performed using the estimated SI channel, increasing the PN variance will result in worse SI cancelation performance. It is also worth noting that [34] shows the PN variance to be a monotonically increasing function of the carrier frequency. This means that the estimation error increases as the carrier frequency increases. Complexity analysis of EKF For the complexity analysis of the proposed joint channel and PN estimation technique, we take the approach used by [36, 37] and count the number of multiplications and additions used in each step of the EKF algorithm. Table 2 shows the complexity of each step of the EKF algorithm in \(\mathcal {O}\)-notation. The corresponding complexity calculations for this table can be found in Appendix B. Table 2 Complexity of each step of EKF algorithm According to Table 2, the EKF has polynomial complexity in the numbers of transmit (Nt) and receive (Nr) antennas. We can justify the increased complexity as follows. In [20], the authors propose an algorithm for channel estimation with linear complexity. However, the algorithm in [20] assumes a constant PN over a block of data. This could be an acceptable scenario in microwave communication but does not suit mmWave communication. Hence, the increased complexity of the proposed algorithm is justified by the fast variation of the PN, i.e., PN variation over a symbol time. Simulation results In this section, we present simulation results for MIMO FD systems at 60 GHz, which lies in the mmWave frequency band [3]. For each simulation run, we assume a communication packet is 40 symbols long, i.e., N=40. This communication packet is transmitted after the training packet, which is 2Nt symbols long and is used for estimating the constant channels for EKF initialization, as described in Section 3.1. We then use 10,000 simulation runs to obtain the desired simulation results.
Moreover, we use the assumptions presented in Section 2 to generate the random noise and PN. As summarized in [38], there are many channel models available for mmWave systems. In this work, similar to a large number of existing works [24, 29, 39–41], we adopt a general Rician model. Note that the proposed estimator is independent of the adopted model. A performance comparison of the different mmWave channel models is outside the scope of this work. We generate the random SI and communication channels (HSI/COM) using the Rician distribution as follows: $$\begin{array}{*{20}l} \mathbf{H}_{\text{SI/COM}}=\sqrt{\frac{K}{K+1}}\mathbf{H}_{\text{LoS}}+\sqrt{\frac{1}{K+1}}\mathbf{H}_{\text{NLoS}}, \end{array} $$ where K is the Rician K-factor; HLoS is the LoS component of the channel, generated assuming a uniformly distributed angle of arrival using the approach presented in [24]; and HNLoS is the NLoS component of the channel, generated for both the SI and communication channels assuming Rayleigh fading. Furthermore, for both the SI and communication channels, we set the K-factor to 2 dB. We note that the SI and communication channels have different power intensities, i.e., \(\mathbb {E}\left [\mathbf {H}_{\text {SI}}\mathbf {H}^{\dag }_{\text {SI}}\right ] \neq \mathbb {E}\left [\mathbf {H}_{\text {COM}}\mathbf {H}^{\dag }_{\text {COM}}\right ]\). Assuming that the LoS power of the residual SI (the SI signal after passive and analog cancelation) is the same as the LoS power of the communication signal, the signal-to-interference ratio (SIR) is given by: $$\begin{array}{*{20}l} \text{SIR}=\frac{\sigma^{2}_{\text{COM}}}{\sigma^{2}_{\text{SI}}}, \end{array} $$ where \(\sigma ^{2}_{\text {COM}}\) and \(\sigma ^{2}_{\text {SI}}\) are the variances of the NLoS components of the communication and SI channels, respectively.
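A hedged sketch of the channel generation in (20): the LoS steering-vector construction of [24] is simplified here to unit-modulus entries with uniform random phases, so the block illustrates only the Rician mixing itself, under that stated assumption.

```python
import numpy as np

def rician_channel(Nr, Nt, K_dB, sigma2_nlos=1.0, seed=None):
    """Draw a flat Rician channel as in (20). The LoS part is modeled as
    unit-modulus entries with uniform random phases (a simplification of
    the steering-vector approach of [24]); the NLoS part is Rayleigh
    fading with per-entry variance sigma2_nlos."""
    rng = np.random.default_rng(seed)
    K = 10.0 ** (K_dB / 10.0)                   # K-factor from dB to linear
    H_los = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(Nr, Nt)))
    H_nlos = np.sqrt(sigma2_nlos / 2.0) * (
        rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))
    return np.sqrt(K / (K + 1.0)) * H_los + np.sqrt(1.0 / (K + 1.0)) * H_nlos
```

With sigma2_nlos = 1, the average entry power is K/(K+1) + 1/(K+1) = 1 for any K; the SIR in (21) then follows from the ratio of the NLoS variances chosen for the communication and SI channels.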
In addition, SNR is defined as $$\begin{array}{*{20}l} \text{SNR}\triangleq \frac{\mathbb{E}[E_{s}]}{\sigma^{2}}, \end{array} $$ where Es is the symbol energy, \(\mathbb {E}[E_{s}]=1\), and σ2 is the noise variance. Finally, we use the MSE for the state vector at time N=40. This MSE is obtained by rewriting (18) in terms of the squared Euclidean norm of a vector, i.e., $$\begin{array}{*{20}l} \mathbb{E}\left[\left|\left|\boldsymbol{\beta}(N)-\boldsymbol{\hat{\beta}}(N)\right|\right|^{2}_{2}\right]. \end{array} $$ In what follows, we first present the MSE results for different FD MIMO communication systems. We then investigate the residual SI power after digital cancelation and the bit error rate (BER) performance of these systems with the proposed PN estimation technique. MSE performance In this section, we investigate the MSE performance of the proposed PN estimation technique for a 2×2 FD MIMO system and assume that SIR=0 dB, i.e., the SI signal is as strong as the desired communication signal. Figure 3 shows the MSE performance of the proposed system against the theoretical bound derived in Section 3.2 for different quadrature amplitude modulations (QAM) and different PN variances. First, as discussed in Remark 2, the estimation performance degrades with increasing PN variance. Second, it can be observed from this figure that lower-order modulations perform better than higher-order modulations. This is because, as shown in Section 3.1, the EKF algorithm requires detecting the transmitted symbols; hence, the MSE of the EKF is affected by detection errors. Finally, Fig. 3 shows that at high SNRs, the MSE performance of the proposed estimator approaches the lower bound. MSE performance for PN variances \(\sigma ^{2}_{r}=\sigma ^{2}_{t}=\sigma ^{2}_{\text {SI}}=10^{-4},10^{-5}\) and different QAM modulations for a 2×2 FD MIMO system with SIR =0 dB In Fig.
3, we also plot the MSE result of the state-of-the-art pilot-based phase noise estimator of [20, 23] for microwave frequencies. As expected, this estimator does not perform well compared to our proposed estimator. This is because it assumes that the PN variations are small, which is not the case at mmWave frequencies. Note that we only show the MSE result of the estimator in [20, 23] for 64−QAM modulation, since its MSE performance is invariant with respect to the modulation order (the estimator uses pilots and does not require detection). Comparison with unscented Kalman filter We compare the performance of the proposed EKF estimator with the unscented Kalman filter (UKF). The UKF provides an alternative to linearizing the observations. The detailed implementation of the UKF is provided in Appendix C. Figure 4 shows the performance of the EKF and UKF estimators for 8−QAM modulation, SIR =0 dB, and different PN variances. We can see that the MSE performance of the proposed EKF estimator is better than that of the UKF estimator. This is because (i) the UKF works with a sigma-point approximation of the mean of the state process, while the EKF tracks the PN based on the true mean of the linear state process; (ii) while the MSE performance of both the EKF and the UKF is degraded by detection errors, these errors affect the UKF more than the EKF, because the sigma-point calculations are more sensitive to symbol-detection errors (Section 3.1); and (iii) the UKF is inherently better suited to systems with strong nonlinearities, i.e., where both the state and observation models are nonlinear; in our case, only the observation model in (7) is nonlinear.
MSE performance of the UKF and proposed EKF for PN variances \(\sigma ^{2}_{r}=\sigma ^{2}_{t}=\sigma ^{2}_{\text {SI}}=10^{-4},10^{-5}\) and 8–QAM modulation for a 2×2 FD MIMO system with SIR =0 dB Residual SI power In this section, we numerically investigate the remaining SI power after digital cancelation for a 2×2 MIMO FD system with 64−QAM modulation, assuming the PN variance for all the oscillators is 10−4. This residual power is given by $$\begin{array}{*{20}l} P_{\text{SI}}=\left|\left|\left(\mathbf{H}^{\text{SI}}(n)-\bar{\mathbf{H}}^{\text{SI}}(n)\right)\mathbf{x}^{\text{SI}}(n)\right|\right|_{2}, \end{array} $$ where ||·||2 is the Euclidean norm of a vector, and \(\bar {\mathbf {H}}^{\text {SI}}(n)\) is the estimate of the SI channel obtained with the proposed EKF estimator. Figure 5 shows the residual SI power for different SIR values, where an SIR value of 0 dB indicates that the passive and analog cancelation stages have managed to reduce the SI power to the same level as the desired signal power. The residual SI power PSI after digital cancelation The numerical results of Fig. 5 show that the performance of the digital canceler depends on the residual SI power after the passive and analog cancelation stages. As the residual SI power after passive and analog cancelation decreases, so does the residual SI power after digital cancelation. The results show that the residual SI power can be reduced to − 25 and − 40 dB for SIRs of 0 and 15 dB, respectively. This is important, as it shows the effectiveness of digital SI cancelation after the passive and analog cancelation stages. BER performance Finally, in this section, we present the BER results of a 2×2 FD MIMO system with different QAM modulations, assuming that the PN variance for all oscillators is 10−4. Figure 6 shows the BER performance of the system for different values of SIR. The results are consistent with the residual SI power results of Fig. 5, i.e., the higher the SIR, the better the BER.
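The residual-power metric (24) is a one-liner in practice; the sketch below reports it in dB (20 log10 of the norm, since (24) is defined as an amplitude). The example channel values are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

def residual_si_power_db(H_si, H_si_hat, x_si):
    """Residual self-interference after digital cancelation, eq. (24):
    the canceler subtracts H_si_hat @ x_si, leaving (H_si - H_si_hat) @ x_si.
    Returned in dB."""
    residual = (H_si - H_si_hat) @ x_si
    return 20.0 * np.log10(np.linalg.norm(residual))

# Hypothetical example: a 10% amplitude error on a single SI channel tap
H_si = np.eye(2, dtype=complex)
H_si_hat = H_si.copy()
H_si_hat[0, 0] = 0.9
p_db = residual_si_power_db(H_si, H_si_hat, np.ones(2))  # approximately -20 dB
```

The better the channel estimate, the smaller the residual term, which is exactly the dependence of Fig. 5 on the SIR after passive and analog cancelation.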
Furthermore, the 8−QAM system performs better than the 64−QAM system, which is consistent with the MSE results of Fig. 3. BER performance of the proposed system for a 2×2 MIMO FD system with different QAM modulations In this paper, we considered a MIMO FD system for mmWave communication and proposed a joint channel and PN estimation algorithm. We also derived a lower bound on the estimation error and numerically showed that the MSE of the proposed estimator approaches the error bound. Furthermore, we investigated the residual SI power after digital cancelation and showed that the digital canceler, which uses the estimated SI channel, can reduce the SI power to between − 25 and − 40 dB. These results indicate the effectiveness of digital cancelation after the passive and analog cancelation stages. A lower bound of the estimation error In this section, we derive the lower bound of the estimation error. We start the proof by expanding \(\mathbb {E}\left [\left (\boldsymbol {\beta }(n)-\boldsymbol {\hat {\beta }}(n)\right)\left (\boldsymbol {\beta }(n)-\boldsymbol {\hat {\beta }}(n)\right)^{T}\right ]\). $$ {\begin{aligned} &\mathbb{E}\left[\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)^{T}\right]=\mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\beta}^{T}(n)\right]\\ &\quad+\mathbb{E}\left[\boldsymbol{\hat{\beta}}(n)\boldsymbol{\hat{\beta}}^{T}(n)\right] -\mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\hat{\beta}}^{T}(n)\right] -\mathbb{E}\left[\boldsymbol{\hat{\beta}}(n)\boldsymbol{\beta}^{T}(n)\right] \end{aligned}} $$ Next, we show that the last two terms of (25) are zero. We do this by showing only that \(\mathbb {E}\left [\boldsymbol {\beta }(n)\boldsymbol {\hat {\beta }}^{T}(n)\right ]=0\), since a similar approach can be used to show that \(\mathbb {E}\left [\boldsymbol {\hat {\beta }}(n)\boldsymbol {\beta }^{T}(n)\right ]=0\).
We first note that β(n) given by (8) is a Gaussian autoregressive model (AR) with mean zero, i.e., \(\mathbb {E}\left [\boldsymbol {\beta }(n)\right ]=0\). Hence, $$ {\begin{aligned} \mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\hat{\beta}}^{T}(n)\right]&=\int\int \boldsymbol{\beta}(n)\boldsymbol{\hat{\beta}(n)}~p(\boldsymbol{\beta}(n),\mathbf{y}(n))~d\boldsymbol{\beta}(n)~d\mathbf{y}(n)\\&=\int\boldsymbol{\hat{\beta}(n)}\int \boldsymbol{\beta}(n)~p(\boldsymbol{\beta}(n))~d\boldsymbol{\beta}(n)~p(\mathbf{y}(n)|\boldsymbol{\beta}(n))~d\mathbf{y}(n)\\ &=\int\boldsymbol{\hat{\beta}(n)}\mathbb{E}\left[\boldsymbol{\beta}(n)\right]~p(\mathbf{y}(n)|\boldsymbol{\beta}(n))~d\mathbf{y}(n)=0. \end{aligned}} $$ Consequently, we can rewrite (25) as follows: $$ \begin{aligned} &\mathbb{E}\left[\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)^{T}\right]\\ &=\mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\beta}^{T}(n)\right] +\mathbb{E}\left[\boldsymbol{\hat{\beta}}(n)\boldsymbol{\hat{\beta}}^{T}(n)\right] \end{aligned} $$ It is easy to show that \(\mathbb {E}\left [\boldsymbol {\hat {\beta }}(n)\boldsymbol {\hat {\beta }}^{T}(n)\right ]\) is a positive semi-definite matrix and hence $$\begin{array}{*{20}l} \mathbb{E}\left[\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)\left(\boldsymbol{\beta}(n)-\boldsymbol{\hat{\beta}}(n)\right)^{T}\right]\geq \mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\beta}^{T}(n)\right] \end{array} $$ Furthermore, the properties of trace allows us to write $$ {{\begin{aligned} \text{trace}\left(\mathbb{E}\left[\left(\boldsymbol{\beta}(n)\,-\,\boldsymbol{\hat{\beta}}(n)\right)\left(\boldsymbol{\beta}(n)\,-\,\boldsymbol{\hat{\beta}}(n)\right)^{T}\right]\right)\geq \text{trace}\left(\mathbb{E}\left[\boldsymbol{\beta}(n)\boldsymbol{\beta}^{T}(n)\right]\right) \end{aligned}}} $$ Finally, using (29) and the definitions of Q and MSE in (9) and (18), we can 
establish the proof of the proposition. B Complexity analysis of EKF In this section, we provide the complexity analysis of the EKF algorithm by counting the number of multiplications and additions. However, before we proceed, it can easily be shown that every entry of product of a K×L matrix by a L×M matrix requires L multiplications and L−1 additions, and hence, the whole matrix requires KML multiplications and KM(L−1) additions, where KM is the size of the resulting matrix. Furthermore, it is known that matrix inversion has the same complexity in terms of additions and multiplication as the matrix multiplication, up to a multiplicative constant γ [42]. We can now proceed with calculating the complexity of EKF algorithm in (30) to (32). $$ {\begin{aligned} &\boldsymbol{\hat{\beta}}(n|n)=\!\underbrace{\boldsymbol{\hat{\beta}}(n|n-1) \underbrace{\mathfrak{R}\left\{\mathbf{K}(n)\underbrace{\left(\mathbf{y}(n)- \underbrace{\bar{\mathbf{H}}(n)e^{j\boldsymbol{\hat{\beta}}(n|n-1)}}_{N_{r}(2N_{t}N_{r})+N_{r}(2N_{t}N_{r}-1)} \right)}_{N_{r}+N_{r}(2N_{t}N_{r})+N_{r}(2N_{t}N_{r}-1)} \right\}}_{2N_{t}N^{2}_{r}+2N_{t}N_{r}(N_{r}-1)+N_{r}+N_{r}(2N_{t}N_{r})+N_{r}(2N_{t}N_{r}-1)}}_{ \begin{aligned} &2N_{t}N_{r}\,+\,2N_{t}N^{2}_{r}\,+\,2N_{t}N_{r}(N_{r}-1)\,+\,N_{r}\,+\,N_{r}(2N_{t}N_{r})\,+\,N_{r}(2N_{t}N_{r}-1) \end{aligned}}, \end{aligned}} $$ $$ {{\begin{aligned} &\mathbf{K}(n)=\mathbf{M}(n|n-1)\mathbf{D}^{\dag}(n) \\ &\underbrace{{{\left({\sigma^{2}\mathbf{I}_{N_{r}}+{\mathbf{D}(n){\mathbf{M}(n|n-1)\mathbf{D}^{\dag}(n)}}}\right)^{-1}}}}_{\begin{aligned} &4N_{t}^{2}N_{r}^{3}+2N_{t}N_{r}^{2}(2N_{t}N_{r}-1)+2N_{t}N_{r}^{3}+2N_{t}N_{r}^{2}(N_{r}-1)\\&+\gamma N_{r}^{3}+\gamma N_{r}^{2}(N_{r}-1)+N_{r}^{2}+2N_{t}N_{r}^{2}+N_{r}(2N_{t}N_{r}-1)\\&+4N_{t}^{2}N_{r}^{3}+2N_{t}N_{r}^{2}(2N_{t}N_{r}-1)\end{aligned}}, \end{aligned}}} $$ $$\begin{array}{*{20}l} &\mathbf{M}(n|n-1)=\underbrace{\mathbf{M}(n-1|n-1)+\mathbf{Q}}_{4N_{t}^{2}N_{r}^{2}}, \end{array} $$ $$ {\begin{aligned} 
&\mathbf{M}(n|n)=\mathfrak{R}\left\{\underbrace{\mathbf{M}(n|n-1)\underbrace{\left(\mathbf{I}_{2N_{t}N_{r}}-\underbrace{\mathbf{K}(n)\mathbf{D}(n)}_{2N_{t}N_{r}^{3}+2N_{t}N_{r}^{2} (N_{r}-1)}\right)}_{4N_{t}^{2}N_{r}^{2}+2N_{t}N_{r}^{3}+2N_{t}N_{r}^{2} (N_{r}-1)}}_{8N_{t}^{3}N_{r}^{3}+4N_{t}^{2}N_{r}^{2}(2N_{t}N_{r}-1)+2N_{t}N_{r}^{3}+2N_{t}N_{r}^{2} (N_{r}-1)}\right\}, \end{aligned}} $$ C Unscented Kalman filter (UKF) The unscented Kalman filter (UKF) provides an alternative to the EKF for nonlinear state vector estimation. In the UKF, instead of linearizing the observation vector, the probability distributions of the states and observations are approximated using sigma points [43]. The UKF can solve a very general class of problems in which both the state process and the observations are nonlinear. However, the joint channel and PN estimation problem, as given by the observation vector (7) and the state vector (8), has a linear state process and additive noise. This allows the use of non-augmented state vectors for the UKF [44]. For the state vector β(n) in (8), the sigma points \(\mathcal {B}(i,n)\) are given by $$\begin{array}{*{20}l} \mathcal{B}(0,n|n-1)&=\mathcal{B}(0, n-1), \end{array} $$ $$\begin{array}{*{20}l} \mathcal{B}(i, n|n-1)&=\mathcal{B}(i, n-1)+\left(\sqrt{\left\{\left(L+\lambda\right)\mathbf{Q}\right\}}\right)_{i},\\ \quad i&=1,\cdots, L, \end{array} $$ (33b) $$\begin{array}{*{20}l} \mathcal{B}(i, n|n-1)&=\mathcal{B}(i, n-1)-\left(\sqrt{\left\{\left(L+\lambda\right)\mathbf{Q}\right\}}\right)_{i-L},\\ \quad i&=L+1,\cdots, 2L, \end{array} $$ (33c) where \(\sqrt {\{\cdot \}}\) is the matrix square root, (·)i is the ith column of the matrix, L=2NtNr is the state dimension, λ=α2L−L, where α=10−3 [43], and Q is the state covariance matrix given by (9).
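The sigma-point generation (33a)-(33c) and the associated weights used in the subsequent mean and covariance approximations can be sketched as follows (a minimal non-augmented unscented transform; the helper names are ours). By construction, the weighted mean of the points recovers the state mean, and the weighted covariance recovers Q.

```python
import numpy as np

def sigma_points(beta_mean, Q, alpha=1e-3):
    """Non-augmented sigma points as in (33a)-(33c), spread around the
    current state mean using a Cholesky factor as the matrix square root."""
    L = len(beta_mean)
    lam = alpha ** 2 * L - L
    S = np.linalg.cholesky((L + lam) * Q)   # columns are the spread directions
    pts = np.tile(beta_mean, (2 * L + 1, 1))
    pts[1:L + 1] += S.T                     # + ith column, i = 1..L
    pts[L + 1:] -= S.T                      # - (i-L)th column, i = L+1..2L
    return pts

def ut_weights(L, alpha=1e-3, beta=2.0):
    """Mean (W^m) and covariance (W^c) weights of the unscented transform."""
    lam = alpha ** 2 * L - L
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + (1.0 - alpha ** 2 + beta)
    return Wm, Wc
```

Because the points are placed symmetrically, the weighted sample mean equals the input mean exactly, and the weighted outer products of the deviations sum back to Q, which is the property the UKF exploits.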
Subsequently, the mean of the sigma points, which is used as an approximation to the true mean of the probability distribution of the states, is given by $$\begin{array}{*{20}l} \bar{\boldsymbol{\beta}}(n)=\sum_{i=0}^{2L}\mathcal{W}^{m}_{i} \mathcal{B}(i, n|n-1), \end{array} $$ $$\begin{array}{*{20}l} &\mathcal{W}^{m}_{0}=\frac{\lambda}{L+\lambda} \end{array} $$ $$\begin{array}{*{20}l} &\mathcal{W}^{m}_{i}=\frac{1}{2(L+\lambda)}, \quad i=1,\cdots,2L. \end{array} $$ Similarly, the covariance of the state vector based on the sigma-point approximation is given by $$\begin{array}{*{20}l} \bar{\mathbf{P}}_{n}&=\sum\limits_{i=0}^{2L}\mathcal{W}^{m}_{i} \left[\mathcal{B}(i, n|n-1)-\bar{\boldsymbol{\beta}}(n)\right]\\ &\quad\left[\mathcal{B}(i, n|n-1)-\bar{\boldsymbol{\beta}}(n)\right]^{*} \end{array} $$ Moreover, the sigma points for the observations and the corresponding approximate mean of the probability distribution of the observations are given by $$\begin{array}{*{20}l} &\mathcal{Y}(i, n|n-1)=\bar{\mathbf{H}}(n)e^{j \mathcal{B}(i, n|n-1)}, \end{array} $$ $$\begin{array}{*{20}l} &\bar{\mathbf{y}}(n)=\sum\limits_{i=0}^{2L}\mathcal{W}^{c}_{i} \mathcal{Y}(i, n|n-1), \end{array} $$ where \(\mathcal {W}^{c}_{0}=\mathcal {W}^{m}_{0}+(1-\alpha ^{2}+\beta)\), β=2, and \(\mathcal {W}^{c}_{i}=\mathcal {W}^{m}_{i}\) for i=1,⋯,2L.
Once the state and the process models are approximated by the sigma points using (33a)–(33c) and (38a), respectively, the updated mean \(\boldsymbol {\hat {\beta }}(n)\) and variance \(\hat {\mathbf {P}}_{n}\) can be calculated as follows: $$\begin{array}{*{20}l} &\boldsymbol{\hat{\beta}}(n)=\bar{\boldsymbol{\beta}}(n)+\mathcal{K}\left(\mathbf{y}(n)-\bar{\mathbf{y}}(n)\right), \end{array} $$ $$\begin{array}{*{20}l} &\hat{\mathbf{P}}_{n}=\bar{\mathbf{P}}_{n}-\mathcal{K}\mathbf{P}_{y,y}\mathcal{K}^{T}, \end{array} $$ $$\begin{array}{*{20}l} \mathcal{K}&=\mathbf{P}_{x,y}\mathbf{P}^{-1}_{y,y}, \end{array} $$ $$\begin{array}{*{20}l} \mathbf{P}_{x,y}&=\sum\limits_{i=0}^{2L}\mathcal{W}_{i}^{c}\left[\mathcal{B}(i, n|n-1)-\bar{\boldsymbol{\beta}}(n)\right]\left[\mathcal{Y}(i, n|n-1)-\bar{\mathbf{y}}(n)\right]^{*}, \end{array} $$ $$\begin{array}{*{20}l} \mathbf{P}_{y,y}&=\sum\limits_{i=0}^{2L}\mathcal{W}_{i}^{c}\left[\mathcal{Y}(i, n|n-1)-\bar{\mathbf{y}}(n)\right]\left[\mathcal{Y}(i, n|n-1)-\bar{\mathbf{y}}(n)\right]^{*}. \end{array} $$ Algorithm 1 summarizes the UKF joint channel and PN estimation algorithm. Indeed, the main focus of this work is to correctly estimate the channel and PN for effective SI cancelation. In the case of inter-node interference [45], the proposed estimator would need to be modified. However, in the special case where the inter-node interference can be treated as Gaussian, the system model given by (1) can capture the effect of the inter-node interference by including an additional Gaussian noise term due to inter-node interference. S. A Busari, K. M. S Huq, S Mumtaz, L Dai, J Rodriguez, Millimeter-wave massive MIMO communication for future wireless systems: a survey. IEEE Commun. Surveys Tuts. 20(2), 836–869 (2018). R. W Heath, N Gonzalez-Prelcic, S Rangan, W Roh, A. M Sayeed, An overview of signal processing techniques for millimeter wave MIMO systems. IEEE J. Sel. Topics Signal Process. 10(3), 436–453 (2016). T.
S Rappaport, S Sun, R Mayzus, H Zhao, Y Azar, K Wang, G. N Wong, J. K Schulz, M Samimi, F Gutierrez, Millimeter wave mobile communications for 5G cellular: it will work!IEEE Access. 1:, 335–349 (2013). S Rajagopal, S Abu-Surra, J Zhang, in Proc. IEEE SPAWC. Spectral mask filling for PAPR reduction in large bandwidth mmWave systems, (2015), pp. 131–135. S Buzzi, C D'Andrea, T Foggi, A Ugolini, G Colavolpe, Single-carrier modulation versus OFDM for millimeter-wave wireless MIMO. IEEE Trans. Commun. 66(3), 1335–1348 (2018). M Duarte, C Dick, A Sabharwal, Experiment-driven characterization of full-duplex wireless systems. IEEE Trans. Wireless Commun. 11(12), 4296–4307 (2012). Z Zhang, X Chai, K Long, A. V Vasilakos, L Hanzo, Full duplex techniques for 5G networks: self-interference cancellation, protocol design, and relay selection. IEEE Commun. Mag. 53(5), 128–137 (2015). H Mehrpouyan, M. R Khanzadi, M Matthaiou, A. M Sayeed, R Schober, Y Hua, Improving bandwidth efficiency in E-band communication systems. IEEE Commun. Mag. 52(3), 121–128 (2014). V Syrjala, M Valkama, L Anttila, T Riihonen, D Korpi, Analysis of oscillator phase-noise effects on self-interference cancellation in full-duplex OFDM radio transceivers. IEEE Trans. Wirel. Commun. 13(6), 2977–2990 (2014). S Hong, J Brand, J Choi, M Jain, J Mehlman, S Katti, P Levis, Applications of self-interference cancellation in 5G and beyond. IEEE Commun. Mag. 52(2), 114–121 (2014). A Koohian, H Mehrpouyan, M Ahmadian, M Azarbad, in Proc. IEEE ICC. Bandwidth efficient channel estimation for full duplex communication systems, (2015), pp. 4710–4714. M Duarte, A Sabharwal, in Proc. Asilomar Conf. on Signals, Syst. and Computers. Full-duplex wireless communications using off-the-shelf radios: feasibility and first results, (2010), pp. 1558–1562. M Duarte, A Sabharwal, V Aggarwal, R Jana, K. K Ramakrishnan, C. W Rice, N. K Shankaranarayanan, Design and characterization of a full-duplex multiantenna system for WiFi networks. 
IEEE Trans. Veh. Technol. 63(3), 1160–1177 (2014). E Everett, A Sahai, A Sabharwal, Passive self-interference suppression for full-duplex infrastructure nodes. IEEE Trans. Wirel. Commun. 13(2), 680–694 (2014). A. A Nasir, S Durrani, H Mehrpouyan, S. D Blostein, R. A Kennedy, Timing and carrier synchronization in wireless communication systems: a survey and classification of research in the last 5 years. EURASIP J. Wirel. Commun. Netw. 180(1) (2016). Available: https://doi.org/10.1186/s13638-016-0670-9. E Ahmed, A. M Eltawil, On phase noise suppression in full-duplex systems. IEEE Trans. Wirel. Commun. 14(3), 1237–1251 (2015). B Lee, J. B Lim, C Lim, B Kim, J. Y Seol, in Proc. IEEE Globecom Workshops. Reflected self-interference channel measurement for mmWave beamformed full-duplex system, (2015), pp. 1–6. A Demir, T Haque, E Bala, P Cabrol, in Proc. WAMICON. Exploring the possibility of full-duplex operations in mmWave 5G systems, (2016), pp. 1–5. T. A Thomas, M Cudak, T Kovarik, in Proc. IEEE ICC. Blind phase noise mitigation for a 72 GHz millimeter wave system, (2015), pp. 1352–1357. A Masmoudi, T Le-Ngoc, A maximum-likelihood channel estimator for self-interference cancelation in full-duplex systems. IEEE Trans. Veh. Technol. 65(7), 5122–5132 (2016). A Masmoudi, T Le-Ngoc, Channel estimation and self-interference cancelation in full-duplex communication systems. IEEE Trans. Veh. Technol. 66(1), 321–334 (2017). X Xiong, X Wang, T Riihonen, X You, Channel estimation for full-duplex relay systems with large-scale antenna arrays. IEEE Trans. Wireless Commun. 15(10), 6925–6938 (2016). R Li, A Masmoudi, T Le-Ngoc, Self-interference cancellation with nonlinearity and phase-noise suppression in full-duplex systems. IEEE Trans. Veh. Technol. 67(3), 2118–2129 (2018). H Mehrpouyan, A. A Nasir, S. D Blostein, T Eriksson, G. K Karagiannidis, T Svensson, Joint estimation of channel and oscillator phase noise in MIMO systems. IEEE Trans. Signal Process. 
60(9), 4790–4807 (2012). D Korpi, L Anttila, V Syrjala, M Valkama, Widely linear digital self-interference cancellation in direct-conversion full-duplex transceiver. IEEE J. Sel. Areas Commun. 32(9), 1674–1687 (2014). D Korpi, T Riihonen, V Syrjala, L Anttila, M Valkama, R Wichman, Full-duplex transceiver system calculations: analysis of ADC and linearity challenges. IEEE Trans. Wirel. Commun. 13(7), 3821–3836 (2014). L Samara, M Mokhtar, O Ozdemir, R Hamila, T Khattab, Residual self-interference analysis for full-duplex OFDM transceivers under phase noise and I/Q imbalance. IEEE Commun. Lett. 21(2), 314–317 (2017). E Ahmed, A. M Eltawil, A Sabharwal, Rate gain region and design tradeoffs for full-duplex wireless communications. IEEE Trans. Wirel. Commun. 12(7), 3556–3565 (2013). A. A Nasir, H Mehrpouyan, R Schober, Y Hua, Phase noise in MIMO systems: Bayesian Cramer Rao bounds and soft-input estimation. IEEE Trans. Signal Process. 61(10), 2675–2692 (2013). K. I Pedersen, G Berardinelli, F Frederiksen, P Mogensen, A Szufarska, A flexible 5G frame structure design for frequency-division duplex cases. IEEE Commun. Mag. 54(3), 53–59 (2016). S Dutta, M Mezzavilla, R Ford, M Zhang, S Rangan, M Zorzi, Frame structure design and analysis for millimeter wave cellular systems. IEEE Trans. Wirel. Commun. 16(3), 1508–1522 (2017). C Gustafson, K Haneda, S Wyne, F Tufvesson, On mm-wave multipath clustering and channel modeling. IEEE Trans. Antennas Propag. 62(3), 1445–1455 (2014). D Kim, H Ju, S Park, D Hong, Effects of channel estimation error on full-duplex two-way networks. IEEE Trans. Veh. Technol. 62(9), 4666–4672 (2013). M. R Khanzadi, R Krishnan, D Kuylenstierna, T Eriksson, in Proc. IEEE Globecom Workshops. Oscillator phase noise and small-scale channel fading in higher frequency bands, (2014), pp. 410–415. S. M Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, Inc., Upper Saddle River, 1993). A. A Nasir, H Mehrpouyan, S. 
D Blostein, S Durrani, R. A Kennedy, Timing and carrier synchronization with channel estimation in multi-relay cooperative networks. IEEE Trans. Signal Process. 60(2), 793–811 (2012). A Koohian, H Mehrpouyan, A. A Nasir, S Durrani, S. D Blostein, Superimposed signaling inspired channel estimation in full-duplex systems. EURASIP J. Adv. Signal Process. 8(2018). Available: https://doi.org/10.1186/s13634-018-0529-9. I. A Hemadeh, K Satyanarayana, M El-Hajjar, L Hanzo, Millimeter-wave communications: physical channel models, design considerations, antenna constructions, and link-budget. IEEE Commun. Surveys Tuts. 20(2), 870–913 (2018). A. G Siamarou, Digital transmission over millimeter-wave radio channels: a review [wireless corner]. IEEE Antennas Propag. Mag. 51(6), 196–203 (2009). J Zhang, L Dai, X Zhang, E Bjornson, Z Wang, Achievable rate of rician large-scale MIMO channels with transceiver hardware impairments. IEEE Trans. Veh. Technol. 65(10), 8800–8806 (2016). T. S Rappaport, G. R MacCartney, S Sun, H Yan, S Deng, Small-scale, local area, and transitional millimeter wave propagation for 5G communications. IEEE Trans. Antennas Propag. 65(12), 6474–6490 (2017). S Arora, B Barak, Computational Complexity: A Modern Approach, 1st edn. (Cambridge University Press, New York, 2009). E. A Wan, R. V. D Merwe, in Proc. IEEE Adaptive Systems for Signal Processing, Communications, and Control Symposium. The unscented Kalman filter for nonlinear estimation, (2000), pp. 153–158. Y Wu, D Hu, M Wu, X Hu, Unscented Kalman filtering for additive noise case: augmented versus nonaugmented. IEEE Signal Process. Lett. 12(5), 357–360 (2005). W Feng, Y Wang, D Lin, N Ge, J Lu, S Li, When mmWave communications meet network densification: a scalable interference coordination perspective. IEEE J. Sel. Areas Commun. 35(7), 1459–1471 (2017). The authors would like to thank Professor Taneli Rihhonen for his insightful comments on this work. 
The work of Abbas Koohian was supported by an Australian Government Research Training Program (RTP) Scholarship. The work of Hani Mehrpouyan was partially funded by the NSF ERAS grant award number 1642865. Research School of Electrical, Energy, and Materials Engineering, Australian National University, Canberra, Australia Abbas Koohian & Salman Durrani Department of Electrical and Computer Engineering, Boise State University, Boise, ID, USA Hani Mehrpouyan Department of Electrical Engineering, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia Ali A. Nasir All authors contributed equally to this work. The final manuscript has been read and approved by all authors for submission. Correspondence to Abbas Koohian. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Koohian, A., Mehrpouyan, H., Nasir, A. et al. Joint channel and phase noise estimation for mmWave full-duplex communication systems. EURASIP J. Adv. Signal Process. 2019, 18 (2019) doi:10.1186/s13634-019-0614-8 Millimeter-wave Residual self-interference power
Quantum Computing Stack Exchange is a question and answer site for engineers, scientists, programmers, and computing professionals interested in quantum computing. POVM three-qubit circuit for symmetric quantum states I have been reading this paper but don't yet understand how to implement a circuit to determine in which state the qubit is not, for a cyclic POVM. More specifically, I want to implement a cyclic POVM with $m=3$. Update: I came across having to implement this unitary matrix: $$ M= \frac{1}{\sqrt{2}}\left[ {\begin{array}{cc} 1 & 1 \\ 1 & w \\ \end{array} } \right] $$ where $w$ is a third root of unity, using rotations; after that I am stuck. quantum-state quantum-information circuit-construction mathematics quantum-operation xbk365 $\begingroup$ That's not a unitary matrix unless w=-1. $\endgroup$ – Craig Gidney Mar 6 at 19:49 This is not the unitary that you have to implement: you need a two-qubit unitary $$ \frac{1}{\sqrt{3}}\left(\begin{array}{cccc} 1 & 1 & 1 & 0 \\ 1 & \omega & \omega^2 & 0 \\ 1 & \omega^2 & \omega & 0 \\ 0 & 0 & 0 & \sqrt{3} \end{array}\right), $$ where $\omega=e^{2i\pi/3}$, the point being that if you introduce an ancilla qubit in the state 0, apply this unitary, and then measure in the computational basis, the 3 measurement outcomes 00, 01 and 10 correspond to the 3 POVM elements. I don't (yet) have a circuit implementation for this. You'll see the paper you cite carefully avoids talking about the Fourier transform in non-power-of-2 dimensions. You certainly could use the standard constructions based on Givens rotations, but the result is going to be fairly horrible. Here's an attempt at a circuit. I've made a few tweaks since I last ran it through a computer to check, so it's always possible that a slight error has crept in, but broadly...
Here, I'm using $Z^r$ to denote $$ \left(\begin{array}{cc} 1 & 0 \\ 0 & e^{i\pi r} \end{array}\right), $$ and $$ V=\frac{1}{\sqrt{3}}\left(\begin{array}{cc} 1 & \sqrt{2} \\ -\sqrt{2} & 1 \end{array}\right). $$ DaftWullie $\begingroup$ Hmm, I was actually talking about F2, not the whole unitary. Or, I am missing something? $\endgroup$ – chubakueno Mar 4 at 16:09 $\begingroup$ @chubakueno You said you wanted $m=3$, which means you need $F_3$, which is a $3\times 3$ matrix which we must embed into a $4\times 4$ matrix if we're using qubits. $\endgroup$ – DaftWullie Mar 4 at 16:11 $\begingroup$ I understand now. So, is there an example of IQFT for non power of twos there? I would be surprised to find there isn't, but what do I know. $\endgroup$ – chubakueno Mar 4 at 16:26 $\begingroup$ @chubakueno I didn't immediately find one. I've now constructed one for the $m=3$ case, but don't have time right now to write it into a circuit. $\endgroup$ – DaftWullie Mar 4 at 16:48 $\begingroup$ @xbk365 once i'm done with the evening's childcare responsibilities...
$\endgroup$ – DaftWullie Mar 4 at 18:21 $$\frac{1}{\sqrt{3}}\left(\begin{array}{ccc} 1 & 1 & 1 \\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \\ \end{array}\right) = \left(\begin{array}{cc} H & 0 \\ 0 & 1 \\ \end{array}\right) \cdot \frac{1}{\sqrt{3}}\left(\begin{array}{ccc} \sqrt{2} & 0 & 1 \\ 0 & \sqrt{3} & 0 \\ 1 & 0 & -\sqrt{2} \\ \end{array}\right) \cdot M_{3} $$ $$M_{3} = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}}i\omega^2 & \frac{1}{\sqrt{2}}i\omega \\ 0 & -\frac{1}{\sqrt{2}}\omega^2 & -\frac{1}{\sqrt{2}}\omega \\ \end{array}\right) $$ $$ \frac{1}{\sqrt{2}}\left(\begin{array}{cc} i\omega^2 & i\omega \\ -\omega^2 & -\omega \\ \end{array}\right) = X \cdot S \cdot X \cdot Z \cdot H \cdot \left(\begin{array}{cc} \omega^2 & 0 \\ 0 & \omega \\ \end{array}\right) \cdot Z $$ Danylo Y
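As a quick numerical sanity check (added here for illustration, not part of the original answers), one can verify with numpy that the proposed 4×4 embedding of the three-point Fourier matrix $F_3$ is indeed unitary:

```python
import numpy as np

# The 4x4 embedding of F_3 from the answer: F_3 on the first three basis
# states, identity on the fourth (the unused |11> ancilla-system state).
w = np.exp(2j * np.pi / 3)
U = np.array([[1, 1,    1,    0],
              [1, w,    w**2, 0],
              [1, w**2, w,    0],
              [0, 0,    0,    np.sqrt(3)]]) / np.sqrt(3)

# Unitarity: U U^dagger must equal the identity.
ok = np.allclose(U @ U.conj().T, np.eye(4))
```

If this check fails after a transcription of the circuit, the error is in the circuit decomposition rather than in the target matrix.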
What information does the Fourier transform carry? [duplicate] Why is the Fourier transform so important? 8 answers As one starts learning signal processing, the topic of Fourier Transforms inevitably comes up. Unfortunately I have difficulties not in computing but in interpreting the results of Fourier Transforms, in particular the Continuous-Time Fourier Transform (CTFT) of the signal $x(t)$, which is: $$X(j\omega) = \int_{-\infty}^{\infty}{x(t)e^{-j\omega t}dt}$$ Now I wonder: what kind of information does this $X(j\omega)$ give about the signal $x(t)$? An example is highly appreciated, if possible. fourier-transform marked as duplicate by Marcus Müller, Laurent Duval, MBaz, jojek♦, Peter K.♦ Feb 15 '16 at 12:59 $\begingroup$ Also have a look at this question and its answers. $\endgroup$ – Matt L. Feb 14 '16 at 8:21 There are a variety of Fourier Transforms (and Series), such as the Continuous-Time FT, Discrete-Time FT, and Discrete FT, all of which are generally attributed to the fundamental assertion made by J. B. Fourier at the beginning of the 19th century, which claims (without proof) that "if a continuous-time signal (function) x(t) is periodic with T, then it is possible to represent that signal x(t) as an infinite sum of harmonically related trigonometric functions (sines and cosines) as $ x(t)= \sum{ a_k \sin { 2\pi kt \over T} + b_k \cos { 2\pi kt \over T}}$ in which the weights $a_k$ and $b_k$ (the Fourier Coefficients) represent the amount of that particular harmonic in the signal x(t) being analysed" In fact the above argumentation is strictly for the Continuous-Time Fourier Series. But the core idea is generalised into Fourier Transforms of aperiodic and periodic (with the help of impulse $\delta (t)$ functions) signals. There are conditions on which signals can have such a representation.
In essence, computing a Fourier Transform means finding those coefficients $a_k$ and $b_k$, for which the method is given by the analysis equation of the Fourier transform, while the equality in the first paragraph is known as the synthesis equation. Even though it is quite straightforward to understand the meaning of those sinusoids inside a periodic signal, when it comes to non-periodic signals, for which we use the Fourier Transform, the exact meaning of what a single sine wave represents inside such a signal is a little vague; instead we emphasize the transient character of the signal under concern and the necessity of the existence of a continuum of infinitely many sine waves. Fat32
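To make the analysis equation concrete, here is a small numpy sketch (illustrative, not from the answer) that numerically recovers the sine coefficients of a square wave; the known result is $a_k = 4/(\pi k)$ for odd $k$ and $0$ for even $k$:

```python
import numpy as np

# Square wave over one period T, sampled on a dense uniform grid.
T = 1.0
N = 200000
t = np.arange(N) * (T / N)
x = np.sign(np.sin(2 * np.pi * t / T))

def sine_coeff(k):
    # a_k = (2/T) * integral over one period of x(t) sin(2*pi*k*t/T) dt,
    # approximated here by a rectangle-rule sum.
    return (2.0 / T) * np.sum(x * np.sin(2 * np.pi * k * t / T)) * (T / N)

a = [sine_coeff(k) for k in range(1, 4)]   # a_1, a_2, a_3
```

The computed values come out close to 4/π, 0 and 4/(3π), matching the textbook Fourier series of the square wave.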
Showing 1 to 10 of 241 matching articles for search term "Bootstrap". The use of random-model tolerance intervals in environmental monitoring and regulation Journal of Agricultural, Biological, and Environmental Statistics (2002-03-01) 7: 74–94, March 01, 2002 By Smith, Robert W. When appropriate data from regional reference locations are available, tolerance-interval bounds can be computed to provide criteria or limits distinguishing reference from nonreference conditions. If the limits are to be applied to locations and times beyond the original data, the data should include temporal and spatial variation and the tolerance interval calculations should utilize a random crossed or nested ANOVA statistical design. Two computational methods for such designs are discussed and evaluated with simulations. Both methods are shown to perform well, and the adverse effect of using an improper design model is demonstrated.
Three real-world applications are shown, where tolerance intervals are used to (1) establish a reference threshold for a benthic community pollution index, (2) set criteria for chemicals in sediments, and (3) establish background thresholds for survival rates in sediment bioassay tests. Some practical considerations in the use of the tolerance intervals are discussed. An 'apples to apples' comparison of various tests for exponentiality Computational Statistics (2017-12-01) 32: 1241–1283, December 01, 2017 By Allison, J. S.; Santana, L.; Smit, N.; Visagie, I. J. H. The exponential distribution is a popular model both in practice and in theoretical work. As a result, a multitude of tests based on varied characterisations have been developed for testing the hypothesis that observed data are realised from this distribution. Many of the recently developed tests contain a tuning parameter, usually appearing in a weight function. In this paper we compare the powers of 20 tests for exponentiality—some containing a tuning parameter and some that do not. To ensure a fair 'apples to apples' comparison between each of the tests, we employ a data-dependent choice of the tuning parameter for those tests that contain these parameters. The comparisons are conducted for various samples sizes and for a large number of alternative distributions. The results of the simulation study show that the test with the best overall power performance is the Baringhaus and Henze test, followed closely by the test by Henze and Meintanis; both tests contain a tuning parameter. The score test by Cox and Oakes performs the best among those tests that do not include a tuning parameter. Partially linear varying coefficient models with missing at random responses Annals of the Institute of Statistical Mathematics (2013-08-01) 65: 721–762, August 01, 2013 By Bravo, Francesco This paper considers partially linear varying coefficient models when the response variable is missing at random.
The paper uses imputation techniques to develop an omnibus specification test. The test is based on a simple modification of a Cramer von Mises functional that overcomes the curse of dimensionality often associated with the standard Cramer von Mises functional. The paper also considers estimation of the mean functional under the missing at random assumption. The proposed estimator lies in between a fully nonparametric and a parametric one and can be used, for example, to obtain a novel estimator for the average treatment effect parameter. Monte Carlo simulations show that the proposed estimator and test statistic have good finite sample properties. An empirical application illustrates the applicability of the results of the paper. Notes on the dimension dependence in high-dimensional central limit theorems for hyperrectangles Japanese Journal of Statistics and Data Science (2020-10-26): 1-41 , October 26, 2020 By Koike, Yuta Let $$X_1,\ldots ,X_n$$ be independent centered random vectors in $${\mathbb {R}}^d$$ . This paper shows that, even when d may grow with n, the probability $$P(n^{-1/2}\sum _{i=1}^nX_i\in A)$$ can be approximated by its Gaussian analog uniformly in hyperrectangles A in $${\mathbb {R}}^d$$ as $$n\rightarrow \infty$$ under appropriate moment assumptions, as long as $$(\log d)^5/n\rightarrow 0$$ . This improves a result of Chernozhukov et al. (Ann Probab 45:2309–2353, 2017) in terms of the dimension growth condition. When $$n^{-1/2}\sum _{i=1}^nX_i$$ has a common factor across the components, this condition can be further improved to $$(\log d)^3/n\rightarrow 0$$ . The corresponding bootstrap approximation results are also developed. These results serve as a theoretical foundation of simultaneous inference for high-dimensional models. 
Reducing bias in parameter estimates from stepwise regression in proportional hazards regression with right-censored data Lifetime Data Analysis (2008-03-01) 14: 65-85 , March 01, 2008 By Soh, Chang-Heok; Harrington, David P.; Zaslavsky, Alan M. When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of parameter estimate of selected parameters, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group. 
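The abstract above follows the generic bootstrap bias-correction recipe: estimate the bias as the mean of the bootstrap replicates minus the original estimate, then subtract it. A minimal illustration (with made-up data, applied to the plug-in variance rather than to stepwise Cox regression) might look like:

```python
import numpy as np

# Generic bootstrap bias correction: bias_hat = mean(theta*) - theta_hat.
# Illustrated on the plug-in variance, which is biased low by (n-1)/n.
rng = np.random.default_rng(0)
x = rng.normal(size=30)

theta_hat = x.var()                        # plug-in estimate on the sample
boot = np.array([rng.choice(x, size=x.size, replace=True).var()
                 for _ in range(2000)])
bias_hat = boot.mean() - theta_hat         # bootstrap estimate of the bias
theta_corrected = theta_hat - bias_hat     # = 2*theta_hat - mean(theta*)
```

Since the plug-in variance underestimates, the estimated bias is negative and the corrected value is pushed upward, as the abstract describes for the stepwise estimators.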
Testing for one-sided alternatives in nonparametric censored regression TEST (2012-09-01) 21: 498–518, September 01, 2012 By Heuchenne, Cédric; Pardo-Fernández, Juan Carlos Assume that we have two populations (X_1, Y_1) and (X_2, Y_2) satisfying two general nonparametric regression models Y_j = m_j(X_j) + ε_j, j = 1, 2, where m_j(⋅) is a smooth location function, ε_j has zero location and the response Y_j is possibly right-censored. In this paper, we propose to test the null hypothesis H_0: m_1 = m_2 versus the one-sided alternative H_1: m_1 < m_2. We introduce two test statistics for which we obtain the asymptotic normality under the null and the alternative hypotheses. Although the tests are based on nonparametric techniques, they can detect any local alternative converging to the null hypothesis at the parametric rate n^{−1/2}. The practical performance of a bootstrap version of the tests is investigated in a simulation study. An application to a data set about unemployment duration times is also included. Empirical process approach to some two-sample problems based on ranked set samples Annals of the Institute of Statistical Mathematics (2007-12-01) 59: 757–787, December 01, 2007 By Ghosh, Kaushik; Tiwari, Ram C. We study the asymptotic properties of both the horizontal and vertical shift functions based on independent ranked set samples drawn from continuous distributions. Several tests derived from these shift processes are developed. We show that by using balanced ranked set samples with bigger set sizes, one can decrease the width of the confidence band and hence increase the power of these tests. These theoretical findings are validated through small-scale simulation studies. An application of the proposed techniques to a cancer mortality data set is also provided. Model-based INAR bootstrap for forecasting INAR(p) models By Bisaglia, Luisa; Gerolimetto, Margherita In this paper we analyse some bootstrap techniques to make inference in INAR(p) models.
First of all, via Monte Carlo experiments we compare the performances of these methods when estimating the thinning parameters in INAR(p) models; we state the superiority of model-based INAR bootstrap approaches on block bootstrap in terms of low bias and Mean Square Error. Then we adopt the model-based bootstrap methods to obtain coherent predictions and confidence intervals in order to avoid difficulty in deriving the distributional properties. Finally, we present an empirical application. Coverage plots for assessing the variability of estimated contours of a density Statistics and Computing (1996-12-01) 6: 325-336 , December 01, 1996 By Lin, Xun-Guo; Pope, Alun Methods for assessing the variability of an estimated contour of a density are discussed. A new method called the coverage plot is proposed. Techniques including sectioning and bootstrap techniques are compared for a particular problem which arises in Monte Carlo simulation approaches to estimating the spatial distribution of risk in the operation of weapons firing ranges. It is found that, for computational reasons, the sectioning procedure outperforms the bootstrap for this problem. The roles of bias and sample size are also seen in the examples shown. Changepoint in dependent and non-stationary panels Statistical Papers (2020-08-01) 61: 1385-1407 , August 01, 2020 By Maciak, Matúš ; Pešta, Michal ; Peštová, Barbora Detection procedures for a change in means of panel data are proposed. Unlike classical inference tools used for the changepoint analysis in the panel data framework, we allow for mutually dependent and generally non-stationary panels with an extremely short follow-up period. Two competitive self-normalized test statistics are employed and their asymptotic properties are derived for a large number of available panels. The bootstrap extensions are introduced in order to handle such a universal setup. 
The novel changepoint methods are able to detect a common break point even when the change occurs immediately after the first time point or just before the last observation period. The developed tests are proved to be consistent. Their empirical properties are investigated through a simulation study. The invented techniques are applied to option pricing and non-life insurance.
Discontinuous Galerkin Time-Domain (DGTD) solver introduction The Discontinuous Galerkin Time-Domain (DGTD) algorithm solves the macroscopic Maxwell equations for isotropic, dispersive but non-magnetic materials. This section introduces the basic mathematical and physical formalism behind the DGTD algorithm: $$\begin{array}{l}{\frac{\partial \overrightarrow{D}}{\partial t}=\nabla \times \overrightarrow{H}} \\ {\frac{\partial \overrightarrow{H}}{\partial t}=-\frac{1}{\mu_{0}} \nabla \times \overrightarrow{E}}\end{array}$$ with the constitutive relation $$\overrightarrow{D}(\overrightarrow{r}, \omega)=\varepsilon_{0} \varepsilon_{r}(\overrightarrow{r}, \omega) \overrightarrow{E}(\overrightarrow{r}, \omega)$$ Here, \( \overrightarrow{E} \), \( \overrightarrow{H} \) and \( \overrightarrow{D} \) are the electric, magnetic and displacement fields, respectively. While the simulation takes place in the time domain, the DGTD solver can handle a relative permittivity specified in the frequency domain. Dispersive materials can be specified using models such as the Plasma (Drude), Debye or Lorentz model. Alternatively, the solver can use an automatic fitting of a suitable model to simulate materials with refractive index (n,k) values tabulated as a function of wavelength. The DGTD solver supports a variety of boundary conditions, including Perfect Electric Conductor (PEC), Perfect Magnetic Conductor (PMC), periodic, and absorbing boundary conditions. In addition, it supports Perfectly Matched Layers (PMLs) to more efficiently absorb radiation leaving the computational domain [see PML boundary conditions]. Meshing and polynomial order The DGTD solver works on an unstructured simplex mesh, which means that in 2D the mesh consists of triangles while in 3D it consists of tetrahedra. A mesh with smaller elements generally leads to a more accurate representation of the geometry and the fields and therefore gives more accurate results.
However, more elements also increase the simulation time. By default, the mesh generator will use approximately two elements per shortest wavelength in each material. In some cases, it is necessary to add a mesh constraint to either a domain or a surface to refine the mesh locally. On each element, the solver expands the electric field \( \overrightarrow{E} \) and the magnetic field \( \overrightarrow{H} \) into polynomials up to a user-specified order p. A higher order leads to a more accurate solution, but it also increases the computational cost.

Timestep Restriction

The DGTD solver uses an explicit time-stepping method, which means that there is a limit to how large each timestep can be before the simulation becomes unstable. This limit is determined by the smallest element in the mesh and by the polynomial order used. While there is no exact analytic expression for the largest stable timestep, the DGTD solver employs a heuristic formula that leads to fast and stable simulations in most cases.

Units and normalization

Unless otherwise specified, all quantities are returned in SI units.
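The solver's actual heuristic is proprietary and not published here, but the scaling it must capture can be sketched: an explicit DG timestep shrinks with the smallest element size and with increasing polynomial order. The \( h_{\min}/(c\,(2p+1)) \) form and the safety factor below are a common rule of thumb from the DG literature, not Lumerical's formula:

```python
C0 = 299792458.0  # vacuum speed of light, m/s

def stable_timestep(h_min, p, n_max=1.0, safety=0.5):
    """Rule-of-thumb stable timestep for an explicit DG scheme.

    h_min  : smallest element size in the mesh (m)
    p      : polynomial order of the field expansion
    n_max  : largest refractive index in the simulation
    safety : CFL-style safety factor (< 1); illustrative value
    """
    return safety * n_max * h_min / (C0 * (2 * p + 1))
```

Halving the smallest element, or raising the polynomial order, both force a smaller timestep, which is why local mesh refinement can noticeably slow an explicit solver.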
Detecting discordance enrichment among a series of two-sample genome-wide expression data sets

Volume 18 Supplement 1: Proceedings of the 27th International Conference on Genome Informatics: genomics

Yinglei Lai1, Fanni Zhang1, Tapan K. Nayak1, Reza Modarres1, Norman H. Lee2 & Timothy A. McCaffrey3. BMC Genomics volume 18, Article number: 1050 (2017)

With the current microarray and RNA-seq technologies, two-sample genome-wide expression data have been widely collected in biological and medical studies, and the related differential expression analysis and gene set enrichment analysis are frequently conducted. Integrative analysis can be conducted when multiple data sets are available. In practice, discordant molecular behaviors among a series of data sets can be of biological and clinical interest. In this study, a statistical method is proposed for detecting discordance gene set enrichment. Our method is based on a two-level multivariate normal mixture model. It is statistically efficient, with a parameter space that increases only linearly with the number of data sets. The model-based probability of discordance enrichment can be calculated for gene set detection. We apply our method to a microarray expression data set collected from forty-five matched tumor/non-tumor pairs of tissues for studying pancreatic cancer. We divided the data set into a series of non-overlapping subsets according to the tumor/non-tumor paired expression ratio of gene PNLIP (pancreatic lipase, recently shown to be associated with pancreatic cancer). The log-ratio ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). Our purpose is to understand whether any gene sets are enriched in discordant behaviors among these subsets (when the log-ratio is increased from negative to positive). We focus on KEGG pathways.
The detected pathways will be useful for our further understanding of the role of gene PNLIP in pancreatic cancer research. Among the top list of detected pathways, the neuroactive ligand receptor interaction and olfactory transduction pathways are the two most significant. Then, we consider gene TP53, which is well known for its role as a tumor suppressor in cancer research. Its log-ratio also ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). We divided the microarray data set again, this time according to the expression ratio of gene TP53. After the discordance enrichment analysis, we observed overall similar results, and the above two pathways are still the two most significant detections. More interestingly, only these two pathways had previously been identified as associated with pancreatic cancer in a pathway analysis of genome-wide association study (GWAS) data. This study illustrates that some disease-related pathways can be enriched in discordant molecular behaviors when an important disease-related gene changes its expression. Our proposed statistical method is useful for detecting these pathways. Furthermore, our method can also be applied to genome-wide expression data collected with the recent RNA-seq technology.

Genome-wide expression data have been widely collected with the recent microarray [1–3] and RNA-seq technologies [4, 5]. In addition to differential expression analysis for the identification of potential study-related biomarkers [6], gene set enrichment analysis (or gene set analysis) for the identification of study-related pathways (or gene sets) has received considerable attention in the recent literature [7, 8]. It enables us to detect weak but coherent changes in individual genes by aggregating information from a specific group of genes.
In the current public databases, large genome-wide expression data sets or multiple genome-wide expression data sets have been made available [3, 9]. For a large data set, multiple subsets can be generated according to different stages of an important feature. Integrative analysis enables us to detect weak but coherent changes in individual datasets through aggregating information from different datasets [10–12]. Integrative gene set enrichment is an approach that aggregates information from a specific group of genes among different datasets [13–15]. Due to the aforementioned complex analysis scenario, different analysis methods are needed to address different study purposes. For example, the study purpose can be to identify gene sets with statistical significance after data integration (without considering whether changes are positive or negative) and an extension of traditional meta-analysis method can be used, or the study purpose can be to identify gene sets with concordance enrichment and a mixture model based approach can be used. In this study, we consider a series of related genome-wide expression data sets collected at different stages of an important feature. For an illustrative example, RNA-seq data can be collected at many different growth time points and we are interested in the following study purpose. The gene expression in some pathways may be overall high at early time points and overall low at later time points. It is biologically interesting to identify these pathways with clearly discordant behaviors. Pang and Zhao [16] have recently suggested a stratified gene set enrichment analysis. (Jones et al. [17] also recently conducted a stratified gene expression analysis.) The analysis purpose in this study is different from theirs. As we have explained, to achieve an efficient analysis for the detection of discordance among a series of related genome-wide expression data sets, we need a specific statistical method. 
In a differential expression analysis and/or gene set enrichment analysis, it is usually unknown whether a gene is truly differentially expressed (up-regulated or down-regulated) or non-differentially expressed (null). Statistically, we can conduct a test (e.g. t-test) for the observations from each gene and obtain a p-value to evaluate how likely the gene is differentially expressed. The false discovery rate [6, 18] can be used to evaluate the proportion of false positives among claimed positives. Another approach can also be considered. It is based on the well-known finite normal-distribution mixture model [19]. Signed z-scores can be obtained from one-sided p-values [15, 20]. The assumption is that all the z-scores are a sample from a mixture model with three components: one with zero population mean representing non-differentially expressed genes and the other two with positive and negative population means representing up-regulated and down-regulated genes, respectively. The false discovery rate (FDR) can be conveniently calculated under this framework. In the mixture model approach, although the component information is still unknown, it can be estimated by the well-established E-M algorithm [19]. This information has been used to address the enrichment in concordance among different data sets [15]. In this study, our interest is to detect enrichment in discordance among a series of related genome-wide expression data sets collected at different stages of an important feature. The estimated component information can be useful in the calculation of the discordance enrichment probability (see "Methods" for details). Therefore, our method is developed based on a mixture model. In the "Methods" section, we will review the background for our mixture model based approach. Without any structural assumption, the model parameter space increases exponentially with the number of data sets.
Therefore, a novel statistical contribution of this study is that we propose a two-level mixture model whose parameter space increases only linearly with the number of data sets. The model parameters can be estimated by the well-established E-M algorithm, and the model-based probability of discordance enrichment can be calculated for gene set detection. Table 1 gives an artificial example to illustrate discordance enrichment. Assume there are six two-sample genome-wide expression data sets, and z-scores (see "Methods" for details) for all genes are calculated. Assume there is an important molecular pathway with nine genes, and their z-scores are shown in Table 1. A positive or negative z-score implies a possible up-regulation or down-regulation, respectively. In Table 1, there are several genes with some clearly positive and some clearly negative z-scores (absolute value greater than 4, say). For example, z-scores 7.7, 4.8, -4.9 and -7.6 are observed for gene G4; z-scores 6.5 and -8.1 are observed for gene G5; z-scores 7.9, 5.0, 4, -8.6 and -8.9 are observed for gene G6; z-scores 4.6, -5.6 and -9.0 are observed for gene G7; and z-scores 5.3, -4.1 and -4.8 are observed for gene G8. These observations of clear discordance suggest that, in this pathway, some genes may behave clearly differently among different data sets. Furthermore, five out of nine genes show these clear discordant behaviors. If we only expect about 30% of genes with such behaviors, then this proportion is obviously large (>50%). An exploration of pathways (or gene sets) enriched in clear discordance will enable us to further understand the molecular mechanisms of complex diseases. Table 1 An artificial example for discordance illustration Pancreatic cancer-related studies are important in public health [21]. Recently, gene PNLIP (pancreatic lipase) has been shown to be associated with the pancreatic cancer survival rate [22].
A paired two-sample microarray genome-wide expression data set has been collected for studying pancreatic cancer [23]. One advantage of this paired design is that we can focus on the expression ratio between tumor and non-tumor tissues for each gene. One related biological motivation is to use the genome-wide expression data set to understand molecular changes related to the change of expression ratio of gene PNLIP. In this study, more specifically, our interest is to identify pathways or gene sets showing clearly discordant behavior when the expression ratio of gene PNLIP changes. Understanding these molecular changes can help us further investigate the role of gene PNLIP and even the general disease mechanism of pancreatic cancer. Gene expression profiles are measured as continuous variables; however, if we can perform this analysis with a relatively simple method, the results become more interpretable. Therefore, our approach is to divide the microarray data set into a series of non-overlapping subsets according to the tumor/non-tumor paired expression ratio of gene PNLIP. The log-ratio ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). Our purpose is to understand whether any gene sets are enriched in discordant behaviors among these subsets (when the log-ratio is increased from negative to positive). Notice that we only use the expression ratio of gene PNLIP to divide the study data set; we do not consider the expression profiles of other genes for the data division. There is no analysis optimization in the data division, and this strategy avoids introducing selection bias into our analysis. The number of study subjects in the microarray data set is adequate for dividing it into many subsets (e.g. more than five), so that the biological changes can be better explored.
After dividing the study data set into K non-overlapping subsets, we can perform a genome-wide differential expression analysis for each subset. Genes can be generally categorized as up-regulated (positively differentially expressed), down-regulated (negatively differentially expressed) or null (non-differentially expressed). Genes may show concordant behaviors or discordant behaviors among different subsets. For example, showing positive differential expression in all K subsets is clearly a concordant behavior, and showing negative differential expression in the first subset but positive differential expression in the last subset is clearly a discordant behavior. In a genome-wide differential expression analysis, we usually calculate the test scores based on a chosen statistic (e.g. t-test) to evaluate whether genes are differentially expressed or not. For simplicity, we choose the well-known two-sample t-test. A strong positive or negative differential expression would result in a clearly positive or negative test score. A non-differential expression would result in a test score close to zero; the test score could be either positive or negative (but rarely exactly zero). Therefore, if a gene is concordantly differentially expressed (e.g. all up-regulated with clearly positive test scores) in some subsets but not differentially expressed (e.g. all null with slightly positive test scores) in the other subsets, then it can be statistically difficult to evaluate whether the gene has an overall discordant behavior. For this reason, in this study, we focus on genes with some clearly discordant behaviors: up-regulated in at least one subset and down-regulated in at least one subset (avoiding the statistical difficulty mentioned above). We are interested in identifying pathways or gene sets enriched in clearly discordant behaviors. We focus on KEGG pathways.
The detected pathways will be useful for our further understanding of the role of gene PNLIP in pancreatic cancer research. Gene TP53 is well known for its role as a tumor suppressor in general cancer studies. Its log-ratio in the microarray data set also ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). We also divide the microarray data set according to the expression ratio of gene TP53 and repeat the discordance enrichment analysis. We consider the analysis result based on gene TP53 a useful comparison with the analysis result based on gene PNLIP.

Multiple data sets

In this study, we consider the detection of gene set enrichment in discordant behaviors (or discordance gene set enrichment) for a series of two-sample genome-wide expression data sets. The term "enrichment in discordant behaviors" will be mathematically defined later. Let K be the number of data sets and let m be the number of common genes among this series of data sets. Each data set is collected for two given groups (the same for all K data sets). In general, one group represents a normal status and the other group represents an abnormal status. For a single two-sample genome-wide expression data set, differential expression analysis and gene set enrichment analysis are usually conducted. The purpose of differential expression analysis is to identify genes showing significant up-regulation or down-regulation when the two sample groups are compared. The purpose of gene set enrichment analysis is to identify pathways (or gene sets) showing coordinated up-regulation or down-regulation, which may be considered an extension of differential expression analysis. Therefore, the following gene behaviors are usually of research interest in two-sample expression data analysis: positive change (or up-regulation), negative change (or down-regulation) and null (or non-differentially expressed).
However, these underlying behaviors are usually not observed, and expression data are collected to make statistical inference about them. Data pre-processing is important for both microarray and RNA-seq data and has been well discussed in the literature [24–26]. In our study, the data can be downloaded from a well-known public database. We assume that the gene expression profiles have been appropriately pre-processed. In an analysis of multiple expression data sets, it is usually necessary to focus on common genes, and gene identifiers can be useful for this purpose. In our study, we divide a relatively large data set into a series of non-overlapping subsets; therefore, all the genes in the downloaded data are common. Many statistical tests have been proposed for analyzing a two-sample genome-wide expression data set [27, 28]. In this study, the traditional paired two-sample t-test is chosen for its simplicity (although other statistics could certainly be considered, see below). For each gene in each data set (or subset), we perform the t-test to obtain a t-score. Its p-value is evaluated based on the permutation procedure (randomly switching the tumor/non-tumor labels for each pair of tissues) so that no normal distribution assumption is required for the paired-difference data. All the permuted t-scores are pooled together so that tiny p-values can be calculated [29]. One-sided upper-tailed p-values are calculated so that the direction of change can be distinguished for each gene in each data set. Let p_{i,k} be the p-value for the i-th gene in the k-th data set. z-scores are obtained by an inverse normal transformation $$z_{i,k} = \Phi^{-1}(1 - p_{i,k}), $$ where Φ(·) is the cumulative distribution function (c.d.f.) of the standard normal distribution (mean zero and variance one), so that a small upper-tailed p-value (up-regulation) corresponds to a positive z-score. This transformation has been widely used [20], and our proposed multivariate normal mixture model will be applied to the transformed z-scores.
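The inverse normal transformation can be computed with the Python standard library alone. A small sketch, using \( \Phi^{-1}(1-p) \) so that a small upper-tailed p-value (evidence of up-regulation) maps to a positive z-score, matching the sign convention used for Table 1:

```python
from statistics import NormalDist

def z_scores(upper_tailed_p):
    """Inverse-normal transform of one-sided upper-tailed p-values.

    z = Phi^{-1}(1 - p): strong up-regulation (tiny p) gives a large
    positive z; strong down-regulation (p near 1) gives a large
    negative z.
    """
    inv = NormalDist().inv_cdf  # quantile function of N(0, 1)
    return [inv(1.0 - p) for p in upper_tailed_p]
```

For instance, a p-value of 0.5 (no evidence of change) maps to z = 0, while p = 0.025 maps to roughly z = 1.96.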
Discordance enrichment

Our proposed method is a type of gene set enrichment analysis. As discussed by Lai et al. [15], we define "enrichment" as "the number of events of interest is larger than expected", and our "event of interest" in this study is "a list of clearly discordant behaviors" from a gene. If we know whether the expression profile of a gene is up-regulated (simplified as "up"), down-regulated ("down") or non-differentially expressed ("null") in a data set, then a list of concordant behaviors among K data sets for this gene could be (up, up, …, up), (down, down, …, down) or (null, null, …, null). In this study, we focus on lists with at least one "up" and at least one "down" among the K data sets. For example, a list like (down, up, up, …, up) is an event of interest but a list like (null, up, up, …, up) is not. The reason is that "down" and "up" can be visually distinguished by the negative ("-") and positive ("+") signs of z-scores, respectively. However, zero z-scores are rarely observed; therefore, it is less clear how to distinguish "null" from "up" (or "null" from "down"). Based on the expression profiles, we obtain z-scores to make statistical inference about genes' behaviors in each data set. To evaluate "discordance enrichment" as defined above, we consider a mixture model approach that allows us to estimate the probability of a behavior ("up", "down" or "null") and the expected number of events of interest (notice that these are not directly observed in the data sets). Let S be the set of genes for a pathway (or gene set in general) and m_S the number of genes in S. If the i-th gene in S shows a list of clearly discordant behaviors, then we set an indicator variable U_{S,i} = 1; otherwise, we set U_{S,i} = 0.
Then, we can calculate the discordance enrichment score (DES) for gene set S, which is a probability defined as $$DES_{S} = \mathbf{Pr}\left(\sum_{i=1}^{m_{S}} U_{S,i} > m_{S} \theta\right), $$ in which θ is the proportion of genes with clearly discordant behaviors. In our mixture model, we use normal distributions to model the z-scores. A novel contribution is that the parameter space of our model increases linearly when the number of data sets is increased. This is due to the two-level structure of our model. (The parameter space of a general model for this analysis increases exponentially when the number of data sets is increased.) For each gene in each data set, we consider three normal distribution components that represent up-regulation (positive distribution mean), down-regulation (negative distribution mean) and null (zero mean). (Theoretically, p-values under the null hypothesis are uniformly distributed; therefore, z-scores under the null hypothesis are normally distributed with mean zero and variance one.) The mathematical details are described below.

A two-level mixture model

First, we describe the basic model structure for just one data set. Then, we introduce our novel two-level mixture model. A simple three-component normal distribution mixture model [30, 31] is considered for each z-score \(z_{i,k}\) (the i-th gene in the k-th data set, i=1,2,…,m and k=1,2,…,K): $$f(z_{i,k}) = \sum_{j_{k}=0}^{2} \rho_{j_{k},k} \phi_{\mu_{j_{k},k}, \sigma^{2}_{j_{k},k}}(z_{i,k}). $$ In the above model, \(\phi_{\mu, \sigma^{2}}(\cdot)\) is the probability density function (p.d.f.) of a normal distribution with mean μ and variance σ2. The three components represent up-regulation with \(\mu_{1,k}>0\), down-regulation with \(\mu_{2,k}<0\) and null with \(\mu_{0,k}=0\) (also recall that \(\sigma^{2}_{0,k}=1\)). For this model, an assumption is that the p.d.f.
of \(z_{i,k}\) is simply \(\phi_{\mu_{j_{k},k}, \sigma^{2}_{j_{k},k}}(z_{i,k})\) if we know the underlying component information \(j_{k}\) for the i-th gene in the k-th data set. However, the component information is usually not observed in practice. We then have this one-dimensional mixture model after the introduction of the component proportion parameters \(\left\{ \rho_{j_{k},k}, j_{k}=0,1,2 \right\}\) for the k-th data set. When we extend the above mixture model to a higher dimension (i.e. K data sets) without a structure consideration, the parameter space increases exponentially due to the 3^K different component combinations (3 components in each of K data sets). Therefore, when K is not a small number (i.e. K>4), we need a more efficient model [15]. Biologically, when different data sets are collected for the same or similar research purpose, some genes are likely to show consistent behaviors across the data sets and some genes are likely to show different behaviors. For genes likely showing consistent behaviors across the K data sets, we consider a complete concordance (CC) multivariate model to approximate the distribution of \(\{z_{i,k}, k=1,2,\ldots,K\}\). For genes likely showing different behaviors across the K data sets, we consider a complete independence (CI) multivariate model to approximate the distribution of \(\{z_{i,k}, k=1,2,\ldots,K\}\). (Notice that there is no overlap among the multiple data sets; if the component information among these data sets is known, then the z-scores are independent.) We first describe the CI and CC models below. The CI model assumes that the behaviors of the i-th gene are independent across different data sets. Therefore, we have the following mixture model: $$f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) = \prod_{k=1}^{K}\left[\sum_{j_{k}=0}^{2} \rho_{j_{k},k} \phi_{\mu_{j_{k},k},\sigma^{2}_{j_{k},k}}(z_{i,k})\right]. $$ This model is simply a product of K one-dimensional three-component mixture models.
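The CI density above, a product of K independent one-dimensional three-component mixtures, can be transcribed directly. A minimal sketch; the mixture parameters passed in any example call are placeholders, not estimates from the pancreatic cancer data:

```python
import math

def normal_pdf(z, mu, var):
    """p.d.f. of a normal distribution with mean mu and variance var."""
    return math.exp(-(z - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def f_1d(z, rho, mu, var):
    """One-dimensional three-component mixture: rho[j], mu[j], var[j]
    for components j = 0 (null), 1 (up), 2 (down)."""
    return sum(r * normal_pdf(z, m, v) for r, m, v in zip(rho, mu, var))

def f_ci(z_vec, rho, mu, var):
    """Complete-independence (CI) density: product over the K data sets.
    rho, mu and var are lists of K three-element lists."""
    out = 1.0
    for k, z in enumerate(z_vec):
        out *= f_1d(z, rho[k], mu[k], var[k])
    return out
```

With all weight on the null component (rho[k] = [1, 0, 0]) and K = 2, the density at (0, 0) reduces to the product of two standard-normal densities, 1/(2π).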
The CC model assumes that the behaviors of the i-th gene are the same across different data sets. Although the component information is unknown, the components for the different data sets must be consistent. Therefore, we have the following mixture model: $$f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})=\sum_{j=0}^{2}\left[\pi_{j} \prod_{k=1}^{K}\phi_{\mu_{j,k},\sigma_{j,k}^{2}}(z_{i,k})\right]. $$ This model has three components, and each component is a product of K normal probability density functions. In practice, it is unknown whether the i-th gene shows independent or consistent behaviors. Therefore, we consider CI and CC as two high-level components and propose the following two-level model for \(\{z_{i,k}, k=1,2,\ldots,K\}\): $$\begin{aligned} f(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) &= \lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) \\ & \quad+ (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}). \end{aligned} $$ Notice that this two-level model is still a mixture model. We further assume that \(\{ \mu_{j_{k},k}, \sigma_{j_{k},k}^{2}, j_{k}=0,1,2, k=1,2,\ldots,K \}\) are shared by both the CI and CC models. It is evident that the model parameter space increases linearly when the number of data sets (K) increases. We can use the well-established Expectation-Maximization (E-M) algorithm [19] for parameter estimation. First, it is necessary to introduce some indicator variables (for the component information) for the z-scores \(\{z_{i,k}, k=1,2,\ldots,K\}\) of the i-th gene. Then, we describe the E-step and M-step. For the high-level component information, $${}\begin{aligned} \omega_{i}=\left\{ \begin{array}{ll} 1 &\text{if gene's behaviors are consistent with CC model;}\\ 0 &\text{if gene's behaviors are consistent with CI model.} \end{array} \right. \end{aligned} $$ For the CI model component information, $${}\begin{aligned} \eta_{i,j_{k},k}=\left\{ \begin{array}{ll} 1 &\text{if \(z_{i,k}\) is sampled from the \(j_{k}\)-th component;}\\ 0 &\text{otherwise.} \end{array} \right.
\end{aligned} $$ For CC model component information, $${}{\begin{aligned} \xi_{i,j}=\left\{ \begin{array}{ll} 1 &\text{if all}\, \{ z_{i,k}, k=1,2,\ldots,K \}\, \text{are sampled from the}\, j\text{-th component;}\\ 0 &\text{otherwise.} \end{array} \right. \end{aligned}} $$ The E-step is the calculation of the following expected values when all the parameter values are given. $${}{\begin{aligned} {\mathrm{E}}(\omega_{i})& = &\frac{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) + (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}, \end{aligned}} $$ $${}{\begin{aligned} {\mathrm{E}}((1&-\omega_{i}) \eta_{i,j_{k},k})\\ & = \frac{(1-\lambda)\rho_{j_{k},k}\phi_{\mu_{j_{k},k},\sigma_{j_{k},k}^{2}}(z_{i,k}) \prod_{h=1,h\neq k}^{K}\sum_{j_{h}=0}^{2}\rho_{j_{h},h}\phi_{\mu_{j_{h},h},\sigma_{j_{h},h}^{2}}(z_{i,h})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) + (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}. \end{aligned}} $$ $${}\begin{aligned} {\mathrm{E}}&(\omega_{i} \xi_{i,j})\\ &= \frac{\lambda \pi_{j} \prod_{k=1}^{K} \phi_{\mu_{j,k},\sigma_{j,k}^{2}}(z_{i,k})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})\! +\! 
(1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}, \end{aligned} $$ The M-step is the calculation of the following parameter values when all the component information is given: $$\begin{array}{@{}rcl@{}} \hat{\lambda}&=&\frac{1}{m}\sum_{i=1}^{m}{\mathrm{E}}(\omega_{i}), \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\rho}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}{\mathrm{E}}\left((1-\omega_{i}) \eta_{i,j_{k},k}\right)}{\sum_{i=1}^{m}\sum_{j_{h}=0}^{2}{\mathrm{E}} \left((1-\omega_{i}) \eta_{i,j_{h},k}\right)}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\pi}_{j}&=&\frac{\sum_{i=1}^{m}{\mathrm{E}}(\omega_{i} \xi_{i,j})}{\sum_{i=1}^{m}\sum_{h=0}^{2}{\mathrm{E}}(\omega_{i} \xi_{i,h})}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\mu}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}\left[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})\right] z_{i,k}}{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})]}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\sigma}^{2}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})] (z_{i,k} - \hat{\mu}_{j_{k},k})^{2}}{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})]}. \end{array} $$ The E-step and M-step are iterated until numerical convergence is achieved. In this study, numerical convergence is defined as the difference between the current log-likelihood and the previous one being within a given tolerance value (e.g. 10^{-4}).

Enrichment score

As we have discussed in Discordance enrichment, in this study we focus on genes' behaviors with at least one up-regulation and at least one down-regulation among the K data sets (our event of interest: a gene with clearly discordant behaviors). However, we do not need to enumerate all these combinations (among 3^K in total); the related computing can be simplified if we enumerate the complement events instead.
There are three combinations for complete concordance: (up, up, …, up), (down, down, …, down) and (null, null, …, null); they will be excluded. There are \(\sum_{l=1}^{K-1}{K \choose l}\) combinations with both "null" and "up" (without "down"), and \(\sum_{l=1}^{K-1}{K \choose l}\) combinations with both "null" and "down" (without "up"); they will also be excluded. The remaining combinations are then our events of interest (at least one "up" and at least one "down"). According to the above computing strategy, based on the two-level mixture model, the related proportion (θ) of genes with clearly discordant behaviors (also see "Discordance enrichment" for more details) can be calculated as follows. $$\theta = (1-\lambda)\left(1 - \sum_{j=0}^{2} \prod_{k=1}^{K} \rho_{j,k} - \sum_{\{j_{k}\} \in A} \prod_{k=1}^{K} \rho_{j_{k},k} - \sum_{\{j_{k}\} \in B} \prod_{k=1}^{K} \rho_{j_{k},k} \right), $$ where A is the set of lists with a mix of 0's and 2's, and B is the set of lists with a mix of 0's and 1's. Let S be a gene set with m_S genes. As defined in Discordance enrichment, let the indicator variable U_{S,i} = 1 if the i-th gene in S shows a list of clearly discordant behaviors, and U_{S,i} = 0 otherwise. Then, based on the two-level mixture model, the related probability can be calculated as follows. $${}\begin{aligned} \mathbf{Pr}(U_{S,i}=1) &= (1-\lambda)[ f_{CI}(z_{S,i,1}, z_{S,i,2}, \ldots, z_{S,i,K}) \\ & \quad- \sum_{j=0}^{2} \prod_{k=1}^{K} \rho_{j,k} \phi_{\mu_{j,k},\sigma^{2}_{j,k}}(z_{S,i,k}) \\ & \quad- \sum_{\{j_{k}\} \in A \cup B} \prod_{k=1}^{K} \rho_{j_{k},k} \phi_{\mu_{j_{k},k},\sigma^{2}_{j_{k},k}}(z_{S,i,k}) ] \\ & \quad / f(z_{S,i,1}, z_{S,i,2}, \ldots, z_{S,i,K}), \end{aligned} $$ where \((z_{S,i,1}, z_{S,i,2}, \ldots, z_{S,i,K})\) are the related z-scores. Let \(\zeta_{S,i} = \mathbf{Pr}(U_{S,i}=1)\), which is a conditional probability given the model and the observed data.
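The proportion θ can be checked numerically: a brute-force enumeration of all 3^K component lists, keeping only those with at least one "up" and at least one "down", must agree with an inclusion-exclusion shortcut, 1 − P(no up) − P(no down) + P(all null). A sketch with placeholder proportions; component indices follow the text (0 = null, 1 = up, 2 = down):

```python
from itertools import product

def theta_enumeration(lam, rho):
    """theta by enumerating all 3**K component lists and summing the
    CI-model probability of lists with at least one up (1) and at
    least one down (2). rho[k][j] is the proportion of component j
    in data set k; lam is the CC-model weight."""
    total = 0.0
    for combo in product(range(3), repeat=len(rho)):
        if 1 in combo and 2 in combo:
            p = 1.0
            for k, j in enumerate(combo):
                p *= rho[k][j]
            total += p
    return (1.0 - lam) * total

def theta_closed_form(lam, rho):
    """Same quantity via inclusion-exclusion:
    1 - P(no up) - P(no down) + P(all null)."""
    no_up = no_down = all_null = 1.0
    for r in rho:
        no_up *= r[0] + r[2]
        no_down *= r[0] + r[1]
        all_null *= r[0]
    return (1.0 - lam) * (1.0 - no_up - no_down + all_null)
```

The closed form avoids the 3^K enumeration entirely, which matters once K grows beyond a handful of data sets.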
Under the assumption that z-scores from different genes are independent, the discordance enrichment score (DES) for gene set S, which has been defined in Discordance enrichment as \(DES_{S} = \mathbf{Pr}\left(\sum_{i=1}^{m_{S}} U_{S,i} > m_{S} \theta\right)\), can be calculated as follows. $$DES_{S} = \sum_{U_{S,1}=0}^{1} \sum_{U_{S,2}=0}^{1} \cdots \sum_{U_{S,m_{S}}=0}^{1} \left[ I\left(\sum_{i=1}^{m_{S}} U_{S,i} > m_{S} \theta\right) \prod_{i=1}^{m_{S}} \zeta_{S,i}^{U_{S,i}} (1-\zeta_{S,i})^{1-U_{S,i}} \right], $$ where I(true statement)=1 and I(false statement)=0 (the indicator function). Since \(\{\zeta_{S,i}, i=1,2,\ldots,m_{S}\}\) are usually different for different genes, the above formula is the calculation of a tail probability for a heterogeneous Bernoulli process. The related computing issue and the related false discovery rate have already been discussed by Lai et al. [15]; therefore, we describe them only briefly below.

False discovery rate

As discussed in the literature [15, 20], the above enrichment score is a conditional probability and a true positive proportion for gene set S. Therefore, the related false discovery rate [6, 18] for the top T gene sets \(\{S_{1}, S_{2}, \ldots, S_{T}\}\) identified by the above DES can be conveniently derived as $$FDR = 1 - \sum_{t=1}^{T} DES_{S_{t}}/T. $$

Computational approximation

As discussed in Lai et al. [15], the exact calculation of DES can be difficult due to the complexity of the heterogeneous Bernoulli process. A Monte Carlo approximation has been suggested as follows. First, set an integer variable X=0. For the i-th gene in S, simulate a Bernoulli random variable with event probability \(\zeta_{S,i}\). Then, count the number of events from all genes in S, and increase X by one if this number is larger than \(m_{S}\theta\). Repeat the simulation and counting B times and report X/B as the approximated DES. B=2000 was suggested by Lai et al. [15].
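The Monte Carlo procedure above is straightforward to sketch; the per-gene probabilities ζ and the proportion θ used in any example call are placeholders, not values estimated from the data:

```python
import random

def des_monte_carlo(zeta, theta, B=2000, seed=0):
    """Monte Carlo approximation of DES_S = Pr(sum_i U_{S,i} > m_S*theta)
    for a heterogeneous Bernoulli process.

    zeta : list of per-gene discordance probabilities zeta_{S,i}
    theta: genome-wide proportion of clearly discordant genes
    B    : number of Monte Carlo repetitions (2000 suggested in [15])
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    threshold = len(zeta) * theta
    x = 0
    for _ in range(B):
        # one Bernoulli draw per gene; count the events in this replicate
        hits = sum(1 for z in zeta if rng.random() < z)
        if hits > threshold:
            x += 1
    return x / B
```

Each replicate draws one Bernoulli variable per gene, so the cost is O(B·m_S), independent of the 2^{m_S} terms in the exact sum.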
Genome-wide expression data and KEGG pathway collection Zhang et al. [23] recently conducted a genome-wide expression study for forty-five matched pairs of pancreatic tumor and adjacent non-tumor tissues. The data were collected by microarray technology (Affymetrix GeneChip Human Gene 1.0 ST arrays) and were made publicly available in the NCBI GEO database [23]. The collections of gene sets or pathways can be downloaded from the Molecular Signatures Database [7, 8]. At the time of study, the collections had been updated to version 4.0. In this study, we focus on 186 KEGG pathways for our data analysis. There are 28677 genes available for our discordance enrichment analysis. As we have explained in the Methods, we expect to identify pathways with enrichment in clearly discordant gene behaviors among a series of pre-defined genome-wide expression data sets. (Notice that a pathway with DES∼1 is significantly enriched in clearly discordant behaviors, and a pathway with DES∼0 is evidently not enriched in clearly discordant behaviors.) Data division based on gene PNLIP The hierarchical clustering tree (with Euclidean distance and the "median" agglomeration method) for the log2-transformed ratio values of gene PNLIP is included in Fig. 1 a. Several major clusters of subjects, plus a few isolated subjects, can be generated if we cut the tree at 0.15. After including these isolated subjects in their nearby clusters, we can obtain seven clusters (subgroups of tumor/non-tumor pairs). Therefore, seven subsets of genome-wide expression data were defined accordingly with sample sizes 7+7, 7+7, 6+6, 4+4, 6+6, 9+9, or 6+6 (see Fig. 2 a). Figure 3 b shows the paired expression ratio values of gene PNLIP [log2-transformation applied here for the convenience of visualization of up-regulation (positive sign) or down-regulation (negative sign)]. Figure 3 a shows the individual expression values for gene PNLIP in different subsets. Notice that, from Fig.
3 b, subset 1 represents a clear down-regulation of gene PNLIP, and subsets 6 and 7 represent null and up-regulation of gene PNLIP, respectively. Hierarchical clustering for data division. a Tree of paired-ratio values (log2-transformed) of gene PNLIP. b Tree of paired-ratio values (log2-transformed) of gene TP53 Comparison of expression and paired-ratio between gene TP53 vs. gene PNLIP. a Comparison of paired-ratio values (log2-transformed). Gray dotted lines represent the cutoff values for defining subsets. b Comparison of expression values for non-tumor tissues. c Comparison of expression values for tumor tissues Expression and paired-ratio of gene PNLIP. a Expression values for tissues in seven subsets (gray color represents non-tumor and dark color represents tumor). b Paired-ratio values (log2-transformed) in seven subsets (gray dotted vertical lines for their separation) z-Scores based on gene PNLIP Figure 4 shows pair-wise scatterplots for comparing z-scores from the seven subsets defined by the paired-ratio of gene PNLIP. Most scatterplots for adjacent or close-to-adjacent subsets show a relatively regular positive correlation pattern (implying overall consistent gene behaviors). The scatterplots for far-from-adjacent subsets mostly show an irregular weak correlation pattern (implying a considerable amount of inconsistent gene behaviors). As mentioned above, subsets 1, 6 and 7 are representative of down-regulation, null and up-regulation of gene PNLIP, respectively. It is clear that the scatterplot for subsets 7 vs. 1 shows the most irregular pattern, which implies that many genes have clearly discordant behaviors when gene PNLIP changes its behavior from down-regulation to up-regulation. z-score comparison (gene PNLIP).
Pair-wise scatterplots for comparing z-scores from seven subsets defined by the paired-ratio of gene PNLIP Significant pathways based on gene PNLIP Table 2 lists the significant KEGG pathways identified by the discordance enrichment analysis (with DES>0.80; the related maximum FDR is <0.05). Among these eleven pathways, there are the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways. The literature support for the association between pancreatic cancer and each of these pathways will be discussed later. For the olfactory transduction and neuroactive ligand receptor interaction pathways, Fig. 5 shows their z-score pattern changes when all the adjacent subsets are pair-wisely compared and three representative subsets (1, 6, 7, see above for their details) are also pair-wisely compared. For the pairs of subsets 2 vs. 1 and 3 vs. 2, concordant behaviors can be overall observed for the genes in these two pathways. Discordant behaviors can be overall observed for the pairs 6 vs. 5, 7 vs. 6, 6 vs. 1 and 7 vs. 1. Particularly for the pair 7 vs. 1 (up-regulation vs. down-regulation for gene PNLIP), the genes in the olfactory transduction pathway are mostly down-regulated in subset 1 but evenly up-regulated or down-regulated in subset 7, and the genes in the neuroactive ligand receptor interaction pathway are almost evenly up-regulated or down-regulated in both subsets. z-score comparison (gene TP53). Pair-wise scatterplots for comparing z-scores from six subsets defined by the paired-ratio of gene TP53 Table 2 Pathways identified by the discordance enrichment analysis Data division based on gene TP53 The hierarchical clustering tree (with Euclidean distance and the "median" agglomeration method) for the log2-transformed ratio values of gene TP53 is included in Fig. 1 b. Several major clusters of subjects can be generated if we cut the tree at 0.03.
After including these isolated subjects into their nearest clusters, we can obtain six clusters (subgroups of tumor/non-tumor pairs). Therefore, six subsets of genome-wide expression data were defined accordingly with sample sizes 4+4, 7+7, 6+6, 13+13, 10+10, or 5+5 (see Fig. 2 a). Figure 6 b shows the paired expression ratio values of gene TP53 [log2-transformation applied here for the convenience of visualization of up-regulation (positive sign) or down-regulation (negative sign)]. Figure 6 a shows the individual expression values for gene TP53 in different subsets. Notice that, from Fig. 6 b, subset 1 represents a clear down-regulation of gene TP53, and subsets 3 and 6 represent null and up-regulation of gene TP53, respectively. Expression and paired-ratio of gene TP53. a Expression values for tissues in six subsets (gray color represents non-tumor and dark color represents tumor). b Paired-ratio values (log2-transformed) in six subsets (gray dotted vertical lines for their separation) z-Scores based on gene TP53 Figure 7 shows pair-wise scatterplots for comparing z-scores from the six subsets defined by the paired-ratio of gene TP53. Many scatterplots for adjacent or close-to-adjacent subsets still show a relatively regular positive correlation pattern (implying overall consistent gene behaviors). Almost all the scatterplots for far-from-adjacent subsets show an irregular weak correlation pattern (implying a considerable amount of inconsistent gene behaviors). As mentioned above, subsets 1, 3 and 6 are representative of down-regulation, null and up-regulation of gene TP53, respectively. All the pair-wise scatterplots for these three subsets show irregular patterns (with the scatterplot for subsets 6 vs. 1 being the most irregular), which implies that many genes have clearly discordant behaviors when gene TP53 changes its behavior from down-regulation to null, and then to up-regulation.
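The data division step described above (Euclidean distance, "median" agglomeration, tree cut at a fixed height) can be sketched with SciPy. The ratio values here are simulated placeholders, not the paper's PNLIP or TP53 data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# hypothetical log2 tumor/non-tumor expression ratios for one gene
# across 45 matched pairs (placeholder for the real ratio values)
ratios = rng.normal(loc=0.0, scale=0.5, size=(45, 1))

# "median" agglomeration on Euclidean distances, as described in the text
Z = linkage(ratios, method="median", metric="euclidean")

# cut the tree at a fixed height to obtain subgroups of tumor/non-tumor pairs
labels = fcluster(Z, t=0.15, criterion="distance")
n_clusters = len(set(labels))
```

As the authors describe, isolated subjects (singleton clusters) would then be merged by hand into their nearby clusters before the final subsets are fixed.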
z-scores in two most significantly detected pathways (gene PNLIP). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). All the adjacent subsets are pair-wisely compared (e.g. 2 vs. 1, 3 vs. 2, 4 vs. 3, 5 vs. 4, 6 vs. 5 and 7 vs. 6) and three representative subsets (1 for down-regulation, 6 for null, and 7 for up-regulation) are also pair-wisely compared (7 vs. 6 already shown, then 6 vs. 1 and 7 vs. 1). The order of scatterplots is shown as (a-p) Significant pathways based on gene TP53 Table 2 lists the significant KEGG pathways identified by the discordance enrichment analysis (with DES>0.80; the related maximum FDR is <0.10). Among these five pathways, there are the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways (which have been identified above by the analysis based on gene PNLIP). For the olfactory transduction and neuroactive ligand receptor interaction pathways, Fig. 8 shows their z-score pattern changes when all the adjacent subsets are pair-wisely compared and three representative subsets (1, 3, 6, see above for their details) are also pair-wisely compared. For the pairs of subsets 6 vs. 5, 5 vs. 4 and 4 vs. 3, concordant behaviors can be overall observed for the genes in these two pathways. Discordant behaviors can be overall observed for the pairs 2 vs. 1, 3 vs. 2, 3 vs. 1, 6 vs. 1 and 6 vs. 3. Particularly for the pair 6 vs. 1 (up-regulation vs. down-regulation for gene TP53), the genes in the olfactory transduction pathway are mostly down-regulated in subset 6 but evenly up-regulated or down-regulated in subset 1, and the genes in the neuroactive ligand receptor interaction pathway are somewhat evenly up-regulated or down-regulated in both subsets. z-scores in two most significantly detected pathways (gene TP53).
Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). All the adjacent subsets are pair-wisely compared (e.g. 2 vs. 1, 3 vs. 2, 4 vs. 3, 5 vs. 4, and 6 vs. 5) and three representative subsets (1 for down-regulation, 3 for null, and 6 for up-regulation) are also pair-wisely compared (3 vs. 1, 6 vs. 1 and 6 vs. 3). The order of scatterplots is shown as (a-p) Literature support We have conducted a discordance enrichment analysis based on gene PNLIP and a discordance enrichment analysis based on gene TP53. Between the two lists of identified pathways, there are four in common: the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways (see Table 2). To further understand these pathways, we have checked the related biomedical literature. The genome-wide expression data analyzed in this study were collected based on the microarray technology for RNA profiling. Genome-wide association study (GWAS) data have also been collected for pancreatic cancer research based on the microarray technology for DNA profiling (single nucleotide polymorphism, or SNP). Wei et al. [32] recently conducted a pathway analysis for a large GWAS data set on pancreatic cancer research. They reported only two pathways. Interestingly, these two pathways are the neuroactive ligand receptor interaction and olfactory transduction pathways (the top two identified from both of our analysis results, see above for details). Notice that their findings were based on a different type of molecular data. This strongly supports the discordance enrichment analysis results. We also found at least one supporting study for both the alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways. Wenger et al.
[33] conducted a study on the roles of alpha-linolenic acid (ALA) and linoleic acid (LA) on pancreatic cancer and they observed an association between the disease and these two fatty acids. Insignificant pathways Figure 9 shows the 186 DESs based on PNLIP vs. the DESs based on TP53. These two lists of DESs are highly correlated (Spearman's rank correlation 0.642), although some pathways identified in the analysis results based on PNLIP are not significant in the analysis results based on TP53. (Notice that a pathway with DES∼1 is significantly enriched in clearly discordant behaviors, and a pathway with DES∼0 is evidently not enriched in clearly discordant behaviors.) Only a small number of pathways were identified by the discordance enrichment analysis. The histograms in the figure show that most pathways have insignificant DESs. For each of the two analysis results, there are more than 140 pathways (among 186) with DES<0.05. The number of pathways with both DES<0.01 or both DES<0.05 is 111 (60%) or 138 (74%), respectively. For both DES<0.25, 0.5 or 0.75, there are 154 (83%), 164 (88%) or 173 (93%) pathways, respectively. Therefore, most pathways are evidently not enriched in clearly discordant behaviors among the series of subsets defined by the paired expression ratio of gene PNLIP; neither are they among the series of subsets defined by the paired expression ratio of gene TP53. Many disease related pathways have been listed by KEGG (http://www.genome.jp/kegg/pathway.html). The collection of pancreatic cancer related pathways (or KEGG pancreatic cancer) and the collection of cancer related pathways (or KEGG pathways in cancer) are not enriched in both analysis results (DES<0.001). Among the pathway components of these two collections (e.g.
cell cycle pathway, apoptosis pathway, etc.), the highest DES value is <0.01 for the PPAR signaling pathway from the analysis results based on PNLIP, and the highest DES value is <0.05 for the cytokine-cytokine receptor interaction pathway from the analysis results based on TP53. Pathways like hedgehog signaling, proteasome, and primary immunodeficiency are also showing low DES values (all <0.05). Comparison of DES between gene TP53 vs. gene PNLIP. (left, lower) Scatterplot of DES based on gene TP53 vs. DES based on gene PNLIP; notice that there are overlapped dots in the scatterplot. (left, upper) Histogram of DES based on gene TP53. (right, lower) Histogram of DES based on gene PNLIP Expression profiles of PNLIP vs. TP53 PNLIP is a gene recently shown to be associated with pancreatic cancer [22]. TP53 is a well-known tumor suppressor gene. From the above comparison, it is interesting that the discordance enrichment analysis results based on PNLIP are highly correlated with the discordance enrichment analysis results based on TP53. To further understand this correlation, we compared the expression profile of PNLIP with the expression profile of TP53. Figure 2 a shows a relatively weak negative correlation (Spearman's rank correlation -0.250) between the two lists of paired-ratios, but the correlation is not statistically significant (p-value=0.098). In the non-tumor group (Fig. 2 b), the negative correlation (Spearman's rank correlation -0.318) achieves a p-value of 0.033. In the tumor group (Fig. 2 c), the negative correlation (Spearman's rank correlation -0.276) is again not statistically significant (p-value=0.066). Furthermore, the ratio cutoff values for defining subsets were added to Fig. 2 a. A contingency table can be generated according to these grids (for example, the cell number is one for row one and column one in the table). The chi-square test for this sparse contingency table is not statistically significant (simulation-based p-value >0.3).
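For a sparse contingency table, the usual chi-square approximation is unreliable, so a simulation-based p-value (as reported above) is preferable. A minimal permutation-based sketch, with hypothetical category labels rather than the paper's actual subset assignments:

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic; cells with zero expected count are skipped."""
    table = np.asarray(table, dtype=float)
    expected = (table.sum(axis=1, keepdims=True)
                @ table.sum(axis=0, keepdims=True)) / table.sum()
    mask = expected > 0
    return ((table - expected)[mask] ** 2 / expected[mask]).sum()

def simulated_chi2_pvalue(x, y, B=2000, seed=0):
    """Monte Carlo p-value for independence of two categorical vectors:
    permute one vector and compare chi-square statistics."""
    rng = np.random.default_rng(seed)
    def crosstab(a, b):
        t = np.zeros((max(x) + 1, max(y) + 1))
        for i, j in zip(a, b):
            t[i, j] += 1
        return t
    obs = chi2_stat(crosstab(x, y))
    hits = sum(chi2_stat(crosstab(x, rng.permutation(y))) >= obs for _ in range(B))
    return (hits + 1) / (B + 1)

# hypothetical subset labels for 45 pairs (e.g. PNLIP-based vs. TP53-based)
rng = np.random.default_rng(1)
x = rng.integers(0, 7, size=45)   # seven PNLIP-defined subsets
y = rng.integers(0, 6, size=45)   # six TP53-defined subsets
p = simulated_chi2_pvalue(x, y)
```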
Therefore, in summary, gene PNLIP may be negatively associated with gene TP53, but no clear statistical significance has been observed in this study. Comparison to gene set analysis Efron and Tibshirani [34] have proposed a gene set analysis (GSA) method for analyzing enrichment in pathways (or gene sets). It was suggested by Maciejewski [35] that this method is preferred in a gene set enrichment analysis. In some situations of integrative data analysis, different data sets cannot be simply pooled together. For each data set, the p-value of enrichment in up-regulation can be obtained for each gene set. To integrate the p-values from multiple data sets (for the same gene set), we can consider Fisher's method (Fisher's combined probability test). The log-transformed p-values are summed up and then multiplied by -2; this statistic is well known to follow a chi-squared distribution under the null hypothesis. In this way, we can perform an integrative gene set enrichment analysis of multiple data sets (when different data sets cannot be pooled together). Gene sets (or pathways) can be ranked by their chi-squared p-values. (Similarly, the p-value of enrichment in down-regulation can also be obtained by GSA for each gene set and each data set. Then, the related chi-squared p-values can be calculated by Fisher's method.) Notice that our analysis purpose is to detect discordance enrichment among multiple data sets. However, the discordance feature is usually not considered in a traditional integrative analysis. In this study, our analysis results were based on several subsets divided from a genome-wide expression data set with a relatively large sample size. These subsets could be pooled back (to be the original large data set). Therefore, we applied GSA to the original data (so that we could take advantage of its relatively large sample size).
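Fisher's combination step described above can be sketched as follows. Under the null, -2 Σ log p follows a chi-squared distribution with 2K degrees of freedom, whose survival function has a closed form for even degrees of freedom; the p-values shown are hypothetical placeholders:

```python
import math

def chi2_sf_even_df(x, K):
    """Survival function of a chi-squared variable with 2K degrees of freedom
    (closed form, valid because the df is even)."""
    return math.exp(-x / 2.0) * sum((x / 2.0) ** i / math.factorial(i)
                                    for i in range(K))

def fisher_combine(pvalues):
    """Fisher's combined probability test:
    -2 * sum(log p) ~ chi-squared with 2K df under the null."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2_sf_even_df(stat, len(pvalues))

# hypothetical GSA up-regulation p-values for one gene set across K = 3 data sets
combined_p = fisher_combine([0.04, 0.20, 0.01])
```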
However, after considering the adjustment for multiple hypothesis testing, no pathways (or gene sets) could be identified even at the false discovery rate 0.3 (or FDR<30%). (Therefore, the details of the GSA results are not reported.) An application to The Cancer Genome Atlas (TCGA) data sets For a further illustration of our method, we performed a discordance enrichment analysis of the RNA sequencing (RNA-seq) data collected by The Cancer Genome Atlas (TCGA) project [3]. At the time of study, with the consideration of adequate numbers of normal/tumor subjects, we selected the RNA-seq data for studying prostate adenocarcinoma (PRAD), colon adenocarcinoma (COAD), stomach adenocarcinoma (STAD), head and neck squamous cell carcinoma (HNSC), thyroid carcinoma (THCA) and liver hepatocellular carcinoma (LIHC). Among these different types of diseases, we expected a certain level of dissimilarity in genome-wide expression profiles. Therefore, we applied our method to these six TCGA RNA-seq data sets (and our proposed two-level mixture model was useful to reduce the number of model parameters). Gene expression profiles for more than 20,000 common genes were available for our analysis. Among 186 KEGG pathways, we report the analysis results for a collection of cancer related pathways. There are sixteen of these pathways in KEGG, but fourteen of them are available in the Molecular Signatures Database [7, 8]. In Table 3, the discordance enrichment analysis results are also compared to the results based on GSA-based Fisher's method (see "Comparison to gene set analysis" for details). However, it is important to emphasize that the detection of discordance enrichment is our focus in this study and the feature of discordance is usually not considered in a traditional integrative analysis (e.g. Fisher's method).
Table 3 A comparison study Table 3 shows the comparison of our discordance enrichment scores (DES) to the p-values calculated by GSA-based Fisher's method (up-regulation or down-regulation). (A lower p-value indicates a more significant result, whereas a higher DES indicates a more significant result.) The p53 signaling pathway, cell cycle pathway, and PPAR signaling pathway are three pathways with significant GSA-Fisher p-values. For the p53 signaling pathway and cell cycle pathway, their DESs suggest low discordance among different types of diseases for these two well-known pathways. For the PPAR signaling pathway, its DES is also highly significant. Figure 10 shows a considerable amount of concordance as well as a considerable amount of discordance among different types of diseases for this pathway. With the consideration of either Bonferroni-type adjustment or FDR-type adjustment, no detection can be further observed based on GSA-based Fisher's method. However, our method identified a few pathways with significant discordance enrichment (DES>0.999), including the focal adhesion, MAPK signaling, VEGF signaling and apoptosis pathways. Figure 11 shows a considerable amount of discordance among different types of diseases for the well-known apoptosis pathway. Furthermore, the WNT signaling, adherens junction, MTOR signaling and TGF-beta signaling pathways are also showing high DESs, which suggests possible discordance enrichment for these pathways. z-scores in PPAR signaling pathway (TCGA data). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). x-Axis and y-axis represent z-scores for different types of diseases. The order of scatterplots is shown as (a-o) z-scores in apoptosis pathway (TCGA data). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). x-Axis and y-axis represent z-scores for different types of diseases.
The order of scatterplots is shown as (a-o) In this study, we suggested a discordance gene set enrichment analysis for a series of two-sample genome-wide expression data sets. To reduce the parameter space, we proposed a two-level multivariate normal distribution mixture model. Our model is statistically efficient, with a parameter space that increases only linearly as the number of data sets increases. Then, gene sets can be detected by the model-based probability of discordance enrichment. Based on our two-level model, if the proportion of the complete concordance component is high, then more genes behave concordantly among different data sets. Similarly, if the proportion of the complete independence component is high, then more genes behave discordantly among different data sets. In the complete concordance component (model), only completely concordant behaviors are considered: all "up," all "down" or all "null." Therefore, there are only three items j=0,1,2 for the outer summation term. For each completely concordant behavior, we have independence among different data sets. Statistically, conditional on an underlying completely concordant behavior (with probability \(\pi_{j}\)), we have an inner product term of probability density functions calculated based on different data sets. In the complete independence component (model), genes behave completely independently across different data sets, which is reflected in the outer product term. For each data set, the underlying behavior for each gene can be "up," "down" or "null." However, the behavior cannot be directly observed and the related probability density function is calculated based on a mixture model. Our method was applied to a microarray expression data set collected for pancreatic cancer research. The data were collected for forty-five matched tumor/non-tumor pairs of tissues.
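The two components just described can be written as one likelihood per gene. A minimal sketch in which, following the earlier formulas, λ weights the complete-concordance component and (1−λ) the complete-independence component, with \(\pi_j\) the concordant-behavior probabilities and \(\rho_{j,k}\) the per-data-set behavior probabilities; all parameter values below are hypothetical:

```python
import math

def normal_pdf(z, mu, var):
    return math.exp(-(z - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def two_level_density(z, lam, pi, rho, mu, var):
    """Mixture density of one gene's z-scores across K data sets:
    lam * (complete concordance) + (1 - lam) * (complete independence).
    pi[j]: probability of concordant behavior j (0 null, 1 down, 2 up);
    rho[j][k]: per-data-set behavior probabilities; mu/var: component params."""
    K = len(z)
    # outer sum over the three concordant behaviors, inner product over data sets
    f_cc = sum(pi[j] * math.prod(normal_pdf(z[k], mu[j][k], var[j][k])
                                 for k in range(K))
               for j in range(3))
    # outer product over data sets, inner three-component mixture per data set
    f_ci = math.prod(sum(rho[j][k] * normal_pdf(z[k], mu[j][k], var[j][k])
                         for j in range(3))
                     for k in range(K))
    return lam * f_cc + (1.0 - lam) * f_ci

# hypothetical parameters for K = 2 data sets
pi = [0.7, 0.15, 0.15]
rho = [[0.7, 0.6], [0.15, 0.2], [0.15, 0.2]]
mu = [[0.0, 0.0], [-2.0, -2.0], [2.0, 2.0]]
var = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
f = two_level_density([0.5, -0.3], lam=0.4, pi=pi, rho=rho, mu=mu, var=var)
```

Note that for K = 1 the two components coincide whenever π equals the single column of ρ, which is a convenient sanity check on an implementation.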
These pairs were first divided into seven subgroups for defining seven subsets of genome-wide expression data, according to the paired expression ratio of gene PNLIP. This gene was recently shown to be associated with pancreatic cancer. Our purpose was to understand discordance gene set enrichment when gene PNLIP changes its behavior from down-regulation to up-regulation. Among the few identified pathways, the neuroactive ligand receptor interaction and olfactory transduction pathways were the two most significant. The alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways were also among the list. To better understand these results, we again divided the original data with forty-five pairs of tumor/non-tumor tissues into six subsets, according to the paired expression ratio of gene TP53 (a well-known tumor suppressor gene). The above four pathways were also identified by the discordance gene set enrichment analysis, with the neuroactive ligand receptor interaction and olfactory transduction pathways still the two most significant. After our literature search, we found that these two pathways were the only two identified for their association with pancreatic cancer in a recent independent pathway analysis of genome-wide association study (GWAS) data. For the alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways, we found a previous study in which the association between pancreatic cancer and these two fatty acids (alpha-linolenic acid and linoleic acid) was observed. A few discordant behaviors from individual genes can be observed from Figs. 7 and 8. In Fig. 7 p, among the genes in the neuroactive ligand receptor interaction pathway (black dots), a gene with the most negative z-score in subset 1 has the most positive z-score in subset 7. This is a clear change from down-regulation to up-regulation. In Fig.
8 a-b, among the genes in the olfactory transduction pathway (black dots), a gene with the most positive z-score in subset 2 has a moderately positive z-score in subset 1, but its z-score in subset 3 is clearly negative. This is a clear change from up-regulation to down-regulation. We conducted a discordance gene set enrichment analysis based on gene PNLIP and a discordance gene set enrichment analysis based on gene TP53. Only a few among 186 KEGG pathways were identified. Most pathways (like the cancer and pancreatic cancer related pathways) were evidently not enriched in discordant gene behaviors. This suggests unique molecular roles of both genes PNLIP and TP53 in pancreatic cancer development. There were four pathways identified from both analysis results, and we found biomedical literature to support the association between pancreatic cancer and these pathways. Some pathways identified in one analysis were not identified in the other analysis. It is also biologically interesting to understand these pathways. It was biologically interesting to observe pathways with clearly discordant gene behaviors when the paired expression ratio of an important disease-related gene was changing. The analysis results in this study illustrated the usefulness of our proposed statistical method. Our method was developed based on z-scores that are statistical measures of differential expression, and many existing two-sample statistical tests could be used for generating z-scores. Therefore, in this study, we demonstrated our method based on a partition of a relatively large two-sample microarray data set as well as several two-sample genome-wide expression data sets collected by the recent RNA-seq technology. Our method is statistically novel for its two-level structure, which is developed based on a biological motivation (genes' behaviors among different data sets).
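Many two-sample statistics can serve as such per-gene z-scores. One simple hedged example is a Welch-type statistic treated as approximately standard normal for moderate sample sizes; this is an illustration, not necessarily the transform used in the paper:

```python
import math
from statistics import mean, variance

def two_sample_z(x, y):
    """Welch-type z statistic for one gene's two-sample comparison; for
    moderate sample sizes it is approximately N(0, 1) under the null
    hypothesis of no differential expression."""
    nx, ny = len(x), len(y)
    se = math.sqrt(variance(x) / nx + variance(y) / ny)
    return (mean(x) - mean(y)) / se

# hypothetical expression values for one gene (tumor vs. non-tumor)
z = two_sample_z([7.1, 7.9, 8.3, 7.5], [6.2, 6.8, 6.5, 6.9])
```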
Due to this two-level structure, the parameter space of our model increases linearly when the number of data sets is increased. Then, the parameter estimates can be statistically efficient. In our mixture model, conditional independence is the key to reducing the complexity of multivariate data analysis. For each gene, when the mixture component information is given for all the data sets, its z-scores are independent. (Notice that there is no overlap among multiple data sets.) Mathematical and computational convenience is achieved for our statistical model due to this unique feature. Our method is based on the well-established mixture model framework and the Expectation-Maximization (EM) algorithm for parameter estimation. One limitation is that the proposed three-component mixture model may not fit z-scores well for some data. This can be improved by considering more components in the mixture model. For example, instead of a simple consideration of down-regulation, null and up-regulation, we may consider more components like strong-down-regulation, weak-down-regulation, null, weak-up-regulation and strong-up-regulation. This will only proportionally increase the parameter space (still linear with the number of data sets for our two-level mixture model). It is also interesting to extend our method for more complicated analysis purposes. For example, we may be interested in identifying trend changes (monotonically increasing or decreasing) instead of general changes. Also, for example, we may have multiple data sets collected for different disease stages, but the data set for the normal/reference/control stage is not large enough to be divided and it has to be used repeatedly in two-sample comparisons (then z-scores are not even conditionally independent). For these situations, the extension of our method would require a considerable amount of research effort. Schena M, Shalon D, Davis RW, Brown PO.
Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995; 270:467–70. Lockhart D, Dong H, Byrne M, Follettie M, Gallo M, Chee M, Mittmann M, Wang C, Kobayashi M, Horton H, Brown E. Expression monitoring by hybridization to high-density oligonucleotide arrays. Nat Biotechnol. 1996; 14:1675–80. The Cancer Genome Atlas Research Network. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature. 2008; 455:1061–8. Nagalakshmi U, Wang Z, Waern K, Shou C, Raha D, Gerstein M, Snyder M. The transcriptional landscape of the yeast genome defined by RNA sequencing. Science. 2008; 320:1344–9. Wilhelm BT, Marguerat S, Watt S, Schubert F, Wood V, Goodhead I, Penkett CJ, Rogers J, Bahler J. Dynamic repertoire of a eukaryotic transcriptome surveyed at single-nucleotide resolution. Nature. 2008; 453:1239–43. Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Nat Acad Sci USA. 2003; 100:9440–5. Mootha VK, Lindgren CM, Eriksson KF, Subramanian A, Sihag S, Lehar J, Puigserver P, Carlsson E, Ridderstrale M, Laurila E, Houstis N, Daly MJ, Patterson N, Mesirov JP, Golub TR, Tamayo P, Spiegelman B, Lander ES, Hirschhorn JN, Altshuler D, Groop L. PGC-1 α-response genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes. Nat Genet. 2003; 34:267–73. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Nat Acad Sci USA. 2005; 102:15545–50. Edgar R, Barrett T. NCBI GEO standards and services for microarray data. Nat Biotechnol. 2006; 24:1471–2. de Magalhaes JP, Curado J, Church GM. Meta-analysis of age-related gene expression profiles identifies common signatures of aging. Bioinformatics. 2009; 25:875–81. Choi JK, Yu U, Kim S, Yoo OJ.
Combining multiple microarray studies and modeling interstudy variation. Bioinformatics. 2003; 19 Supplement 1:84–90. Tanner SW, Agarwal P. Gene vector analysis (Geneva): A unified method to detect differentially-regulated gene sets and similar microarray experiments. BMC Bioinforma. 2008; 9:348. Shen K, Tseng GC. Meta-analysis for pathway enrichment analysis when combining multiple genomic studies. Bioinformatics. 2010; 26:1316–23. Chen M, Zang M, Wang X, Xiao G. A powerful Bayesian meta-analysis method to integrate multiple gene set enrichment studies. Bioinformatics. 2013; 29:862–9. Lai Y, Zhang F, Nayak TK, Modarres R, Lee NH, McCaffrey TA. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets. BMC Genomics. 2014; 15 Suppl 1:6. Pang H, Zhao H. Stratified pathway analysis to identify gene sets associated with oral contraceptive use and breast cancer. Cancer Inform. 2014; 13 (Suppl 4):73–8. Jones AR, Troakes C, King A, Sahni V, De Jong S, Bossers K, Papouli E, Mirza M, Al-Sarraj S, Shaw CE, Shaw PJ, Kirby J, Veldink JH, Macklis JD, Powell JF, Al-Chalabi A. Stratified gene expression analysis identifies major amyotrophic lateral sclerosis genes. Neurobiol Aging. 2015; 36:2006–19. Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc Series B. 1995; 57:289–300. McLachlan GJ, Krishnan T. The EM Algorithm and Extensions, 2nd Edition. Hoboken, New Jersey, USA: John Wiley & Sons, Inc.; 2008. McLachlan GJ, Bean RW, Jones LB. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays. Bioinformatics. 2006; 22:1608–15. Brower V. Genomic research advances pancreatic cancer's early detection and treatment. J Nat Cancer Inst. 2015; 107:95. Zhang G, He P, Tan H, Budhu A, Gaedcke J, Ghadimi BM, Ried T, Yfantis HG, Lee DH, Maitra A, Hanna N, Alexander HR, Hussain SP.
Integration of metabolomics and transcriptomics revealed a fatty acid network exerting growth inhibitory effects in human pancreatic cancer. Clin Cancer Res. 2013; 19:4983–93. Zhang G, Schetter A, He P, Funamizu N, Gaedcke J, Ghadimi BM, Ried T, Hassan R, Yfantis HG, Lee DH, Lacy C, Maitra A, Hanna N, Alexander HR, Hussain SP. DPEP1 inhibits tumor cell invasiveness, enhances chemosensitivity and predicts clinical outcome in pancreatic ductal adenocarcinoma. PLoS One. 2012; 7:31507. Amaratunga D, Cabrera J. Exploration and Analysis of DNA Microarray and Protein Array Data. Hoboken, New Jersey, USA: John Wiley & Sons, Inc; 2003. Oshlack A, Robinson MD, Young MD. From RNA-seq reads to differential expression results. Genome Biol. 2010; 11:220. Zheng W, Chung LM, Zhao H. Bias detection and correction in RNA-sequencing data. BMC Bioinforma. 2011; 12:290. Tusher VG, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Nat Acad Sci USA. 2001; 98:5116–21. Smyth GK. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3:3. Dudoit S, Shaffer JP, Boldrick JC. Multiple hypothesis testing in microarray experiments. Stat Sci. 2003; 18:71–103. Lai Y, Adam BL, Podolsky R, She JX. A mixture model approach to the tests of concordance and discordance between two large scale experiments with two-sample groups. Bioinformatics. 2007; 23:1243–50. Lai Y, Eckenrode SE, She JX. A statistical framework for integrating two microarray data sets in differential expression analysis. BMC Bioinforma. 2009; 10 (Suppl. 1):23. Wei P, Tang H, Li D. Insights into pancreatic cancer etiology from pathway analysis of genome-wide association study data. PLoS One. 2012; 7:46887. Wenger FA, Kilian M, Jacobi CA, Schimke I, Guski H, Müller JM. 
Does alpha-linolenic acid in combination with linoleic acid influence liver metastasis and hepatic lipid peroxidation in bop-induced pancreatic cancer in syrian hamsters? Prostaglandins Leukot Essent Fatty Acids. 2000; 62:329–34. Efron B, Tibshirani R. On testing the significance of sets of genes. Ann Appl Stat. 2007; 1:107–29. Maciejewski H. Gene set analysis methods: statistical models and methodological differences. Brief Bioinforma. 2014; 15:504–18. This article has been published as part of BMC Genomics Volume 18 Supplement 1, 2016: Proceedings of the 27th International Conference on Genome Informatics: genomics. The full contents of the supplement are available online at http://bmcgenomics.biomedcentral.com/articles/supplements/volume-18-supplement-1. This work was partially supported by the NIH grant GM-092963 (Y.Lai). The publication costs were funded by the Department of Statistics at The George Washington University. YL conceived of the study, developed the methods, performed the statistical analysis, and drafted the manuscript; FZ developed the methods, performed the statistical analysis, and helped to draft the manuscript; TKN, RM, NHL and TAM helped to draft the manuscript. All authors read and approved the final manuscript. Department of Statistics, The George Washington University, 801 22nd St. N.W., Rome Hall, 7th Floor, Washington, 20052, D.C., USA Yinglei Lai, Fanni Zhang, Tapan K. Nayak & Reza Modarres Department of Pharmacology and Physiology, The George Washington University Medical Center, Washington, 20037, D.C., USA Norman H. Lee Department of Medicine, Division of Genomic Medicine, The George Washington University Medical Center, Washington, 20037, D.C., USA Timothy A. McCaffrey Yinglei Lai Fanni Zhang Tapan K. Nayak Reza Modarres Correspondence to Yinglei Lai. Lai, Y., Zhang, F., Nayak, T.K. et al. Detecting discordance enrichment among a series of two-sample genome-wide expression data sets. BMC Genomics 18 (Suppl 1), 1050 (2017). 
https://doi.org/10.1186/s12864-016-3265-2 Discordance Gene set enrichment Mixture models
Experimental and bioinformatics study for production of l-asparaginase from Bacillus licheniformis: a promising enzyme for medical application

Nada A. Abdelrazek, Walid F. Elkhatib, Marwa M. Raafat & Mohammad M. Aboulwafa

A Bacillus licheniformis isolate with high l-asparaginase productivity was recovered upon screening two hundred soil samples. This isolate produces the two types of bacterial l-asparaginases, the intracellular type I and the extracellular type II. The catalytic activity of the type II enzyme was much higher than that of type I and reached about 5.5 IU/ml/h. Bioinformatics analysis revealed that the l-asparaginase of Bacillus licheniformis clusters with those of Bacillus subtilis, Bacillus haloterans, Bacillus mojavensis and Bacillus tequilensis, while it exhibits distant relatedness to l-asparaginases of other Bacillus subtilis species as well as to those of Bacillus amyloliquefaciens and Bacillus velezensis. When Bacillus licheniformis l-asparaginase was compared with the two FDA-approved l-asparaginases of E. coli (marketed as Elspar) and Erwinia chrysanthemi (marketed as Erwinaze), it fell into a distinct cluster, with a number of validly predicted antigenic regions comparable to those of the two reference enzymes. It exhibited maximum activity at 40 °C, pH 8.6, 40 mM asparagine and 10 mM zinc sulphate, could withstand 500 mM NaCl, and retained 70% of its activity after 30 min exposure to 70 °C. The enzyme productivity of the isolate was improved by gamma irradiation and optimized by an RSM experimental design (Box–Behnken central composite design). The optimum conditions for maximum l-asparaginase production by the improved mutant were 39.57 °C, pH 7.39, 20.74 h, 196.40 rpm, 0.5% glucose, 0.1% ammonium chloride, and 10 mM magnesium sulphate.
Taken together, Bacillus licheniformis l-asparaginase can be considered a promising candidate for clinical application as an antileukemic agent. Enzymes play an important role in metabolic and biochemical reactions, and microorganisms are their primary source (Nigam 2013), as they can be cultured in large quantities in a short span of time (Anbu et al. 2013; Gopinath et al. 2013). l-Asparaginase is a therapeutic enzyme which has proved promising for the treatment of acute lymphocytic leukemia (Sinha et al. 2013). Unlike normal cells, malignant cells can only slowly synthesize l-asparagine, owing to their deficiency in l-asparagine synthetase. Depletion of the circulating pools of l-asparagine by l-asparaginase therefore leads to destruction of the tumor cells: unable to complete protein synthesis, they suffer inhibition of RNA and DNA synthesis with subsequent blast cell apoptosis (Bansal et al. 2012). l-Asparaginase has also been introduced into the pretreatment of potato slices and bread dough before frying or baking to prevent the formation of acrylamide, a carcinogenic toxicant (Krishnapura et al. 2016). In addition, this enzyme acts as a biosensor to detect the amount of asparagine in leukemia and in the food industry (Batool et al. 2016). The current study used bioinformatics and experimental approaches for the production and characterization of l-asparaginase from the recovered soil isolate, Bacillus licheniformis. The study gives evidence for introducing Bacillus licheniformis l-asparaginase as a potentially comparable and additional source to the two FDA-approved ones from E. coli (marketed under the brand name Elspar) and Erwinia chrysanthemi (marketed under the brand name Erwinaze) for use as an antileukemic agent. All chemicals were supplied, unless otherwise indicated, by El-Nasr chemicals ADWIC (Cairo, Egypt). l-Asparagine monohydrate was a product of AppliChem GmbH (Darmstadt, Germany).
Bacterial strain and maintenance The Bacillus licheniformis isolate was obtained by screening 722 soil isolates for l-asparaginase production. Isolation and qualitative detection of l-asparaginase production by recovered soil bacteria This was principally carried out according to Izadpanah et al. (2014). The method depends on the appearance of a pink zone around l-asparaginase-producing colonies on modified M9 agar medium containing 1% w/v asparagine and phenol red as an indicator. Inoculum preparation and l-asparaginase production The inoculum was prepared by inoculating 20 ml of modified M9 broth, contained in a 250 ml Erlenmeyer flask, with a single isolated colony. The flask was incubated at 37 °C and 180 rpm for 24 h. The broth culture obtained was diluted with fresh M9 broth medium to an O.D. of 1.0 at 600 nm for use as an inoculum. Enzyme production was carried out in 250 ml Erlenmeyer flasks; the flasks were inoculated with 2% v/v of the cell suspension (Mahajan et al. 2012) and incubated at 37 °C and 180 rpm for 24 h. An aliquot (2 ml) of the broth culture obtained was centrifuged at 4 °C and 5000 rpm for 20 min using a cooling centrifuge (Jain et al. 2012). The resulting supernatant was termed the crude enzyme preparation and used for the quantitative assay of extracellular l-asparaginase, while the pellets were lysed and tested for any intracellular enzyme activity. For the preparation of crude cell lysate, the cell pellets were washed twice with 50 mM Tris–HCl (pH 7.5) and suspended in 30 ml lysis buffer (Straight et al. 2007). Cells were then disrupted by sonication using a sonication probe under cooling at 4 °C. Cellular debris and unbroken cells were removed by centrifugation at 15,000 rpm and 4 °C for 15 min (Sakr et al. 2014), and the supernatant was collected for determination of intracellular enzyme activity. Quantitative assay of l-asparaginase l-Asparaginase activity was measured by the method described by Mashburn and Wriston (1963).
The assay depends on hydrolysis of l-asparagine by the enzyme preparation to release ammonia. One unit of l-asparaginase activity is defined as the amount of enzyme required for the release of one micromole of ammonia per hour at 37 °C and pH 8.6 (Mahajan et al. 2014). Identification of soil isolate with highest l-asparaginase productivity The isolate of the highest l-asparaginase productivity was identified by microscopical examination (Gram stain), biochemical reactions (using Biolog® system) and confirmed by the 16S rRNA gene sequencing. Bioinformatics analysis The degrees of relatedness of Bacillus licheniformis l-asparaginase to other microbial l-asparaginases (the amino acid sequence of the enzyme of that organism was used as a probe to retrieve NCBI database similar sequences in BLAST) and to the two FDA approved l-asparaginases of E. coli (marketed under the brand name Elspar) and Erwinia chrysanthemi (marketed under the brand name Erwinaze) were inferred by the Maximum Likelihood method based on the JTT matrix-based model (Jones et al. 1992) using their amino acid sequences. The phylogenetic tree was drawn to scale, with branch lengths measured in the number of substitutions per site. Evolutionary analyses were conducted in MEGA X (Kumar et al. 2018). The antigenic sites in Bacillus licheniformis l-asparaginase as compared to those in the two FDA approved l-asparaginases of E. coli and Erwinia chrysanthemi were predicted using the method of Kolaskar and Tongaonkar (1990) (EMBOSS: antigenic—Bioinformatics web site http://www.bioinformatics.nl/cgi-bin/emboss/antigenic). l-Asparaginase characterization The crude preparation of l-asparaginase (supernatant of growth culture) of the selected isolate was evaluated for different characteristics of industrial importance which included thermal stability, activity at different temperatures, pH values, salinities, substrate concentrations, and metal ions. 
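The unit definition above fixes the arithmetic of the assay: activity in IU/ml/h is simply the micromoles of ammonia released divided by the enzyme volume and the incubation time. A minimal sketch of this calculation (the 2.75 µmol figure is invented purely so the example reproduces the ~5.5 IU/ml/h reported for the type II enzyme; it is not a measured value from the paper):

```python
def asparaginase_activity(umol_nh3, enzyme_volume_ml, hours):
    """IU/ml/h: micromoles of ammonia released per ml of enzyme per hour
    under the assay conditions (37 degC, pH 8.6)."""
    return umol_nh3 / (enzyme_volume_ml * hours)

# hypothetical reading: 2.75 umol NH3 from 0.5 ml crude enzyme in 1 h
activity = asparaginase_activity(2.75, 0.5, 1.0)   # 5.5 IU/ml/h
```

The same helper applies unchanged to the intracellular lysate, since the unit is defined per millilitre of enzyme preparation, not per cell mass.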
Improvement of l-asparaginase production of the selected isolate by mutation with gamma irradiation A five-ml aliquot of a prepared spore suspension (Seale et al. 2008), contained in a 10 ml sterile screw-capped glass tube, was exposed to gamma-ray doses of 0.1, 0.5, 1, 3, and 5 kGy (Diep et al. 2017). After irradiation, a number of recovered colonies were selected randomly to be qualitatively and quantitatively assessed for l-asparaginase production in comparison to the parent wild strain. The mutant with the highest l-asparaginase productivity was selected for the remainder of the study. Effect of different environmental and physiological factors influencing l-asparaginase production by the selected mutant Different environmental factors, including incubation temperature, initial pH, incubation time and agitation rate, as well as various media components, were evaluated for their effects on l-asparaginase production. In all cases, l-asparaginase activity was quantitatively determined at the end of the incubation period as described before, except when studying the effect of incubation time, where samples were removed at different time intervals for l-asparaginase activity measurements. Optimization of l-asparaginase production using response surface methodology (RSM) experimental design Based on preliminary studies, four process parameters [incubation temperature coded (A), pH value coded (B), incubation time coded (C) and agitation rate coded (D)] were optimized by an RSM experimental design (Box–Behnken central composite design). Each parameter was examined at 3 levels corresponding to the 3 highest l-asparaginase productivities obtained. The mean level of each parameter was coded (0) and represents the average of the two levels that showed the highest and the lowest l-asparaginase production; the maximum level was coded (+1) and the lowest level (−1). The range of studied variables is shown in Additional file 1: Table S1.
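The −1/0/+1 coding described above maps each actual factor setting onto a common dimensionless scale, which is what lets the fitted coefficients be compared across factors. A sketch of the transform (the pH range 6–8 below is a hypothetical example; the actual ranges are those in Additional file 1: Table S1):

```python
def code_level(actual, low, high):
    """Map an actual factor setting onto the coded -1..+1 scale used in
    Box-Behnken designs: 0 is the centre, +/-1 the extreme levels."""
    center = (low + high) / 2.0
    half_range = (high - low) / 2.0
    return (actual - center) / half_range

# hypothetical illustration: if pH were varied between 6 and 8,
# pH 7 is the centre point and pH 8 the high level
coded_center = code_level(7.0, 6.0, 8.0)   # 0.0
coded_high = code_level(8.0, 6.0, 8.0)     # +1.0
```

The inverse transform (actual = center + coded × half-range) is what converts the optimizer's coded solution back into real operating conditions.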
Design-Expert 7 (Stat-Ease Inc., Minneapolis, MN, USA) was used for the experimental design as well as for graphical analyses of the data and regressions, giving a total of 27 experiments. The codes and values of the three levels for the studied variables (n = 4) are shown in Additional file 1: Table S2. These experiments were principally carried out as mentioned before, except that the environmental conditions (incubation temperature, initial pH, incubation time, agitation) were set at the values listed in Additional file 1: Table S2. The results obtained from the 27 experiments were analyzed by the software to determine the response surface contour plots, the regression equation and the optimum levels of the test variables. Effect of different media components The effects of different carbon sources (glucose, sucrose, fructose, lactose, maltose, glycerol, starch and arabinose), nitrogen sources (ammonium chloride, potassium nitrate, ammonium nitrate, yeast extract, peptone, urea, tryptone) and metal ions (copper sulphate, calcium chloride dihydrate, magnesium sulphate heptahydrate, cobalt chloride, manganese sulphate, zinc sulphate heptahydrate) on l-asparaginase production by the test mutant were evaluated. The carbon source, nitrogen source and metal ion that showed maximum l-asparaginase productivity were each re-tested at different concentrations. Statistical and graphical analyses All experiments were carried out in triplicate, and the mean as well as the standard deviation were calculated. The data were statistically analyzed using one-way ANOVA followed by Dunnett's Multiple Comparison Test. All tests were performed using GraphPad Prism Version 5.0 (GraphPad Software, La Jolla, CA, USA). Recovery and identification of a promising l-asparaginase producing isolate A promising l-asparaginase-producing bacterial isolate was selected using an extensive screening program on 722 recovered bacterial soil isolates.
This isolate was identified, using the methods listed in Materials and Methods, as Bacillus licheniformis; its 16S rRNA gene sequence was deposited in the GenBank database under accession number MG665995, and the strain was also deposited in the Egypt Microbiological Culture Collection (EMCC) under number EMCC 2290. The extracellular l-asparaginase productivity of this test isolate exceeded the intracellular one by at least threefold (data not shown). Accordingly, l-asparaginase characterization and production optimization were based on the extracellular enzyme productivity. The molecular phylogenetic tree of l-asparaginases with amino acid sequence similarities of not less than 74% to the target query l-asparaginase of Bacillus licheniformis is shown in Fig. 1, while that of the target query l-asparaginase and the two FDA-approved l-asparaginases of E. coli (marketed under the brand name Elspar) and Erwinia chrysanthemi (marketed under the brand name Erwinaze) is shown in Fig. 2. The corresponding pairwise distances among the l-asparaginases of the bacterial species presented in Fig. 1 are shown in Additional file 1: Table S3, while those for the l-asparaginases of the bacterial species presented in Fig. 2 are given in Additional file 1: Table S4. Potentially antigenic regions of the l-asparaginase sequence of Bacillus licheniformis were predicted and compared to those determined for E. coli and Erwinia chrysanthemi using the prediction program EMBOSS antigenic explorer® as mentioned in Materials and Methods. The results presented in Table 1 reveal 18, 16 and 17 antigenic regions, with their positions and sequences, for the l-asparaginases of E. coli, Erwinia chrysanthemi and Bacillus licheniformis, respectively. Molecular phylogenetic analysis by the maximum likelihood method of Bacillus licheniformis l-asparaginase (accession WP_075750324) as compared to those retrieved by blasting the query sequence against amino acid sequences deposited in NCBI databases.
l-Asparaginases presented in the tree are non-redundant ones with amino acid sequence similarities of not less than 74% Molecular phylogenetic analysis by the maximum likelihood method of Bacillus licheniformis l-asparaginase (accession WP_075750324) when blasted against the amino acid sequences of the FDA-approved l-asparaginases of E. coli and Erwinia chrysanthemi Table 1 Antigenic regions, their positions, and sequences of E. coli, Erwinia chrysanthemi, and Bacillus licheniformis l-asparaginases l-Asparaginase characterization of the test isolate The results (Fig. 3) revealed that l-asparaginase activity was not dramatically affected by exposure to temperatures up to 50 °C for 30 min, and that maximal activity was observed at 40 °C, pH 8.6 and 40 mM asparagine. The enzyme activity increased with incorporation of sodium chloride up to 100 mM, and the enzyme efficiently retained its activity at salinities up to 500 mM sodium chloride. Of the tested metal salts (copper sulphate, nickel chloride, cobalt chloride, ferrous sulphate and zinc sulphate), zinc sulphate was the only one that produced a significant (p < 0.05) increase in enzyme activity; the other tested metal salts showed no significant effect compared to the control. Thermal stability (a) and catalytic activities at different temperatures (b), pH values (c), salinities (d), substrate concentrations (e) and metal ions (f) of l-asparaginase produced by Bacillus licheniformis Strain improvement Gamma irradiation was utilized to improve l-asparaginase production by the test isolate. The results revealed that exposure to 5 kGy gamma radiation enabled the selection of a mutant with higher l-asparaginase productivity than the lower tested doses (data not shown). The enzyme productivity of the selected mutant was 1.4-fold higher than that of the parent strain.
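The epitope counts in Table 1 come from the EMBOSS antigenic program, which implements the Kolaskar–Tongaonkar sliding-window scan of per-residue antigenic propensities. The sketch below mimics only the shape of that scan; the propensity values, window size, threshold and minimum length used here are placeholders, not the published scale or the EMBOSS defaults:

```python
def antigenic_regions(seq, propensity, window=7, threshold=1.0, min_len=6):
    """Slide a window over the sequence, average per-residue propensities,
    and merge overlapping above-threshold windows into candidate regions.
    Returns half-open (start, end) index pairs."""
    flagged = []
    for i in range(len(seq) - window + 1):
        avg = sum(propensity.get(a, 1.0) for a in seq[i:i + window]) / window
        if avg > threshold:
            flagged.append((i, i + window))
    regions = []
    for start, end in flagged:          # merge overlapping windows
        if regions and start <= regions[-1][1]:
            regions[-1] = (regions[-1][0], end)
        else:
            regions.append((start, end))
    return [r for r in regions if r[1] - r[0] >= min_len]

# toy propensity table (NOT the published Kolaskar-Tongaonkar scale):
# hydrophobic residues scored slightly antigenic, polar ones slightly not
toy_scale = {"A": 1.06, "G": 1.09, "V": 1.12, "L": 1.03, "S": 0.97, "N": 0.93}
regions = antigenic_regions("AVGLAVGLSSNNSSNN", toy_scale)   # [(0, 12)]
```

Only the merging of consecutive high-scoring windows matters here; real epitope prediction on the B. licheniformis sequence should use the EMBOSS tool itself.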
Model-based optimization of l-asparaginase production by Bacillus licheniformis mutant Effect of environmental conditions and RSM experimental design Regarding the effect of incubation temperature, l-asparaginase production by the Bacillus licheniformis mutant was low at 20 °C, reached its maximum at 37 °C and decreased slightly thereafter up to 50 °C. Concerning the effect of initial pH, there was considerable l-asparaginase production at all tested pH values, with maximum productivity achieved at pH 7. Regarding the effect of incubation time, the results revealed that maximum l-asparaginase production by the test mutant is attained at 24 h, followed by a gradual decrease in production. For the agitation rate, maximum l-asparaginase productivity was obtained at 180 rpm, with lower productivities at agitation rates above and below this value (Fig. 4). The RSM experimental design was applied to the four pretested environmental parameters: incubation temperature, initial pH, incubation time and agitation rate. The results of the response surface model, including observed, predicted and residual values, are given in Table 2. According to the regression equation of the mathematical model, the predicted values were calculated as follows:

$$ \begin{aligned} \text{Sqrt}(\textsc{l}\text{-asparaginase activity}) = {} & +2.68 + 0.014A + 0.056B - 0.14C + 0.024D \\ & - 2.127\times 10^{-4}AB + 0.013AC + 0.033AD \\ & - 0.013BC + 0.017BD + 0.017CD \\ & - 2.410\times 10^{-4}A^{2} - 0.100B^{2} - 0.017C^{2} - 0.018D^{2} \end{aligned} $$

where l-asparaginase activity is expressed in square-root values, and A, B, C and D represent incubation temperature, initial pH value, incubation time and agitation rate, respectively. Effect of incubation temperature (a), initial pH (b), incubation time (c), and agitation rate (d) on l-asparaginase production by Bacillus licheniformis mutant. The enzyme productivity was expressed in terms of catalytic activity Table 2 Observed, predicted, and residual values for process parameter optimization of l-asparaginase productivity by Bacillus licheniformis mutant using the Box–Behnken central composite design Additional file 1: Table S5 shows the ANOVA of the obtained quadratic model. The model F-value of 19.59 implies that the model is significant; for this F-value, a p-value of less than 0.0001 means there is only a 0.01% chance that so large a model F-value could occur due to noise. The regression coefficient of the model (R2) was evaluated to test the fit of the model. R2 was calculated to be 0.9581, indicating that the model explains 95.81% of the variability; only 4.19% of the total variation is left unexplained. The "Predicted R-Squared" of 0.7600 is in reasonable agreement with the "Adjusted R-Squared" of 0.9092 (a difference not exceeding the recommended value of 0.3) (Frost 2013). Adequate precision measures the signal (response) to noise ratio; a ratio greater than 4 is desirable. A ratio of 15.913 indicates an adequate signal-to-noise ratio, so this model can be used to navigate the design space. The "lack of fit" test compares the residual error to the "pure error" from replicated design points. The "lack of fit F-value" of 19.2 implies that the lack of fit is not significant relative to the pure error; a non-significant lack of fit indicates that the model fits.
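The fitted quadratic can be evaluated directly to reproduce the model's predictions. A sketch with the coefficients transcribed from the regression equation in the text (the factors A–D are in coded units, so at the design centre every term except the intercept vanishes; converting coded back to actual settings requires the ranges in Additional file 1: Table S1):

```python
def predicted_sqrt_activity(A, B, C, D):
    """Box-Behnken quadratic for sqrt(l-asparaginase activity), with
    A..D in coded units; coefficients taken from the paper's equation."""
    return (2.68 + 0.014 * A + 0.056 * B - 0.14 * C + 0.024 * D
            - 2.127e-4 * A * B + 0.013 * A * C + 0.033 * A * D
            - 0.013 * B * C + 0.017 * B * D + 0.017 * C * D
            - 2.410e-4 * A**2 - 0.100 * B**2 - 0.017 * C**2 - 0.018 * D**2)

# at the design centre (all coded factors 0) only the intercept survives,
# so the predicted activity is 2.68**2 = 7.1824 IU/ml/h
center_activity = predicted_sqrt_activity(0, 0, 0, 0) ** 2
```

Note that the response is modelled on the square-root scale, so predictions must be squared before comparison with the IU/ml/h values quoted in the Results.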
The larger the magnitude of the F-value and the smaller the p-value, the more significant the corresponding coefficient (Adinarayana and Ellaiah 2002). Table 3 lists the process parameters that proved significant for l-asparaginase productivity. Table 3 Process parameters having significant effects on l-asparaginase productivity by Bacillus licheniformis, shown in descending order of significance Graphical representation of dual interactions of process parameters The 3D response surface and 2D contour plots (Figs. 5, 6) are graphical representations of the regression equation. Response surface plots [RSPs] (a) and (b) show the effect of temperature and its interactions with pH and agitation, respectively. These interactions reveal a predicted maximum l-asparaginase productivity of 7.30761 IU/ml/h at pH 7.19 and 39.99 °C, and of 7.48121 IU/ml/h at the same temperature (39.99 °C) and 200 rpm. RSPs (c) and (d) demonstrate the effect of pH and its interactions with agitation and incubation time, respectively. These interactions show a predicted maximum l-asparaginase productivity of 7.3004 IU/ml/h at pH 7.35 and 199.95 rpm, and of 7.90794 IU/ml/h at pH 7.46 and 18.01 h. RSPs (e) and (f) represent the effect of incubation time and its interactions with temperature and agitation rate, respectively. These interactions predict a maximum l-asparaginase productivity of 7.85147 IU/ml/h at 40 °C and 18 h, and of 7.8464 IU/ml/h at the same incubation time (18 h) and 184.44 rpm. Response surface plots for the optimization of process parameters showing the effect of a the interaction between temperature (°C) and pH; b the interaction between temperature (°C) and agitation (rpm); c the interaction between pH and agitation (rpm) on l-asparaginase production by Bacillus licheniformis mutant strain.
The enzyme productivity was expressed in terms of catalytic activity Response surface plots for the optimization of process parameters showing the effect of a the interaction between pH and time (h); b the interaction between time (h) and temperature (°C); and c the interaction between time (h) and agitation (rpm) on l-asparaginase production by Bacillus licheniformis mutant strain. The enzyme productivity was expressed in terms of catalytic activity The main objective of applying response surface methodology is to specify the optimum value of each variable so as to maximize the studied response. According to the applied model, the predicted maximum l-asparaginase productivity of Bacillus licheniformis is 7.9518 IU/ml/h, obtainable at a temperature of 39.5 °C, pH of 7.4, incubation time of 21 h, and agitation rate of 196 rpm. Effect of media components Among the different carbon sources studied (glucose, sucrose, fructose, lactose, maltose, glycerol, starch and arabinose), glucose proved to be the best for l-asparaginase production. Among the nitrogen sources, ammonium chloride gave the highest enzyme production. Furthermore, incorporation of different metal ions into the culture medium revealed that the highest enzyme productivity occurred with magnesium sulphate. Based on these results, different concentrations of the best carbon source, nitrogen source and metal ion were tested. Concentrations of 0.5% w/v glucose, 0.1% w/v ammonium chloride and 10 mM magnesium sulphate were found to be optimal for maximum l-asparaginase production (Fig. 7). Effect of different carbon sources (a), nitrogen sources (b), metal ions (c) and different concentrations of glucose (d), ammonium chloride (e) and magnesium sulphate (f) on l-asparaginase production by Bacillus licheniformis mutant strain.
The enzyme productivity was expressed in terms of catalytic activity l-Asparaginase has received considerable attention as a primary component in the treatment of acute lymphoblastic leukemia (ALL) (Sinha et al. 2013). Extracellular enzymes have an advantage over intracellular ones: they can be produced plentifully in the culture medium under normal conditions and purified economically (Joseph and Rajan 2011; Deokar et al. 2010). In this study the extracellular activity was about 305% higher than the intracellular one, which allows easy enzyme recovery without the need for cell lysis. Bacterial l-asparaginases are classified into subtypes I and II, defined by their intracellular or extracellular localization (Michalska and Jaskolski 2006). Type I (cytosolic) has a lower affinity for l-asparagine, while type II (periplasmic) has a high substrate affinity; the two types also differ in oligomeric form. The periplasmic proteins, known as type II asparaginases (Campbell and Mashburn 1969), from Escherichia coli (EcAII) and Erwinia chrysanthemi (ErA) have been in clinical use in the treatment of acute lymphoblastic leukemia and some other tumors for more than 30 years (Roberts et al. 1966; Boyse et al. 1967; Bodey et al. 1974; Lay et al. 1975). Accordingly, the present study focused on the extracellular enzyme productivity of the bacterial isolate used. The phylogenetic tree in Fig. 1 and the distances in Additional file 1: Table S3 show that the l-asparaginase of Bacillus licheniformis clusters with those of Bacillus subtilis (pairwise distance 0.00267), Bacillus haloterans (pairwise distance 0.1128), Bacillus mojavensis (pairwise distance 0.11579) and Bacillus tequilensis (pairwise distance 0.0892), while it shows distant relatedness to l-asparaginases of other Bacillus subtilis species (pairwise distance 2.07678) as well as to those of Bacillus amyloliquefaciens and Bacillus velezensis species (pairwise distance 2.13201 for each).
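The pairwise distances quoted above were computed in MEGA X under the JTT maximum-likelihood model. As a much simpler illustration of what a sequence distance measures, the observed proportion of differing sites (p-distance) between two aligned sequences can be computed directly; the six-residue sequences below are toy examples, not fragments of the real enzymes:

```python
def p_distance(seq1, seq2):
    """Proportion of mismatched positions between two equal-length aligned
    sequences -- a crude stand-in for the model-corrected JTT distances."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(seq1, seq2))
    return mismatches / len(seq1)

d = p_distance("MKAVLT", "MKGVLS")   # 2 differences over 6 sites -> 1/3
```

Model-based distances such as JTT correct this raw proportion for multiple substitutions at the same site, which is why they can exceed 1 (e.g. the 2.13 values above) while a p-distance cannot.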
Figure 2 and Additional file 1: Table S4 show the position of the Bacillus licheniformis l-asparaginase relative to the two FDA-approved l-asparaginases of E. coli (marketed under the brand name Elspar) and Erwinia chrysanthemi (marketed under the brand name Erwinaze), both used as reference enzymes. The results reveal that Bacillus licheniformis l-asparaginase occurs in a cluster distinct from those of the two reference enzymes (Fig. 2), with pairwise distances of 1.253 and 1.161 from the l-asparaginases of E. coli and Erwinia chrysanthemi, respectively (Additional file 1: Table S4). Prediction of antigenic determinants (epitopes) along the amino acid sequences of the corresponding l-asparaginases of E. coli, Erwinia chrysanthemi (the two reference strains for FDA-approved l-asparaginases) and Bacillus licheniformis showed that the number of antigenic regions for Bacillus licheniformis lies between those of the two reference enzymes. The validity of this prediction is supported by the observation that Erwinia asparaginase has less immunogenicity-associated toxicity than E. coli asparaginase (Barry et al. 2007). Also, Cavanna et al. (1976) reported that l-asparaginase from E. coli has more immuno-depressive and immuno-toxic potential than that from E. carotovora. The therapeutic effect of l-asparaginases from these two bacterial species (E. coli and Erwinia) is accompanied by side effects which are partially attributed to the immunogenicity of these enzymes. The comparable number of antigenic regions detected in the l-asparaginase of Bacillus licheniformis suggests fewer side effects, which could introduce this enzyme source as a potential candidate for therapeutic and medical application. Regarding thermal stability, a remarkable stability was demonstrated, as the produced enzyme preserved 80% of its activity after exposure to 70 °C for 30 min. This stability level suits medical use, since still higher thermal stability is usually required only for industrial enzymes.
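The single-point retention figure above (80% after 30 min at 70 °C) can be turned into a rough inactivation constant if one assumes simple first-order decay, an assumption the paper does not test; the sketch below illustrates only the arithmetic of that assumption:

```python
import math

def inactivation_rate(residual_fraction, minutes):
    """Back out a first-order inactivation constant k (per minute) from
    the fraction of activity retained after heating: A/A0 = exp(-k*t)."""
    return -math.log(residual_fraction) / minutes

def residual_after(k, minutes):
    """Predicted fraction of activity remaining after `minutes` at the
    same temperature, under the same first-order assumption."""
    return math.exp(-k * minutes)

k70 = inactivation_rate(0.80, 30.0)      # ~0.00744 per minute at 70 degC
one_hour = residual_after(k70, 60.0)     # extrapolates to 0.8**2 = 0.64
```

Under first-order kinetics each successive 30 min interval removes the same fraction of the remaining activity, which is why the 60 min extrapolation is exactly 0.8 squared.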
Our results agree with a previous report (Elshafei et al. 2012) in which l-asparaginase produced from Penicillium brevicompactum was stable over a wide range of temperatures up to 70 °C. The activity of the enzyme preparation of the test isolate increased gradually over the range of 20 to 35 °C, showed its maximum at 40 °C, and decreased by 27.7% at 45 °C. The optimum alkaline pH of the enzyme is attributed to the fact that the aspartate liberated by asparagine hydrolysis has a lower affinity for the catalytic site of the enzyme, which enables more binding of asparagine. At acidic pH, on the other hand, the breakdown of asparagine by the enzyme produces aspartic acid, which has a high affinity for the catalytic site and prevents the binding of asparagine (El-Sabbagh et al. 2013). The same results were recorded by other researchers (Elshafei et al. 2012; El-Sabbagh et al. 2013), who found maximum enzyme activity from Streptomyces halstedii and Penicillium brevicompactum at pH 8.0, while others reported maximum activity from B. licheniformis and Streptomyces gulbargensis at pH 9.0 (Amena et al. 2010; Mahajan et al. 2014). Water molecules play a significant role in a protein's biological function by attaching to the surface and entering the inner part of the protein molecule (Persson and Halle 2008). When water activity is reduced by drastic conditions, such as extreme temperature, pH or high salinity, the limited water availability may restrict enzyme activity. In our study, l-asparaginase showed increased activity (about 48.0%) at 500 mM sodium chloride. This increase can be explained by salinity stimulating loop flexibility in the structure of the enzyme, allowing it to participate more fully in catalysis. Halophilic enzymes carry high negative charges, so they dissociate easily and become more flexible in the presence of sodium chloride (Han et al. 2014).
The current results agree with those of several studies (Elshafei et al. 2012; Dash et al. 2016; Shechtman 2013; Han et al. 2014). A gradual increase in enzyme activity was observed with increasing asparagine concentration, followed by a slight decrease in activity at higher concentrations. Similar results were reported previously (El-Mched et al. 2015). This finding may be attributed to saturation of the active enzymatic sites by the substrate. Metal ions act as cofactors by binding at the catalytic site of the enzyme. Zinc and iron slightly increased the activity, as reported by some investigators (Han et al. 2014), while nickel, copper and cobalt had no effect, as mentioned by others (Moorthy et al. 2010). Conversely, this disagrees with a previous report (El-Sabbagh et al. 2013) in which zinc inhibited l-asparaginase activity. Gamma rays, used in strain improvement, cause mutations through breakage of single- and double-stranded DNA, resulting in structural changes or oxidation (Huma et al. 2012). In this study, gamma irradiation improved l-asparaginase production 1.4-fold compared to the wild-type strain. Many studies support the use of gamma radiation to increase enzyme production (Hoe et al. 2016; Huma et al. 2012; Hyster and Ward 2016; Diep et al. 2017). RSM comprises a group of statistical and mathematical techniques for developing an adequate relationship between a response of interest and a number of variables; preliminary studies are conducted to determine the optimum range of each factor to be used in RSM. Maximum l-asparaginase production occurred at 37 °C. Any decrease or increase from the optimum temperature slows down the metabolic activity of the enzymes, as reported by some investigators (El-Hefnawy et al. 2015; Jayaramu et al. 2010). However, Prakasham et al. (2007) reported a progressive increase in activity with increasing temperature, with the optimum attained at 39 °C.
Our results agree with a previous report (Bahrani 2016). The optimum pH was 7, followed by a slight decline in productivity at higher pH values; this may be due to partial enzyme denaturation in response to dissociation of ionizable groups of the enzyme. The significant decrease in enzyme productivity at low pH may be attributed to inhibition of substrate binding as a result of changes in the properties and shape of the enzyme and/or the substrate (El-Hefnawy et al. 2015). This agrees with some studies (Kavitha and Vijayalakshmi 2010; Bahrani 2016) but differs from others (Pradhan and Dash 2013; Jayaramu et al. 2010; Prakasham et al. 2007), whose reported optimum pHs were 6.5, 7.5 and 6, respectively. The importance of incubation time was noted by researchers (Pradhan and Dash 2013; Maysa et al. 2010) who reported results similar to those of the current study. At prolonged incubation times the production level started to decrease: long incubation can lead to degradation of the enzyme by proteolytic enzymes, depletion of medium components, or production of enzyme inhibitors in the medium. A shorter incubation time is cost-effective and reduces the chance of enzyme decomposition (El-Hefnawy et al. 2015). On the contrary, other studies demonstrated diverse results; the optimum incubation time was 48 h for Emericella nidulans and Stenotrophomonas maltophilia (El-Mched et al. 2014; Jayaramu et al. 2010) and 72 h for Streptomyces tendae and Penicillium oxalicum (Kavitha and Vijayalakshmi 2010; El-Hefnawy et al. 2015).
Agitation influences the availability of oxygen and nutrients in the medium (Sooch and Kauldhar 2013): an increase in agitation rate helps mix the nutrients and enhances their absorption by the microorganisms (Pansuriya and Singhal 2011), while the decrease in production at higher agitation rates may be attributed to shear stress on the bacterial cells (Sooch and Kauldhar 2013). Other published reports revealed higher enzyme production at 220 rpm (Bahrani 2016) and at 150 rpm (Sooch and Kauldhar 2013). The RSM model is a second-order polynomial equation that relates the square-root values of l-asparaginase activity to the tested variables. A highly significant model was obtained, as evident from Fisher's F-test, with a very low probability value (Prob > F = 0.0001). The goodness of fit of the model was checked by several statistical criteria. The high determination coefficient indicates that only 4.91% of the total variation is not explained by the model. The lack-of-fit F-value of 19.23 and p-value of 0.0504 imply that the lack of fit is not significant relative to the pure error, and the model accordingly shows an excellent fit. The adjusted R² indicates that the model can explain 90.92% of the variability if the sample were a subset of a population other than the studied sample; its value is always lower than R². The predicted R² indicates that 76.0% of observations other than the fitted data can be correctly anticipated by the model, so the model has high predictive ability (Frost 2013). Adequate precision compares the range of the predicted values at the design points to the average prediction error; ratios greater than 4 indicate adequate model discrimination, meaning the model can navigate the design space. Reliability is defined as the overall consistency of a measure; its index is the coefficient of variation (CV), the ratio of the standard deviation to the mean.
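As a purely illustrative sketch (synthetic data and an ordinary least-squares fit rather than the study's Box-Behnken design), the diagnostics quoted above (R², adjusted R², predicted R² via the PRESS statistic, and the coefficient of variation) can be computed as follows:

```python
import numpy as np

# Synthetic response and design matrix; none of this is the study's data.
rng = np.random.default_rng(0)
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([10.0, 2.0]) + rng.normal(scale=0.5, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
ss_res = float(resid @ resid)
ss_tot = float(((y - y.mean()) ** 2).sum())

r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)        # penalizes extra terms

# Predicted R^2 from PRESS (leave-one-out residuals via the hat matrix),
# mirroring the "predicted R^2" quoted in the text.
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
press = float(np.sum((resid / (1.0 - h)) ** 2))
r2_pred = 1.0 - press / ss_tot

cv = 100.0 * np.sqrt(ss_res / (n - p)) / y.mean()    # coefficient of variation, %
print(r2, r2_adj, r2_pred, cv)
```

By construction press >= ss_res, so predicted R² never exceeds R²; a large gap between the two is the usual warning sign of overfitting that the adjusted and predicted R² are meant to catch.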
The lower the value of the CV, the more reliable and precise the model is considered; the value obtained here implies high reliability and excellent precision (Shechtman 2013). All of these considerations indicate the adequacy of the established regression model. The 3D response surface and 2D contour plots revealed that increasing the medium pH to 7.5, shifting the incubation temperature to 40 °C, increasing the agitation rate to 190 rpm and decreasing the incubation time to 18 h increased l-asparaginase productivity by the test isolate. Regarding media components, glucose is responsible for bacterial catabolic repression, as it is a quickly metabolized substance, but in some cases its incorporation enhances metabolite production. In our study, glucose increased l-asparaginase productivity, possibly by exerting a positive effect on l-asparaginase biosynthesis (Baskar and Renganathan 2011; El-Hefnawy et al. 2015). Similarly, glucose enhanced l-asparaginase productivity from Aeromonas sp. and Aspergillus terreus (Amena et al. 2010; Baskar and Renganathan 2011; El-Hefnawy et al. 2015; Doriya and Kumar 2016; Varalakshmi and Raju 2013), while sucrose and sorbitol increased enzyme biosynthesis from Streptomyces tendae (Kavitha and Vijayalakshmi 2010) and lactose did so in E. coli (Bahrani 2016). Nitrogen sources support the production of nucleic acids, proteins and cell walls, and they affect enzyme production. Ammonium chloride was the best source, as mentioned in many reports (Baskar and Renganathan 2009; Meghavarnam and Janakiraman 2015; Hymavathi et al. 2010). The optimum concentration for maximum production was 0.1% w/v in our study, versus 0.5% and 2% w/v in other studies (Kavitha and Vijayalakshmi 2010; Amena et al. 2010; El-Hefnawy et al. 2015). Magnesium increased l-asparaginase production, as published previously.
In this study, 10 mM magnesium (1.2% w/v) proved to be the optimum concentration for l-asparaginase production, and a significant decrease in enzyme production occurred at higher concentrations, as magnesium may interfere with bacterial cell division at high concentrations. Based on the above discussion, Bacillus licheniformis shows promising l-asparaginase production for different applications. A model-based optimization for enzyme production was established in this study and can be exploited for enzyme production at large scale. Adinarayana K, Ellaiah P (2002) Response surface optimization of the critical medium components for the production of alkaline protease by a newly isolated Bacillus sp. J Pharm Pharm Sci 5(3):272–278 Amena S, Vishalakshi N, Prabhakar M, Dayanand A, Lingappa K (2010) Production, purification and characterization of l-asparaginase from Streptomyces gulbargensis. Braz J Microbiol 41(1):173–178 Anbu P, Gopinath SCB, Chaulagain BP, Lakshmipriya T (2013) Microbial enzymes and their applications in industries and medicine. Biomed Res Int. 2017:2195808 Bahrani MH (2016) Study the optimum parameters for production of cloned l-asparaginase type I by Escherichia coli. Int J Curr Microbiol App Sci 5(8):479–485 Bansal S, Srivastava A, Mukherjee G, Pandey R, Verma AK, Mishra P, Kundu B (2012) Hyperthermophilic asparaginase mutants with enhanced substrate affinity and antineoplastic activity: structural insights on their mechanism of action. FASEB J 26(3):1161–1171 Barry E, DeAngelo DJ, Neuberg D, Stevenson K, Loh ML, Asselin BL, Barr RD, Clavell LA, Hurwitz CA, Moghrabi A (2007) Favorable outcome for adolescents with acute lymphoblastic leukemia treated on Dana-Farber Cancer Institute acute lymphoblastic leukemia consortium protocols. J Clin Oncol 25(7):813–819 Baskar G, Renganathan S (2009) Evaluation and screening of nitrogen source for l-asparaginase production by Aspergillus terreus MTCC 1782 using latin square design.
Res J Math Stat 1(2):55–58 Baskar G, Renganathan S (2011) Production of l-asparaginase from natural substrates by Aspergillus terreus MTCC 1782: optimization of carbon source and operating conditions. Int J Chem React Eng. https://doi.org/10.1515/1542-6580.2479 Batool T, Makky EA, Jalal M, Yusoff MM (2016) A comprehensive review on l-asparaginase and its applications. Appl Biochem Biotechnol 178(5):900–923 Pradhan B, Dash SS (2013) Optimization of some physical and nutritional parameters for the production of l-asparaginase by isolated thermophilic Pseudomonas aeruginosa strain F1. Biosci Biotech Res Asia 10:389–395 Bodey GP, Hewlett JS, Coltman CA, Rodriguez V, Freireich EJ (1974) Therapy of adult acute leukemia with daunorubicin and l-asparaginase. Cancer 33(3):626–630 Boyse EA, Old LJ, Campbell HA, Mashburn LT (1967) Suppression of murine leukemias by l-asparaginase: incidence of sensitivity among leukemias of various types: comparative inhibitory activities of guinea pig serum l-asparaginase and Escherichia coli l-asparaginase. J Exp Med 125(1):17–31 Campbell HA, Mashburn LT (1969) l-Asparaginase EC-2 from Escherichia coli. Some substrate specificity characteristics. Biochemistry 8(9):3768–3775 Cavanna M, Celle G, Dodero M, Picciotto A, Pannacciulli I, Brambilla G (1976) Comparative experimental evaluation of immunodepressive and toxic effects of l-asparaginase (NSC-109229) from Escherichia coli and from Erwinia carotovora. Cancer Treat Rep 60(3):255–257 Dash C, Mohapatra SB, Maiti PK (2016) Optimization, purification, and characterization of l-asparaginase from Actinomycetales bacterium BkSoiiA. Prep Biochem Biotechnol 46(1):1–7 Deokar VD, Vetal MD, Rodrigues L (2010) Production of intracellular l-asparaginase from Erwinia carotovora and its statistical optimization using response surface methodology (RSM).
Int J Chem Sci 1:25–36 Diep TB, Thom NT, Sang HD, Thao HP, Van Binh N, Thuan TB, Lan VTH, Quynh TM (2017) Screening streptomycin resistant mutations from gamma ray irradiated Bacillus subtilis B5 for selection of potential mutants with high production of protease. VNU J Sci Nat Sci Technol 32(1S):16–19 Doriya K, Kumar DS (2016) Isolation and screening of l-asparaginase free of glutaminase and urease from fungal sp. Biotech 6(2):239 El-Hefnawy MAA, Attia M, El-Hofy ME, Ali SMA (2015) Optimization of l-asparaginase production by locally isolated filamentous fungi from Egypt. Curr Sci Int 4(3):330–341 El-Mched F, Olama Z, Holail H (2014) Optimization of the environmental factors affecting Stenotrophomonas ssp. FZ l-asparaginase production. Int Res J Nat Sci 2(3):1–16 El-Mched F, Olama Z, Holail H (2015) Purification and characterization of l-asparaginase from soil isolate under solid state fermentation. Int Res Pure Appl Phys. 3(1):30–43 El-Sabbagh SM, El-Batanony SM, Salem TA (2013) l-Asparaginase produced by Streptomyces strain isolated from Egyptian soil: purification, characterization and evaluation of its anti-tumor activity. Afr J Microbiol Res 7(50):5677–5686 Elshafei AM, Hassan MM, Abouzeid MA-E, Mahmoud DA, Elghonemy DH (2012) Purification, characterization and antitumor activity of l-asparaginase from Penicillium brevicompactum NRC 829. Br Microbiol Res 2(3):2231 Frost J (2013) Multiple regression analysis: Use adjusted R-squared and predicted R-squared to include the correct number of variables. The Minitab Blog, http://blog.minitab.com/blog/adventures-in-statistics/multiple-regessionanalysis-use-adjusted-r-squared-and-predicted-r-squared-to-include-the-correctnumber-of-variables. June 30, 2015 Gopinath SC, Anbu P, Lakshmipriya T, Hilda A (2013) Strategies to characterize fungal lipases for applications in medicine and dairy industry. Biomed Res Int 2013:1–2 Han S, Jung J, Park W (2014) Biochemical characterization of l-asparaginase in NaCl-tolerant Staphylococcus sp.
OJ82 isolated from fermented seafood. J Microbiol Biotechnol 24(8):1096–1104 Hoe PCK, Khairuddin AR, Halimi MS (2016) A review on microbial mutagenesis through gamma irradiation for agricultural applications. Jurnal Sains Nuklear Malaysia 28(2):20–29 Huma T, Rashid MH, Javed MR, Ashraf A (2012) Gamma ray mediated mutagenesis of Phialocephala humicola: effect on kinetics and thermodynamics of α-amylase production. Afr J Microbiol Res 6(22):4639–4646 Hymavathi M, Sathish T, Brahmaiah P, Prakasham RS (2010) Impact of carbon and nitrogen sources on l-asparaginase production by isolated Bacillus circulans (MTCC 8574): application of saturated Plackett-Burman design. Chem Biochem Eng Q 24(4):473–480 Hyster TK, Ward TR (2016) Genetic optimization of metalloenzymes: enhancing enzymes for non-natural reactions. Angew Chem Int Ed 55(26):7344–7357 Izadpanah QF, Javadpour S, Malekzadeh K, Tamadoni Jahromi S, Rahimzadeh M (2014) Persian Gulf is a bioresource of potent l-asparaginase producing bacteria: isolation & molecular differentiating. Int J Environ Res 8(3):813–818 Jain R, Zaidi K, Verma Y, Saxena P (2012) l-Asparaginase: a promising enzyme for treatment of acute lymphoblastic leukemia. People's J Sci Res 5(1):29–35 Jayaramu M, Hemalatha NB, Rajeshwari CP, Siddalingeshwara KG, Mohsin SM, Sunil Dutt PLNSN (2010) A novel approach for detection, confirmation and optimization of l-asparaginase from Emericella nidulans. Curr Pharm Res 1(1):20 Jones DT, Taylor WR, Thornton JM (1992) The rapid generation of mutation data matrices from protein sequences. Comput Appl Biosci 8(3):275–282 Joseph B, Rajan SS (2011) L-lysine alpha oxidase from fungi as an anti-tumor enzyme agent. Adv Biotechnol 10(8):27–30 Kavitha A, Vijayalakshmi M (2010) Optimization and purification of l-asparaginase produced by Streptomyces tendae TK-VL_333. Z Naturforsch C 65(7–8):528–531 Kolaskar AS, Tongaonkar PC (1990) A semi-empirical method for prediction of antigenic determinants on protein antigens.
FEBS Lett 276(1–2):172–174 Krishnapura PR, Belur PD, Subramanya S (2016) A critical review on properties and applications of microbial l-asparaginases. Crit Rev Microbiol 42(5):720–737 Kumar S, Stecher G, Li M, Knyaz C, Tamura K (2018) MEGA X: molecular evolutionary genetics analysis across computing platforms. Mol Biol Evol 35(6):1547–1549 Lay HN, Ekert H, Colebatch JH (1975) Combination chemotherapy for children with acute lymphocytic leukemia who fail to respond to standard remission induction therapy. Cancer 36(4):1220–1222 Mahajan RV, Saran S, Kameswaran K, Kumar V, Saxena RK (2012) Efficient production of l-asparaginase from Bacillus licheniformis with low-glutaminase activity: optimization, scale up and acrylamide degradation studies. Bioresour Technol 125:11–16 Mahajan RV, Kumar V, Rajendran V, Saran S, Ghosh PC, Saxena RK (2014) Purification and characterization of a novel and robust l-asparaginase having low-glutaminase activity from Bacillus licheniformis: in vitro evaluation of anti-cancerous properties. PLoS ONE 9(6):e99037 Mashburn LT, Wriston JC (1963) Tumor inhibitory effect of l-asparaginase. Biochem Biophys Res Commun 12(1):50–55 Maysa EM, Amira M, Gamal E, Sanaa T, Sayed EI (2010) Production, immobilization and anti-tumor activity of l-asparaginase of Bacillus sp. R36. J Am Sci 6(8):157–165 Meghavarnam AK, Janakiraman S (2015) Optimization of physiological growth conditions for maximal production of l-asparaginase by Fusarium species. Int J Bioassays 4(10):4369–4375 Michalska K, Jaskolski M (2006) Structural aspects of l-asparaginases, their friends and relations. Acta Biochim Pol 53(4):627 Moorthy V, Ramalingam A, Sumantha A, Shankaranaya RT (2010) Production, purification and characterisation of extracellular l-asparaginase from a soil isolate of Bacillus sp. Afr J Microbiol Res 4(18):1862–1867 Nigam PS (2013) Microbial enzymes with special characteristics for biotechnological applications.
Biomolecules 3(3):597–611 Pansuriya RC, Singhal RS (2011) Effects of dissolved oxygen and agitation on production of serratiopeptidase by Serratia marcescens NRRL B-23112 in stirred tank bioreactor and its kinetic modeling. J Microbiol Biotechnol 21(4):430–437 Persson E, Halle B (2008) Cell water dynamics on multiple time scales. Proc Natl Acad Sci USA 105(17):6266–6271 Prakasham RS, Rao CS, Rao RS, Lakshmi GS, Sarma PN (2007) l-Asparaginase production by isolated Staphylococcus sp.–6A: design of experiment considering interaction effect for process parameter optimization. J Appl Microbiol 102(5):1382–1391 Roberts J, Prager MD, Bachynsky N (1966) The antitumor activity of Escherichia coli l-asparaginase. Cancer Res 26(10):2213–2217 Sakr MM, Aboulwafa MM, Aboshanab KMA, Hassouna NAH (2014) Screening and preliminary characterization of quenching activities of soil bacillus isolates against acyl homoserine lactones of clinically isolated Pseudomonas aeruginosa. Malays J Microbiol 10:80–91 Seale RB, Flint SH, McQuillan AJ, Bremer PJ (2008) Recovery of spores from thermophilic dairy bacilli and effects of their surface characteristics on attachment to different surfaces. Appl Environ Microbiol 74(3):731–737 Shechtman O (2013) The coefficient of variation as an index of measurement reliability. In: Methods Clin Epidemiol. Springer, Berlin, pp 39–49 Sinha RA, Singh HR, Jha SK (2013) Microbial l-asparaginase: present and future prospective. Int J Innov Res Sci Eng 2(11):7031–7051 Sooch BS, Kauldhar BS (2013) Influence of multiple bioprocess parameters on production of lipase from Pseudomonas sp. BWS-5. Braz Arch Biol Technol 56(5):711–721 Straight PD, Fischbach MA, Walsh CT, Rudner DZ, Kolter R (2007) A singular enzymatic megacomplex from Bacillus subtilis. Proc Natl Acad Sci USA 104(1):305–310 Varalakshmi V, Raju KJ (2013) Optimization of l-asparaginase production by Aspergillus terreus MTCC 1782 using bajra seed flour under solid state fermentation.
Int J Res Eng Technol 2(09):121–129 The authors contributed to the work done in the manuscript as follows: NAA: conducted the experimental work, summarized the data, and wrote the draft of the manuscript. WFE: shared in the design of experiments, analyzed and interpreted the data, and revised the manuscript. MMR: shared in the design, supervised the laboratory experiments, and helped in draft writing of the manuscript and data analysis. MMA: conceived the work idea, performed the bioinformatics analysis, analyzed and interpreted the data, and revised the manuscript. All authors read and approved the final manuscript. The authors would like to thank Dr. Amal Emad and Dr. Nouran Elleboudy for their professional support in genetic-based identification and model-based optimization, respectively. Please contact the author for data requests. Department of Microbiology and Immunology, Faculty of Pharmaceutical Sciences and Pharmaceutical Industries, Future University, Cairo, Egypt: Nada A. Abdelrazek & Marwa M. Raafat. Department of Microbiology and Immunology, Faculty of Pharmacy, Ain Shams University, African Union Organization St., Abbassia, Cairo, 11566, Egypt: Walid F. Elkhatib & Mohammad M. Aboulwafa. Department of Microbiology and Immunology, School of Pharmacy & Pharmaceutical Industries, Badr University in Cairo (BUC), Entertainment Area, Badr, Cairo, Egypt. Correspondence to Walid F. Elkhatib or Mohammad M. Aboulwafa. Additional file 1: Table S1. Levels of reaction conditions of process parameters as independent variables studied in the RSM experimental design for optimization of l-asparaginase production by the selected test mutant. Table S2. Experiments that were deduced by the RSM experimental design and performed for l-asparaginase production by the mutant. Table S3.
Pairwise distances among l-asparaginases of bacterial species presented in the phylogenetic tree shown in Fig. 1. Table S4. Pairwise distances among l-asparaginases of Bacillus licheniformis, E. coli and Erwinia chrysanthemi presented in the phylogenetic tree shown in Fig. 2. Table S5. ANOVA of the quadratic model for the process parameter optimization of l-asparaginase productivity by the Bacillus licheniformis mutant using the Box-Behnken central composite design. Abdelrazek, N.A., Elkhatib, W.F., Raafat, M.M. et al. Experimental and bioinformatics study for production of l-asparaginase from Bacillus licheniformis: a promising enzyme for medical application. AMB Expr 9, 39 (2019). doi:10.1186/s13568-019-0751-3. Keywords: Bacillus licheniformis; response surface methodology.
Mathematics > Algebraic Geometry
Title: The maximum number of singular points on rational homology projective planes
Authors: Dongseon Hwang, JongHae Keum
(Submitted on 20 Jan 2008 (v1), revised 9 Mar 2008 (this version, v3), latest version 12 Oct 2008 (v8))
Abstract: A normal projective complex surface is called a rational homology projective plane if it has the same Betti numbers as the complex projective plane $\mathbb{C}\mathbb{P}^2$. It is known that a rational homology projective plane with quotient singularities has at most 5 singular points. But all known examples have at most 4 singular points. In this paper, we prove that a rational homology projective plane $S$ with quotient singularities such that $K_S$ is nef has at most 4 singular points except one case. The exceptional case comes from Enriques surfaces with a configuration of 9 smooth rational curves whose Dynkin diagram is of type $3A_1 \oplus 2A_3$. We also obtain a similar result in the differentiable case and in the symplectic case under certain assumptions which all hold in the algebraic case.
Comments: 20 pages. A statement for symplectic orbifolds is added
Subjects: Algebraic Geometry (math.AG)
MSC classes: 14J17; 14J28
Cite as: arXiv:0801.3021 [math.AG] (or arXiv:0801.3021v3 [math.AG] for this version)
TQFT: Adding a $Q$-exact term which is equal to the action itself It is known that Witten-type topological quantum field theories (TQFT) are invariant when $Q$-exact terms are added to the classical action, where $Q$ is the BRST charge. But for these theories, the action itself is $Q$-exact, so what stops us from cancelling off the entire action? To be precise, given the partition function $$ Z=\int\mathcal{D}X\textrm{ }e^{-Q\psi} $$ of a TQFT, where $Q\psi$ is the $Q$-exact action, we may add a term $Q\chi$ to the action, which can be shown (upon expanding $e^{-Q\chi}$) to be inconsequential, using the fact that $$ \langle Q O\rangle=0 $$ for any operator $O$; i.e., we find $$ \int\mathcal{D}X\textrm{ }e^{-(Q\psi+Q\chi)}=\int\mathcal{D}X\textrm{ }e^{-Q\psi}(1-Q\chi+\frac{1}{2}Q(\chi Q \chi)+\ldots)=\int\mathcal{D}X\textrm{ }e^{-Q\psi}. $$ But if we choose $\chi=-\psi$, this would mean that $$ \int\mathcal{D}X=\int\mathcal{D}X\textrm{ }e^{-Q\psi}, $$ which seems to be an absurd statement, since the LHS is a divergent quantity. quantum-field-theory path-integral topological-field-theory gauge brst Mtheorist $\begingroup$ You are only allowed to make deformations which preserve the convergence of the Euclidean path integral. If you start with a positive-definite action $S_E$, you may deform it by $t Q[V]$ for any $t>0$ provided that $Q[V]$ is also positive. In your example, you are deforming the action by a negative-definite term, which is why you're getting nonsense. $\endgroup$ – Elliot Schneider Aug 30 '17 at 15:36 $\begingroup$ But aren't there $Q$-exact deformations which are negative-definite, yet preserve convergence? For example, with the action written as $Q\psi$, the deformation $-\frac{1}{2}Q\psi$ would result in the original action multiplied by $1/2$. This surely is convergent.
$\endgroup$ – Mtheorist Aug 31 '17 at 10:41 $\begingroup$ Sure, but you can't scale that deformation with an arbitrarily large coefficient, which is the usual strategy in localization. $\endgroup$ – Elliot Schneider Aug 31 '17 at 21:39 The statement that one may add a $Q$ exact term to the action $S\to S+Q\chi$ without altering the correlators is not always strictly true. One has to be careful to make a choice of $\chi$ that does not change the asymptotic behaviour of the action $S$ at the boundary of field space. The choice of $\chi=-\psi$ is clearly a choice that drastically alters the asymptotic behaviour of $S$. Furthermore, $\left<Q\mathcal{O}\right>$ only vanishes up to boundary terms in field space. Non-zero boundary terms would cause the standard arguments you made to fail. Edit: This can be demonstrated by a finite dimensional example which can be found in section 3.22 of the lectures by G. Moore on Donaldson Invariants and 4-manifolds at: http://www.physics.rutgers.edu/~gmoore/SCGP-FourManifoldsNotes-2017.pdf. Consider the supersymmetric integral $$\mathcal{Z}=\int d\mathcal{M}e^{-S},\quad S=\frac{1}{2}H^2+iHs(x)-i\bar{\psi}\frac{ds(x)}{dx}\psi,\quad d\mathcal{M}=\frac{dxdHd\psi d\bar{\psi}}{2\pi i},$$ where $s:\mathbb{R}\to\mathbb{R}$ is a real valued function satisfying $|s(x)|\to\infty$ as $|x|\to\infty$. 
The action is invariant under a supersymmetry with $$Qx=\psi,\quad Q\psi=0,\quad Q\bar{\psi}=H,\quad QH=0.$$ Furthermore, the action is $Q$-exact $$S=Q\Psi,\quad \Psi=\left(\frac{1}{2}\bar{\psi}H+i\bar{\psi}s(x)\right).$$ Upon integrating out $\psi,\bar{\psi}$ and the auxiliary $H$ we find that $\mathcal{Z}$ is given by $$\mathcal{Z}=\int^{\infty}_{-\infty}\frac{dx}{\sqrt{2\pi}}s'(x)e^{-\frac{1}{2}s(x)^2}=\deg(f)\int^{\infty}_{-\infty}\frac{dy}{\sqrt{2\pi}}e^{-\frac{1}{2}y^2}=\deg(f)=\sum_{z(s)}\frac{s'(x_0)}{|s'(x_0)|}$$ To evaluate the integral we changed variables $f:x\mapsto y=s$, where $\deg(f)$ denotes the degree of that map; finally, $z(s)=\{x_0|s(x_0)=0\}$ is the zero set of $s$. Note that upon integrating out $H$, i.e. setting $H=-is(x)$, $z(s)$ coincides precisely with the set of $Q$ fixed points. However, we could have considered deforming the action by a $Q$-exact term, say, $S\to S+Q(i\bar{\psi}t(x))$, and repeating the above steps yields $$\mathcal{Z}_t=\int^{\infty}_{-\infty}\frac{dx}{\sqrt{2\pi}}(s'(x)+t'(x))e^{-\frac{1}{2}(s(x)+t(x))^2}\stackrel{?}{=}\mathcal{Z}.$$ It should be clear that there are choices of $t$ such that $\mathcal{Z}_t$ doesn't exist; we require a $t$ such that $|s(x)+t(x)|\to\infty$ as $|x|\to\infty$ still holds. Even if this holds true, there are choices such that $\mathcal{Z}_t\neq\mathcal{Z}$. As a simple demonstration, choose $s(x)=x^2-2x+1$ and $t(x)=-x^2$; then $z(s)=\{1\}$ and $s'(1)=0$, hence $\mathcal{Z}=0$. On the other hand, the zero locus is $z(s+t)=\{1/2\}$ with $s'(1/2)+t'(1/2)=-2$, hence $\mathcal{Z}_{t=-x^2}=-1$. Thomas Bourton
$\endgroup$ – Mtheorist Aug 31 '17 at 8:51 $\begingroup$ I have added an extra step but the idea is to make a change of variables $f:x\mapsto y=s$. Since that map is generically not one-to-one we pick up an extra factor $\deg(f)$ which is just counting the number of preimages of a point $y$ with signs. $\endgroup$ – Thomas Bourton Sep 2 '17 at 15:13 The statement that the path integral is independent of gauge-fixing (fermion) comes with various caveats. Recall that one of the reasons that we gauge-fix is to avoid integrating over an ill-defined infinite gauge volume. In general, a gauge symmetry manifests itself as a zero eigenvalue in the Hessian (i.e. the second derivative) of the ungauge-fixed action. In other words, we should make sure that the Hessian of the gauge-fixed action is non-degenerate. In particular, choosing the full action (and thereby the Hessian) to be identically zero would violate that. See also my related Phys.SE answer here. Qmechanic♦
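The finite-dimensional example above can be checked numerically. The following is an illustrative sketch (assuming NumPy and SciPy are available; the answers themselves do not use code): it evaluates $\mathcal{Z}$ for $s(x)=x^2-2x+1$ and for the deformed $s(x)+t(x)$ with $t(x)=-x^2$, reproducing $\mathcal{Z}=0$ and $\mathcal{Z}_{t=-x^2}=-1$.

```python
import numpy as np
from scipy.integrate import quad

# Z = ∫ dx/sqrt(2π) s'(x) exp(-s(x)^2/2) computes deg(s):
# the signed count of the zeros of s.
def Z(s, ds):
    val, _ = quad(
        lambda x: ds(x) * np.exp(-s(x) ** 2 / 2.0) / np.sqrt(2.0 * np.pi),
        -np.inf, np.inf,
    )
    return val

s = lambda x: x ** 2 - 2.0 * x + 1.0   # (x - 1)^2: double zero at x = 1
ds = lambda x: 2.0 * x - 2.0
print(round(Z(s, ds), 6))               # ≈ 0, since s'(1) = 0

st = lambda x: s(x) - x ** 2            # deform by t(x) = -x^2, giving 1 - 2x
dst = lambda x: -2.0
print(round(Z(st, dst), 6))             # ≈ -1: single zero with negative slope
```

This makes the point concrete: the deformation changes the asymptotic behaviour of the "action", and with it the signed count of zeros that the integral localizes onto.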
Transactions of the American Mathematical Society Published by the American Mathematical Society, the Transactions of the American Mathematical Society (TRAN) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. The 2020 MCQ for Transactions of the American Mathematical Society is 1.43. On actions of adjoint type on complex Stiefel manifolds by McKenzie Y. Wang Trans. Amer. Math. Soc. 272 (1982), 611-628 Let $G(m)$ denote ${\rm {SU}}(m)$ or ${\rm {Sp}}(m)$. It is shown that when $m \geq 5$, $G(m)$ cannot act smoothly on $W_{n,2}$, the complex Stiefel manifold of orthonormal $2$-frames in $\mathbf C^n$, for $n$ odd, with connected principal isotropy type equal to the class of maximal tori in $G(m)$. This demonstrates an important difference between $W_{n,2}$, $n$ odd, and $S^{2n-3}\times S^{2n-1}$ in the behavior of differentiable transformation groups. Exactly the same holds for ${\rm {SO}}(m)$ or Spin$(m)$ if it is further assumed that a maximal $2$-torus of ${\rm {SO}}(m)$ has fixed points.
Journal: Trans. Amer. Math. Soc. 272 (1982), 611-628
MSC: Primary 57S15; Secondary 57S25
DOI: https://doi.org/10.1090/S0002-9947-1982-0662056-4
© Copyright 1982 American Mathematical Society
CommonCrawl
Crystallization behavior of ion beam sputtered HfO2 thin films and its effect on the laser-induced damage threshold Zoltán Balogh-Michels1,2, Igor Stevanovic1, Aurelio Borzi2, Andreas Bächli1, Daniel Schachtler1, Thomas Gischkat1, Antonia Neels2, Alexander Stuck1 & Roelene Botha1 Journal of the European Optical Society-Rapid Publications volume 17, Article number: 3 (2021) Cite this article In this work, we present our results about the thermal crystallization of ion beam sputtered hafnia on 0001 SiO2 substrates and its effect on the laser-induced damage threshold (LIDT). The crystallization process was studied using in-situ X-ray diffractometry. We determined an activation energy for crystallization of 2.6 ± 0.5 eV. It was found that the growth of the crystallites follows a two-dimensional growth mode. This, in combination with the high activation energy, leads to an apparent layer thickness-dependent crystallization temperature. LIDT measurements @355 nm on thermally treated 3 quarter-wave thick hafnia layers show a decrement of the 0% LIDT for 1 h @773 K treatment. Thermal treatment for 5 h leads to a significant increment of the LIDT values. Thin-film interference coatings, like antireflection or high reflectance coatings as well as wavelength and polarization selective coatings, are key elements of optical components as they allow to tune or even radically alter their optical properties [1, 2]. Ion beam sputtered (IBS) multilayers represent the current high-end optical coatings in the VIS/NIR range. Compared to other optical coating techniques, the sputtered atoms have high surface mobility due to their high energies (> 10 eV). Consequently, thin films by means of IBS are amorphous and provide the lowest surface roughness and lowest defect concentration for optical applications [3]. Furthermore, light scattering due to the grain boundaries is avoided in the amorphous film structure. 
Contemporary IBS allows manufacturing highly reflective mirror coatings with optical losses below 1 ppm at a wavelength of 643 nm [4]. In laser applications, SiO2 and HfO2 are preferred in the multilayer stack as the low and high refractive index material, respectively. Besides low optical losses and low absorption in the UV range, HfO2 exhibits a high laser-induced damage threshold compared to other high-index materials [5,6,7]. However, future applications, like new gravitational wave detectors [8, 9] or more precise optical clocks [10], require even smaller optical losses. The primary source of the losses in amorphous mirrors is the Brownian motion of the atoms [8, 11]. A current possibility for circumventing the problem of the Brownian losses is active cooling during operation, e.g. [12], which is neither very efficient nor simple to implement. Single crystalline coatings instead of amorphous coatings can achieve an inherent reduction of the Brownian loss. This was already shown for optical coatings working in the infrared range by applying epitaxially grown GaAs/AlxGa1-xAs multilayers and a subsequent substrate transfer [13,14,15]. Since no matching substrates exist for the typical coating layers (e.g. HfO2/SiO2, Ta2O5/SiO2), this method is not applicable for coatings in the visible range. To enable applications that require single crystalline coatings in the visible range, it is essential to gain knowledge on non-epitaxial crystallization to tailor the process. Thus, the material and its phase transitions have attracted research interest in the past decades, since hafnia also promises a high-k dielectric for semiconductor applications. A few reports on the crystallization of hafnia on Si substrates exist.
These include thermal crystallization of amorphous oxides [16], thermal oxidation of metallic hafnium [17], observation of spontaneous crystallization during atomic layer deposition (ALD) [18, 19], the layer thickness [20] or composition dependence of the crystallization temperature [21] and even an attempt to crystallize the hafnia using laser irradiation [22]. Liu et al. investigated the effect of heat treatment on thick IBS hafnia layers deposited on fused silica and single crystalline silicon substrates and reported a stress reduction accompanying the partial crystallization [23]. A recent publication by Abromavicius et al. [24] revealed that crystallization of IBS-HfO2 can lead to higher LIDT due to stress reduction and/or better thermal management. The coating was done by reactive IBS from a metallic Hf-target (Plasmaterials with 3 N purity) using a Veeco Spector 1.5 Dual Ion Beam Sputter (DIBS) instrument. The substrates for XRD measurements were epi-polished 10 × 10 mm2 SiO2 single crystals with 0001 orientation from CrysTec. Prior to coating, the samples were in-situ treated with energetic O2 ions (1 keV) for 10 min using the second plasma source of the Veeco DIBS machine. To achieve complete oxidization, the target was flooded with 35 ccm O2 during the target cleaning and the sputtering sequence. To investigate thickness dependence of the crystallization, we deposited 10, 15, 20, and 50 nm thick HfO2 layers on the substrate. We determined the sputter rate using an ex-situ profilometer measurement. The in-situ XRD annealing experiments were carried out using a Bruker D8 Discovery DaVinci diffractometer. The X-ray beam from a standard Cu Kα source (in this experiment λ = 1.5418 Å) is parallelized in the scattering plane by a Goebel mirror and detected by a LynxEye 1D detector. Since polycrystalline films were expected, we selected a grazing incidence geometry, with ω = 2° as an incoming angle. 
The 2θ range was 28–35°, which contained the expected main peaks of both the cubic and monoclinic HfO2 [17]. The single crystalline SiO2 substrates are zero-background substrates for this method, in contrast to the common fused silica. The in-situ measurements were carried out using an Anton Paar DHS1100 high-temperature stage. We carried out an isochronal annealing sequence on a 10 nm thick specimen, with 50 K steps and a holding time of 30 min, to gain an overall view of the transformation. Based on this information, we performed isothermal scans with temperatures between 823 and 923 K using the 15 to 50 nm thick samples. The period between each single scan was 6 min. The scans were repeated until no significant change in the XRD pattern was observed in a few subsequent scans. Figure 1 shows the region around the cubic (111) HfO2 peak in a typical scan (50 nm thick layer, 823 K, 30 min). To integrate the peaks, the background was subtracted using a simple linear approximation, marked by a gray line in Fig. 1. From the peak areas, the transformed fraction f (normalized peak area) was calculated and later fitted as a function of the time t in accordance with the JMAK-equation [25]: $$ f=1-\exp \left(-{\left(\frac{t}{t_c}\right)}^{1+{n}^{\prime }}\right), $$ with the two fitting parameters: 1 + n' and tc. Here the well-known Avrami exponent, which contains information on the nucleation and the growth, is written in the form 1 + n'. Assuming a constant nucleation rate, the exponent n' is the dimensionality of the growth, e.g. n' = 3 means three dimensional or volumetric growth, while n' = 2 means two dimensional or film like growth. The tc factor is the crystallization time, which can be derived from the nucleation rate N and the growth rate G: $$ {t}_c\sim {\left(N{G}^{n\prime}\right)}^{-\frac{1}{n^{\prime }+1}}. $$ Background fitting for a HfO2 peak. Rewriting Eq.
(1) into: $$ \ln \left(-\ln \left(1-f\right)\right)=\left(1+n^{\prime}\right)\left(\ln t-\ln {t}_c\right), $$ a linear fit can be performed to determine the parameters tc and n'. This fitting strategy is more sensitive in the detection of changes in the dimensionality. To verify the measurements' consistency, the raw data and the fitting function were also compared on the transformed fraction vs. time scale (i.e. Eq. 1). To estimate the activation energies of the crystallization, an Arrhenius-plot was performed according to: $$ {t}_c\left(T,d\right)={t}_{0,d}\exp \left(\frac{Q}{k_BT}\right), $$ where tc depends on the temperature T and the layer thickness d, t0,d is a layer thickness dependent prefactor, Q is the layer thickness independent activation energy, and kB is the Boltzmann-constant. Finally, we investigated the sample surfaces in their as-deposited and annealed state using a Veeco Dimension 3100 atomic force microscope (AFM). Laser-damage testing For the LIDT measurements, we used P4 polished (rms roughness of about 3 Å, scratch-dig 20–10) fused silica substrates provided by WZW Optics AG. We chose a layer thickness of 3 quarter-wave @355 nm, which ensured an electric field maximum in the layer itself. We produced three sets of specimens; the first set was left in the as-prepared state ("amorphous"), the second one we annealed at 773 K for 1 h ("intermediate"), while the last set was annealed for 5 h at 773 K ("crystalline"). Laser-damage testing was performed using a Litron LPYG-450-100 diode-pumped laser at 355 nm [26]. The repetition rate was 100 Hz, while the pulse duration was 11.6 ns. The effective beam diameter, calculated according to ISO 21254-1:2011 [27], at the surface was 230 μm. The specimens were acclimatized for 24 h before the experiments in the laboratory (20 °C temperature, < 50% relative humidity). We carried out S-on-1 measurements with S = 5000. At least 150 sites were irradiated with appropriately chosen laser fluences.
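The Arrhenius evaluation of Eq. (4) reduces to a linear least-squares fit of ln tc against 1/(kBT). A minimal sketch in Python, using purely illustrative crystallization times rather than the measured values from Table 1:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical (temperature in K, crystallization time t_c in s) pairs,
# chosen only to illustrate the procedure of Eq. (4).
data = [(823.0, 9000.0), (873.0, 1500.0), (923.0, 320.0)]

# Linearize Eq. (4): ln(t_c) = ln(t_0) + Q / (k_B * T)
xs = [1.0 / (k_B * T) for T, _ in data]
ys = [math.log(tc) for _, tc in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
# Least-squares slope gives the activation energy Q in eV
Q = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
t0 = math.exp(ybar - Q * xbar)  # thickness-dependent prefactor t_0,d

print(f"Q = {Q:.2f} eV, t0 = {t0:.1e} s")
```

With only three temperatures the fit itself is nearly exact; the ±0.5 eV uncertainty quoted in the article reflects the scatter between layers and temperatures rather than the regression step.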
We performed the well-known but non-ISO-conformant test data reduction according to Jensen et al. [28] to improve the data quality in the relevant regime. We used a scattered light detector to identify the first damage. To verify the detected damage events, a visual inspection using differential interference contrast microscopy (Leica DM4000 M Led) was performed. According to the ISO norm 21254 [27], for each test site, there are three pieces of information: 1.) whether the site is damaged (1/0), 2.) the applied laser fluence Q, and 3.) the number of pulses applied before the damage onset Nmin. If no damage took place, this value is S. The derived data is the probability of damage by a given combination of Q ± ΔQ fluence interval and N pulse number. A site is considered damaged according to $$ \left\{\begin{array}{cc}1, & N>{N}_{min}\\ 0, & N\le {N}_{min}\end{array}\right. $$ The cumulative test method [28] assumes that in an experiment at any fluence Q' > Q, damage would also have been observed within Nmin pulses. There are different interpretations of the data. We chose the following two: 1.) the characteristic damage curve, i.e. the probability of damage if S pulses are applied at a given fluence, and 2.) the damage probability as a function of the fluence for a given number of pulses. The specimens used in the laser-damage testing were also checked in ex-situ measurements for their crystallinity using a PanAlytical Empyrean X-ray diffractometer. This system is also equipped with a Cu Kα source. We applied two methods: GIXRD at ω = 2° to identify the crystalline phases and X-ray reflectometry (XRR) to determine the density of the layers. The crystallization was first observed at 973 K for the 10 nm thick films, and the phase was identified to be cubic HfO2. After increasing the temperature to 1273 K, a transformation to the stable monoclinic phase took place.
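The Scherrer-size quoted in Table 2 follows from the width of the c-HfO2 GIXRD reflection. A sketch of the conversion; the peak position, FWHM, and shape factor below are assumed, illustrative numbers, not the measured ones:

```python
import math

wavelength_nm = 0.15418   # Cu K-alpha, as used in both diffractometers
two_theta_deg = 30.3      # assumed position of the cubic (111) peak, degrees
fwhm_deg = 1.2            # assumed peak width after instrumental correction, degrees
K = 0.9                   # Scherrer shape factor (~0.9 for roughly equiaxed grains)

theta = math.radians(two_theta_deg / 2.0)  # Bragg angle in radians
beta = math.radians(fwhm_deg)              # peak width in radians
# Scherrer equation: D = K * lambda / (beta * cos(theta))
size_nm = K * wavelength_nm / (beta * math.cos(theta))

print(f"Scherrer size = {size_nm:.1f} nm")
```

As discussed later in the article, a small Scherrer-size can reflect either genuinely small crystallites or strain broadening; the formula alone cannot distinguish the two.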
We carried out room temperature ex-situ measurements with slow scan speeds for a better signal-to-noise ratio. These scans are shown in Fig. 2. Ex-situ, room temperature XRD from the 10 nm thick HfO2 layer after peak annealing temperature of 923 K (amorphous), 973 K (cubic), and 1273 K (monoclinic) Since the intensity of the peaks for 10 nm thick layers was not high enough for reliable kinetics experiments, we studied the crystallization kinetics only of the 15–50 nm thick films. Figure 3a shows an example of such kinetics: the development of the cubic (111) peak for the 50 nm thick HfO2 film annealed at 823 K. a The development of the 111 peak of the cubic HfO2 phase during the 823 K annealing of the 50 nm thick layer. b The peak integral fitted according to Eq. 3. The inset shows the logarithmic representation In Fig. 3b, the measured and the fitted transformed fraction as a function of time are shown for the 50 nm HfO2 film. The inset gives the logarithmic plot in accordance with Eq. (3). The fitting function is in good agreement with the measured data on both the linear and the logarithmic scale. The exponent n' is 2.15 ± 0.1 for this experiment, and no deviation from this behavior is visible in the early stages. The transformation for the 50 nm film took place at a much lower temperature than for the 10 nm thick film. Table 1 summarizes the crystallization times for the different temperatures and the corresponding dimensionality of the growth n' for the different investigated film thicknesses. For some experiments, the crystallization was too fast. Consequently, only an upper limit of the crystallization time can be given in these cases. We evaluated the exponent n' only where the amount of data could provide a reliable fitting. Table 1 The evaluation of the experiments To gain information on the crystallinity state of the samples used in the laser damage test, GIXRD was performed on them.
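The fit of Fig. 3b can be reproduced in miniature: generate a transformed fraction from Eq. (1) with a known tc and exponent, then recover 1 + n' as the slope of the linearized form, Eq. (3). All numbers below are synthetic, chosen only to demonstrate the procedure.

```python
import math

# Synthetic JMAK data: t_c = 60 min, 1 + n' = 3 (two-dimensional growth
# with a constant nucleation rate), sampled every 6 min like the XRD scans.
t_c, exponent = 60.0, 3.0
times = [6.0 * k for k in range(1, 16)]
f = [1.0 - math.exp(-(t / t_c) ** exponent) for t in times]

# Linearized JMAK (Eq. 3): ln(-ln(1-f)) is linear in ln(t) with slope 1 + n'
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(1.0 - fi)) for fi in f]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)

print(f"fitted 1 + n' = {slope:.2f}")
```

Because the synthetic data lie exactly on the model, the fit returns the input exponent; with real peak integrals, the scatter of the early, low-f points dominates the uncertainty, which is why the linearized form is the more sensitive diagnostic for changes in dimensionality.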
Figure 4 shows the results of this GIXRD experiment for the 5 h annealed "crystalline" specimen. Fused silica is not an ideal substrate for such investigations since the broad peak centered around 32° could belong to either SiO2 or HfO2. Nevertheless, the much sharper c-HfO2 peak could be separated and characterized (Fig. 4b). Table 2 shows the results of the GIXRD and XRR experiments: the ratio of the peak area of the c-HfO2 to the amorphous peak, the Scherrer-size of the crystallites, and the layer density. GIXRD of the "crystalline" (5 h at 500 °C) specimen (a) and the deconvolution of the broad peak at ~ 30° (b) Table 2 Crystallization parameters of the specimens Figure 5 shows the results of the LIDT experiments using two interpretations. In Fig. 5a, the damage probability (N = 5000) is plotted as a function of the pulse energies for the different specimens. In Fig. 5b, the 5% LIDT is plotted against the number of pulses. a Damage probability curves for N = 5000 pulses as a function of the pulse energies for the different specimens. b The 5% LIDT as a function of the number of pulses (according to Jensen et al. [28]) for the amorphous, intermediate, and crystalline sample. The dashed lines are spline fits to guide the eyes It can clearly be seen from Fig. 5a that the "crystalline" specimen has the highest damage onset and the highest 0% LIDT, and its 100% LIDT is not worse than that of the "amorphous" specimen. Surprisingly, the "intermediate" specimen produced the worst result. A similar trend can be seen for the 5% LIDT curves. The "crystalline" specimen shows higher laser-damage resistance than the "amorphous" one for all but the lowest number of laser pulses. On the other hand, the "intermediate" specimens consistently show the worst 5% LIDT values. The differences are larger than the expected measurement scatter.
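The cumulative S-on-1 data reduction described in the methods can be made concrete: a site damaged within Nmin pulses counts as damaged for every query fluence at or above its test fluence, while a site that survived counts as survived for every query fluence at or below its test fluence. A minimal sketch with a hypothetical site list (the fluences and pulse counts are invented for illustration):

```python
S = 5000  # pulses per site in the S-on-1 test

# Hypothetical site list: (fluence in J/cm^2, N_min pulses before damage);
# N_min == S means the site survived the full pulse train.
sites = [(2.0, S), (2.5, S), (3.0, 4100), (3.0, S), (3.5, 900), (4.0, 350)]

def damage_probability(q, n_pulses):
    """Cumulative damage probability at fluence q after n_pulses pulses."""
    # Damage at a lower fluence implies damage at q as well.
    damaged = sum(1 for f, nmin in sites if f <= q and nmin < n_pulses)
    # Survival at a higher fluence implies survival at q as well.
    survived = sum(1 for f, nmin in sites if f >= q and nmin >= n_pulses)
    total = damaged + survived
    return damaged / total if total else float("nan")

print(damage_probability(3.0, 5000))
```

Scanning `damage_probability` over a fluence grid at fixed N = S reproduces the characteristic damage curve of Fig. 5a; fixing the fluence and scanning N gives the pulse-number dependence of Fig. 5b.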
As can be seen in Table 1, crystallization experiments performed on the 15, 20, and 50 nm thick films provided exponents of about 2, which indicates a two-dimensional growth. This shows that the distances between different nucleation centers are larger than the 50 nm film thickness. This is supported by the findings of the AFM measurements shown in Fig. 6. Figure 6a shows an AFM image of the 50 nm HfO2 coated sample before the thermal treatment, while Fig. 6b shows a specimen after it. The smooth surface (RMS roughness 0.26 nm) of the as-deposited sample changes to a structured surface with an RMS roughness of about 0.65 nm. The needlelike and triangular structures with a few hundreds of nanometers lateral dimensions and 5–10 nm height have a mean distance in the micrometer range. AFM picture of the 50 nm thick layer in the a as coated b annealed state In Fig. 7, all crystallization times are plotted as a function of the inverse temperature. The depicted lines are the fitted Arrhenius-type functions. Arrhenius-plot of the crystallization times for the 15 (black), 20 (dark gray), and 50 nm layers (gray); a common activation energy was used in the fitting procedure The average activation energy turned out to be 2.6 ± 0.5 eV, while the prefactors for the 50, 20, and 15 nm layers are respectively 5.8 × 10−14 s, 8.4 × 10−14 s, and 1.7 × 10−13 s. This ratio of 1:1.5:3 is roughly the inverse of the ratio of the layer thicknesses. Thus, in our experiment, the layer thickness dependence is probably a consequence of the crystallization's two-dimensional nature. For this situation, the nucleation rate per area and not per volume is decisive for the transformation rate. For a thinner film, even at an equal volumetric nucleation rate, there is a much-reduced number of nuclei per area. In general, crystallization is getting easier with an increasing layer thickness, even though our layers are much thicker than the 1.5–5 nm critical thickness reported by Nie et al. [20]. According to Eqs.
(1–3), the transformed volume fraction is dominated by the growth term even for two-dimensional growth modes. Short- and long-range atomic jumps, i.e. diffusion, constitute a thermally activated rate-controlling process which usually plays a crucial role in crystallite growth. Since hafnia has two sublattices, it is interesting to ask whether the cation or the anion jumps control the crystallization. The activation energy of 2.6 ± 0.5 eV is much higher than the reported experimental activation energy of 1 eV for oxygen self-diffusion in monoclinic hafnia [29]. Some theoretical reports [30, 31] assume even lower oxygen activation energies. We found no report on the self-diffusion of Hf in the different HfO2 phases. However, ZrO2 phases have almost identical lattice parameters as the respective HfO2 phases. Thus, ZrO2 is often used for comparisons. An experimental series by Swaroop et al. [32] comparing the diffusivity of different cations in yttria-stabilized tetragonal zirconia has shown that the activation energy of Hf is virtually identical to that of Zr (5.3 eV for lattice and 3.8 eV for grain boundary diffusion). Computational studies, e.g. Refs. [33, 34], on the Zr self-diffusion lead to an activation energy well above that of oxygen (2.5 eV or higher). During the sintering of ZrO2 particles, Suárez et al. found that the activation energy of Zr self-diffusion in ZrO2 is 2.3 eV. Furthermore, they also reported that it is the Zr self-diffusion which controls the sintering process [35]. Our activation energy of 2.6 ± 0.5 eV is comparable to that of the cation diffusion in ZrO2. Therefore, cation atomic jumps could play a more important role in the crystallization of HfO2 films. Thermal treatment to cure defects and/or reduce stress levels is a well-known method for improving the laser-damage threshold (e.g. [23, 24, 36, 37]).
However, typical annealing treatments stay well below the crystallization limits, as the presence of grain boundaries leads to an increase in the scattered light. Nevertheless, in the case of hafnia, it has also been shown that crystallization [24] is an efficient method of increasing the LIDT of single layers and of multilayer coatings. Our results for the long annealing times are in qualitative agreement with this finding. We did not observe the very prominent increase of the LIDT as reported in [24], which is not surprising as we annealed at a lower temperature of 500 °C instead of the 600 and 700 °C marked as the optimal "high-temperature" annealing by Abromavicius et al. Indeed, the observed Scherrer-size indicates that the HfO2 crystallites are either very small or highly strained [38]. Irrespective of the origin of the line-broadening, the layers did not reach a relaxed, coarse-grained state. We also found an intermediate decrement of the LIDT values, which is not reported in [24] and is worth further investigation. One explanation for the small Scherrer-size is high stress, which according to our own experience leads to low LIDT [39]. The other possibility is a high concentration of grain boundaries, which according to Tateno et al. [40] can also be a cause for reduced LIDT. Due to the higher temperatures, no comparable specimen was present in Ref. [24], which could explain why we observed a reduction of the LIDT. We analyzed the crystallization kinetics of IBS deposited HfO2 via in-situ XRD. We found that the crystallization for thin films up to 50 nm thickness follows the two-dimensional growth mode. The activation energy of the crystallization kinetics is 2.6 ± 0.5 eV, which indicates that the displacement of Hf atoms could play a rate-controlling role in the crystallization. We have shown that crystallization leads to a higher LIDT value. However, we also found that a decrement of the laser-damage resistance takes place at an intermediate stage.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Kaiser, N., Pulker, H.K.: Optical Interference Coatings. Springer Verlag, Berlin (2003) Ristau, D.: Laser-Induced Damage in Optical Materials. CRC Press, Boca Raton, FL (USA), 2015 ISBN: 978-1-4398-7217-8 Langdon, B., Patel, D., Krous, E., Rocca, J.J., Menoni, C.S., Tomasel, F., Kholi, S., McCurdy, P.R., Langston, P., Ogloza, A.: Influence of process conditions on the optical properties of HfO2/SiO2 coatings for high-power laser coatings. Proc. SPIE. 6720, 67200X (2007). https://doi.org/10.1117/12.753027 Rempe, G., Thompson, R.J., Kimble, H.J., Lalezari, R.: Measurement of ultralow losses in an optical interferometer. Opt. Lett. 17, 363 (1992). https://doi.org/10.1364/OL.17.000363 Alvisi, M., Di Giulio, M., Marrone, S.G., Perrone, M.R., Protopapa, M.L., Valentini, A., Vasanelli, L.: HfO2 films with high laser damage threshold. Thin Solid Films. 358, 250 (2000). https://doi.org/10.1016/S0040-6090(99)00690-2 Akhtar, S.M.J., Ristau, D., Ebert, J., Welling, H.: High damage threshold single and double layer antireflection (AR) coating for Nd:YAG Laser: conventional systems. J. Optoelectron. Adv. Mater. 9, 2391 (2007) Stolz, C.J., Thomas, M.D., Griffin, A.J.: BDS thin film damage competition. Proc. SPIE. 7132, 71320C (2008). https://doi.org/10.1117/12.806287 Miller, J., Barsotti, L., Vitale, S., Fritschel, P., Evans, M., Sigg, D.: Prospects for doubling the range of Advanced LIGO. Phys. Rev. D. 91, 062005 (2015). https://doi.org/10.1103/PhysRevD.91.062005 Steinlechner, J.: Development of mirror coatings for gravitational wave detectors. Philos. Trans. R. Soc. A. 376, 0282 (2018). https://doi.org/10.1098/rsta.2017.0282 Jiang, Y.Y., Ludlow, A.D., Lemke, N.D., Fox, R.W., Sherman, J.A., Ma, L.S., Oates, C.W.: Making optical atomic clocks more stable with 10-16-level laser stabilization. Nat. Photonics. 5, 158 (2011). 
https://doi.org/10.1038/nphoton.2010.313 Harry, G., Bodiya, T., DeSalvo, R.: Optical Coatings and Thermal Noise in Precision Measurement. Cambridge University Press, Cambridge (UK), (2012) ISBN: 9781107003385 Aso, Y., Michimura, Y., Somiya, K., Ando, M., Miyakawa, O., Sekiguchi, T., Tatsumi, D., Yamamoto, H.: Interferometer design of the KAGRA gravitational wave detector. Phys. Rev. D. 88, 043007 (2013). https://doi.org/10.1103/PhysRevD.88.043007 Cole, G.D., Zhang, W., Martin, M.J., Ye, J., Aspelmeyer, M.: Tenfold reduction of Brownian noise in high-reflectivity optical coatings. Nat. Photonics. 7, 644 (2013). https://doi.org/10.1038/nphoton.2013.174 Cole, G.D., Zhang, W., Bjork, B.J., Follman, D., Heu, P., Deutsch, C., Sonderhouse, L., Robinson, J., Franz, C., Alexandrovski, A., Notcutt, M., Heckl, O.H., Ye, J., Aspelmeyer, M.: High performance near and mid-infrared crystalline coatings. Optica. 3, 647 (2016). https://doi.org/10.1364/OPTICA.3.000647 Marchiò, M., Flaminio, R., Pinard, L., Forest, D., Deutsch, C., Heu, P., Follman, D., Cole, G.D.: Optical performance of large area crystalline coatings. Opt. Express. 26, 6114 (2018). https://doi.org/10.1364/OE.26.006114 He, G., Liu, M., Zhu, L.Q., Chang, M., Fang, Q., Zhang, L.D.: Effect of postdeposition annealing on the thermal stability and structural characteristics of sputtered HfO2 films on Si (100). Surf. Sci. 576, 67 (2005). https://doi.org/10.1016/j.susc.2004.11.042 Xie, Y., Ma, Z., Su, Y., Liu, Y., Liu, L., Zhao, H., Zhou, J., Zhang, Z., Li, J., Xie, E.: The influence of mixed phases on optical properties of HfO2 thin films prepared by thermal oxidation. J. Mater. Res. 26, 50 (2011). https://doi.org/10.1557/jmr.2010.61 Rammula, R., Aarik, J., Mänder, H., Ritslaid, P., Sammelselg, V.: Atomic layer deposition of HfO2: effect of structure development on growth rate, morphology and optical properties of thin films. Appl. Surf. Sci. 257, 1043 (2010).
https://doi.org/10.1016/j.apsusc.2010.07.105 Wei, Y., Xu, Q., Wang, Z., Liu, Z., Pan, F., Zhang, Q., Wang, J.: Growth properties and optical properties for HfO2 thin films deposited by atomic layer deposition. J. Alloys Cmpd. 738, 1422 (2018). https://doi.org/10.1016/j.jallcom.2017.11.222 Nie, X., Ma, F., Ma, D.: Thermodynamics and kinetic behaviors of thickness-dependent crystallization in high-k thin films deposited by atomic layer deposition. J. Vacuum Sci. Technol. A. 33, 01A140 (2015). https://doi.org/10.1116/1.4903946 Biswas, D., Singh, M.N., Sinha, A.K., Bhattacharyya, S., Chakraborty, S.: Effect of excess hafnium on HfO2 crystallization temperature and leakage current behavior of HfO2/Si metal-oxide semiconductor devices. J. Vacuum Sci. Technol. B. 34, 022201 (2016). https://doi.org/10.1116/1.4941247 Kim, D.H., Park, J.W., Chang, Y.M., Lim, D., Chung, H.: Electrical properties and structure of laser-spike-annealed hafnium oxide. Thin Solid Films. 518, 2812 (2010). https://doi.org/10.1016/j.tsf.2009.08.039 Liu, H., Jiang, Y., Wang, L., Li, S., Yang, X., Jiang, C., Liu, D., Ji, Y., Zhang, F., Chen, D.: Effect of heat treatment on properties of HfO2 film deposited by ion beam sputtering. Opt. Mater. 73, 95 (2017). https://doi.org/10.1016/j.optmat.2017.07.048 Abromavicius, G., Kicas, S., Buzelis, R.: High temperature annealing effects on spectral, microstructural and laser damage resistance properties of sputtered HfO2 and HfO2-SiO2 mixture-based UV mirrors. Opt. Mater. 95, 109245 (2019). https://doi.org/10.1016/j.optmat.2019.109245 Avrami, M.: Kinetics of phase change. I General theory. J. Chem. Phys. 7, 1103 (1939) Gischkat, T., Schachtler, D., Balogh-Michels, Z., Botha, R., Mocker, A., Eiermann, B., Günther, S.: Influence of ultra-sonic frequency during substrate cleaning on the laser resistance of antireflection coatings. In: Proc. SPIE 11173, Laser-Induced Damage in Optical Materials, p. 1117317 (2019).
https://doi.org/10.1117/12.2536442 ISO 21254-1:2011. International Organization for Standardization, Geneva. https://www.iso.org/standard/43001.html. Accessed 12 Nov 2018. Jensen, L., Mrohs, M., Gyamfi, M., Mäderbach, H., Ristau, D.: Higher certainty of the laser-induced damage threshold test with a redistributing data treatment. Rev. Sci. Instrum. 86, 103106 (2015). https://doi.org/10.1063/1.4932617 Vos, M., Grande, P.L., Venkatachalam, D.K., Nandi, S.K., Elliman, R.G.: Oxygen self-diffusion in HfO2 studied by electron spectroscopy. Phys. Rev. Lett. 112, 175901 (2014). https://doi.org/10.1103/PhysRevLett.112.175901 Capron, N.: Migration of oxygen vacancy in HfO2 and across the HfO2/SiO2 interface: a first principle investigation. Appl. Phys. Lett. 91, 192905 (2007). https://doi.org/10.1063/1.2807282 Shen, W., Kumari, N., Gibson, G., Jeon, Y., Henze, D., Silverthorn, S., Bash, C., Kumar, S.: Effect of annealing on structural changes and oxygen diffusion in amorphous HfO2 using classical molecular dynamics. J. Appl. Phys. 123, 085113 (2018). https://doi.org/10.1063/1.5009439 Swaroop, S., Kilo, M., Argirusis, C., Borchardt, G., Chokshi, A.H.: Lattice and grain boundary diffusion of cations in 3YTZ analyzed using SIMS. Acta Mater. 53, 4975 (2005). https://doi.org/10.1016/j.actamat.2005.05.031 González-Romero, R.L., Meléndez, J.J., Gómez-García, D., Cumbrera, F.L., Domínguez-Rodríguez, A., Wakai, F.: Cation diffusion in yttria-zirconia by molecular dynamics. Solid State Ion. 204-205, 1 (2011). https://doi.org/10.1016/j.ssi.2011.10.006 Dong, Y., Qi, L., Li, J., Chen, I.W.: A computational study of yttria-stabilized zirconia: II. Cation diffusion. Acta Mater. 126, 438 (2017) Suárez, G., Garrido, L.B., Aglietti, E.F.: Sintering kinetics of 8Y–cubic zirconia: Cation diffusion coefficient. Mater. Chem. Phys. 110, 370 (2008).
An efficient MIMO scheme with signal space diversity for future mobile communications Zhanji Wu1 & Xiang Gao1 An efficient wireless transmission scheme with signal space diversity (SSD) is proposed to improve the performance of multiple-input multiple-output (MIMO) systems in fading channels. By introducing rotated modulation and a space-time component interleaver, the proposed scheme jointly optimizes channel coding, modulation, and MIMO, and can improve the link reliability and energy efficiency. An optimum spatial component interleaver is proposed to maximize the MIMO achievable rate. Based on the average mutual information (AMI)-maximization criterion, the optimal rotation angles of real-valued signals and complex-valued QAM signals are investigated for the MIMO scheme. For the iterative demapping and decoding (ID) scheme, a simple genetic algorithm (GA) to search binary convolutional codes (BCC) is also put forward to match the rotated modulation. Simulation results show that the optimized BCC-coded MIMO scheme with SSD-ID outperforms the turbo-coded MIMO scheme with bit-interleaved coded modulation (BICM)-ID by a 1.4 dB signal-to-noise ratio (SNR) gain, while the new scheme has much lower complexity. Hence, the proposed scheme is simple, efficient, and promising for future wireless communication systems. Wireless communications have made great progress in recent years. By introducing more advanced technology, 5G will provide higher spectral efficiency, more spectrum resources, and higher reliability to meet the growing demand for mobile traffic [1]. Bit-interleaved coded modulation (BICM) is a bandwidth-efficient coded modulation scheme which increases the time diversity in fading channels [2,3]. In its iterative version, BICM with iterative demapping and decoding (BICM-ID), extrinsic information is exchanged between the channel decoder and the soft-in-soft-out demapper, in a manner analogous to the decoding of serially concatenated turbo codes.
The multiple-input multiple-output (MIMO) scheme extends coding theory to the space domain, so it is also named space-time coding (STC) [4]. Foschini proposed a layered space-time (LST) architecture to process multidimensional signals in the space domain [5]. BICM-LST is a conventional spectrally efficient spatial multiplexing technology for dealing with MIMO fading channels, and BICM with threaded layered space-time (TLST) coding, which uses a cyclic-shift spiral spatial interleaver, is regarded as the most efficient variant, because the cyclic-shift spatial interleaver introduces effective space diversity for the codeword on each layer [6]. In general, BICM-LST can be viewed as the serial concatenation of channel coding, modulation, and spatial layered multiplexing. Because BICM-LST exhibits robust diversity performance on fading channels, it is widely deployed in wireless communication standards. As for bandwidth-efficient quadrature amplitude modulation (QAM), uncoded rotated multidimensional modulation schemes over independent Rayleigh fading channels were studied in [7] for the single-input single-output (SISO) case. Different from the other well-known diversity types (time, frequency, code, and space), such schemes have an intrinsic modulation diversity, which is named signal space diversity (SSD). Through the combination of constellation rotation and component interleaving, these schemes can achieve very high modulation diversity, and their error performance over fading channels can approach that on additive white Gaussian noise (AWGN) channels. SSD schemes for SISO systems have been extensively researched. In [8], SSD is introduced into BICM by means of modifications to the QAM constellation mapper and demapper so as to improve the BICM performance of QAM constellations for broadcasting applications. In [9], an LDPC-coded SSD scheme for multi-level modulation was presented. N.F. Kiyani and J.H.
Weber studied the rotated-MPSK SISO BICM-ID system [10,12], which focused on two-dimensional multiphase shift keying (MPSK) schemes. In [11], a performance analysis of BICM-ID with SSD in fading channels is presented. In [13], an extension of BICM-SSD schemes with a non-binary code was proposed. We also proposed coded orthogonal frequency division multiplexing (OFDM) systems with SSD in [14,15]. In [16], schemes combining SSD with SISO-coded BICM and BICM-ID systems were investigated. That work provided a new criterion for determining the optimal rotation angle by maximizing the average mutual information (AMI). For the optimization of the BICM-ID system, it proved that SSD can mitigate the different-slope problem of the demapper's extrinsic information transfer (EXIT) curve under different channels. However, finding well-matched channel codes for a given labeling in a BICM-ID system with SSD is still a big challenge. The combination of signal rotation and space-time coding in MIMO systems can effectively improve the diversity gain. In order to achieve full diversity, quasi-orthogonal space-time block codes (QOSTBC) in which the constellations of half of the complex symbols are rotated have been widely discussed in [17-20]. Some specific optimal rotation angles and the corresponding optimization criteria for QAM and phase shift keying (PSK) constellations are provided there. A rotation-based method that aims at maximizing the minimum distance in the space-time constellation is proposed in [17]. The proposed scheme shows a good improvement over its non-rotated counterparts. In [18], the authors considered the design of rotated QOSTBC for the MISO system. The code designs are based mainly on the rank and determinant criteria, and the optimal rotation angle π/6 provides full diversity and the optimal coding gain.
In [19], the authors proposed to design the signal constellations properly so that the resulting quasi-orthogonal STBCs are guaranteed to achieve full diversity. The optimal rotation angles are determined by maximizing the diversity product. A novel method to exactly derive the coding gain of QSTBC as a function of the rotation angle and the minimum Euclidean distance of two-dimensional constellations is proposed in [20]. A coded MIMO scheme for block-fading channels was proposed in [21], which consists of a channel code and a space-time code. The space-time code is designed based on the SSD technique, which allows full spatial multiplexing MIMO transmission and achieves full space diversity. In [22], the uncoded SSD scheme was extended to V-BLAST MIMO systems in order to achieve the maximum diversity gain without additional power or bandwidth consumption. An improved turbo-coded SSD scheme was proposed for the MIMO-OFDM BICM system in [23], where linear minimum mean square error (LMMSE) equalization is utilized for the non-ID MIMO detection. In general, research on the SSD technique in coded MIMO systems is still at an early stage, and many problems remain open. For instance, the optimal rotation angles in current research mainly depend on the maximum product distance introduced in [7]. Unfortunately, this criterion is only valid for the SISO system in the high signal-to-noise ratio (SNR) region. As for the coded MIMO scheme, when powerful forward error-correction codes (FECs) are considered, the actual SNR can be quite low. Hence, the angle values suited to the uncoded SISO system do not lead to the best error performance for the coded modulation MIMO scheme. Moreover, current research mainly focuses on local optimizations. For example, most proposed MIMO systems with SSD are only an extension of the SISO-SSD system, and all are based on the conventional non-precoding transmitter.
The channel coding, QAM modulation, and STC are independent of each other, forming just a straightforward serial concatenation. Hence, the performance of BICM-LST is still rather far from the MIMO fading channel capacity. For example, a near-capacity BICM-LST scheme was proposed in [24], which allows iterative processing of list sphere detection (LSD) and turbo decoding, but simulation results indicate that the gaps to the MIMO capacity are still more than 2 dB. As each of these individual optimizations has matured, it is time to optimize these key technical elements jointly so as to improve the overall performance. An improved coded MIMO system based on SSD is proposed for the joint optimization of the constellation rotation angle, the spatial component interleaver, and the matching of channel coding and labeling, which is named joint coding and modulation diversity (JCMD), where the terminology 'coding' refers to both the channel coding and the space-time coding. Firstly, in order to maximize the MIMO achievable rate, an optimum spatial component interleaver is proposed. Secondly, based on the AMI-maximization criterion, the optimal rotation angles of real-valued signals and complex-valued QAM signals are investigated for MIMO schemes; they differ from those of the SISO scheme. Thirdly, for the JCMD-ID scheme, a simple genetic algorithm (GA) for searching binary convolutional codes (BCC) is put forward to match the rotated QAM modulation. Simulation results show that the optimized BCC-coded JCMD-ID MIMO scheme outperforms the turbo-coded BICM-ID MIMO scheme in [24] by a 1.4 dB SNR gain, while the new scheme has much lower complexity. Throughout this paper, we use bold letters to represent vectors or matrices. \((\cdot)^{T}\) and \((\cdot)^{H}\) represent transposition and conjugate transposition, respectively. SNR \(= E_{s}/N_{0}\), where \(E_{s}\) denotes the average symbol energy per receive antenna and \(N_{0} = 2\sigma^{2}\) denotes the variance of the complex Gaussian noise.
The paper is organized as follows. The improved JCMD MIMO scheme is proposed in Section 2. A theoretical analysis of the achievable rate of JCMD-MIMO for rotated real-valued signals is given in Section 3. Based on the AMI analysis, the optimal rotation angles for the JCMD and JCMD-ID MIMO systems are presented in Section 4. Section 5 introduces an outer convolutional code search method for the optimization of the JCMD-ID system with the optimal rotation angle. Simulation results on fast fading channels are presented in Section 6. Concluding remarks are offered in Section 7. Based on the BICM-TLST scheme, the system model of an \(N_{L}\)-layer MIMO-JCMD-ID system is shown in Figure 1. Perfect channel state information (CSI) is assumed to be known at both the transmitter and the receiver. In Figure 1, the iterative feedback processing is depicted in dashed lines. Without loss of generality, a rank-\(L\) \(N_{R} \times N_{T}\) MIMO system with \(L\) nonzero eigenvalues is assumed, where \(N_{R}\) and \(N_{T}\) are the numbers of receive and transmit antennas, respectively, and \(N_{L} \le L \le \min\{N_{R}, N_{T}\}\). In the transmitter, \(K\) information bits \(\mathbf{B} = (b_{1}, b_{2}, \ldots, b_{K})^{T}\) are encoded and interleaved to yield the coded bit sequence \(\mathbf{C} = (c_{1}, c_{2}, \ldots, c_{N})^{T}\). Afterwards, \(m\)-tuples of coded bits are mapped to complex symbols \(x_{k} = x_{k}(I) + j \cdot x_{k}(Q)\), each chosen from a \(2^{m}\)-ary rotated QAM constellation set \(\chi = \left \{{\hat x}_{1}, {\hat x}_{2},\ldots,{\hat x}_{2^{m}} \right \}\) according to some optimal rotation angle. Each symbol \(x_{k}\) has one Q component \(x_{k}(Q)\) and one I component \(x_{k}(I)\). The rotated symbol sequence is first mapped onto the \(N_{L}\) layers in a round-robin manner.
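As a concrete illustration of the mapper stage (not the authors' code), the following sketch indexes \(m\)-tuples of coded bits into a precomputed rotated-QAM look-up table and then distributes the symbols round-robin over the layers; the QPSK alphabet, labeling, and rotation angle are placeholder choices:

```python
import numpy as np

m, N_L = 2, 2                                # bits per symbol, number of layers
theta = np.deg2rad(27.0)                     # illustrative rotation angle only
# Rotated look-up table: one constellation point per m-bit index
table = np.exp(1j * theta) * np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

bits = np.array([0, 0, 0, 1, 1, 1, 1, 0])    # a short coded bit sequence C
# Group bits into m-tuples and convert each tuple to an integer index
idx = bits.reshape(-1, m) @ (1 << np.arange(m - 1, -1, -1))
symbols = table[idx]                         # look-up table mapping (no extra cost)
# Round-robin layer mapping: symbol k goes to layer k mod N_L
layers = symbols.reshape(-1, N_L).T
```

The look-up table makes the rotation free at run time, which is the point made later in the complexity discussion.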
Afterwards, the conventional cyclic-shift spatial interleaver of TLST is applied to the symbols on the \(N_{L}\) layers to exploit both space and time diversity, because the cyclic-shift spatial interleaver allows the codewords to be distributed over all layers: $$ {w}_{k}^{l} = {x}_{k}^{i}, \ \ \ \ \ l = (i + k -2)\bmod {N_{L}} + 1, $$ ((1)) where \({w}_{k}^{l}\) denotes the kth (\(k\in N^{+}\)) symbol at the lth (\(l\in[1,N_{L}]\)) layer after the spatial interleaver, and \({x}_{k}^{i}\) denotes the kth symbol at the ith (\(i\in[1,N_{L}]\)) layer before the interleaver. Then, a spatial Q-component interleaver is applied to the Q components of the \(N_{L}\) symbols at the same time instant: $$ {z}_{k}^{n}(Q) = {w}_{k}^{l}(Q), \ \ \ \ \ n = {N_{L}} - l + 1, $$ ((2)) where \({z}_{k}^{n}\) denotes the kth symbol at the nth (\(n\in[1,N_{L}]\)) layer after the spatial Q interleaver. Thus, the I components keep the same layer order as before, and only the Q components change layer order. The spatial Q-component interleaver is used to make the fading of the I component and that of the Q component as uncorrelated as possible in the space domain. Thus, the modulation diversity of the proposed scheme is further extended to the spatial dimension. Actually, the Q-component spatial interleaver can differ from the reverse interleaver in Equation 2. For example, it can be a cyclic-shift interleaver: $$ {z}_{k}^{n}(Q) = {w}_{k}^{l}(Q), \ \ \ \ \ n = (l\bmod {N_{L}}) + 1. $$ ((3)) If perfect CSI is known, we prove through the later theoretical analysis and computer simulations that the reverse interleaver is better than the other interleavers. If CSI is unknown at the transmitter, the cyclic-shift interleaver in Equation 3 can be used.
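The index mappings of Equations 1 and 2 can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; the array layout and 0-based internal indexing are implementation choices):

```python
import numpy as np

def cyclic_shift_spatial_interleave(x):
    """Eq. (1): w_k^l = x_k^i with l = (i + k - 2) mod N_L + 1 (1-based indices)."""
    N_L, K = x.shape
    w = np.empty_like(x)
    for k in range(1, K + 1):           # 1-based time index
        for i in range(1, N_L + 1):     # 1-based layer index
            l = (i + k - 2) % N_L + 1
            w[l - 1, k - 1] = x[i - 1, k - 1]
    return w

def reverse_q_interleave(w):
    """Eq. (2): the Q component moves from layer l to layer n = N_L - l + 1;
    the I components stay on their original layers."""
    z = w.real.copy().astype(complex)   # I components unchanged
    z += 1j * w.imag[::-1, :]           # Q components reversed across layers
    return z
```

For example, with \(N_L = 2\) the Q component of layer 1 ends up on layer 2 and vice versa, while Equation 1 cyclically spreads consecutive symbols of one codeword over both layers.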
In order to make the fading of the I component and that of the Q component as uncorrelated as possible in the time domain, after the spatial Q interleaving, the Q components of the mapped symbols in each layer are interleaved through a time-domain pseudo S-random interleaver to reconstruct a new symbol vector \({{\mathbf {s}}_{k}} = \left [s_{k}^{1} \cdots {{s}_{k}^{{N_{L}}}}\right ]^{T}\), where \({s}_{k}^{l}\) denotes the kth symbol at the lth layer after the component interleaving. Afterwards, the symbols are mapped onto the \(N_{T}\) transmit antennas via spatial precoding and then transmitted. The ideal precoding matrix can be obtained by singular value decomposition (SVD), which divides the MIMO channel into parallel independent SISO channels. According to the SVD criterion, the \(N_{R} \times N_{T}\) MIMO channel matrix \(\mathbf{H}_{k}\) can be decomposed as $$ \mathbf{H}_{k} = \mathbf{U}_{k} \mathbf{D}_{k} {{\mathbf{V}}_{k}}^{H}, $$ ((4)) where the \(N_{R} \times N_{R}\) matrix \(\mathbf{U}_{k}\) and the \(N_{T} \times N_{T}\) matrix \(\mathbf{V}_{k}\) are unitary matrices, and \(\mathbf{D}_{k}\) is an \(N_{R} \times N_{T}\) non-negative diagonal matrix with \(N_{L}\) nonzero singular values in descending order, \(\sqrt {{\rho _{1}}} \ge \sqrt {{\rho _{2}}} \ge \ldots \ge \sqrt {{\rho _{{N_{L}}}}} > 0\), where \(\rho_{i}\) is the ith largest eigenvalue of \(\mathbf{H}_{k} \mathbf{H}_{k}^{H}\). Thus, the SVD-based linear precoding is performed as follows: $$ {{\mathbf{p}}_{k}} = {{\mathbf{V}}_{k}} \cdot {{\mathbf{s}}_{k}}. $$ ((5)) In the receiver, the corresponding detection matrix is \({\mathbf {U}}_{k}^{H}\). The precoding and detection process can be expressed as the linear transformations shown in Equation 6.
$$ \begin{aligned} {{\mathbf{r}}_{k}} &= {\mathbf{U}}_{k}^{H} \cdot \left({{{\mathbf{H}}_{k}} \cdot {{\mathbf{p}}_{k}} + {{\mathbf{n}}_{k}}} \right) \\ &= {\mathbf{U}}_{k}^{H} \cdot {{\mathbf{H}}_{k}} \cdot {{\mathbf{V}}_{k}} \cdot {{\mathbf{s}}_{k}} + {\mathbf{U}}_{k}^{H} \cdot {{\mathbf{n}}_{k}}\\ &= {{\mathbf{D}}_{k}} \cdot {{\mathbf{s}}_{k}} + {\mathbf{n}}^{\prime}_{k}, \end{aligned} $$ ((6)) where \({{\mathbf {r}}_{k}} = {\left [{{r}_{k}^{1}} \cdots {{r}_{k}^{{N_{R}}}}\right ]^{T}}\) denotes the received symbol vector, and \(\mathbf{n}_{k}\) and \({\mathbf {n}}^{\prime }_{k}\) are column vectors of \(N_{R}\) complex Gaussian random variables with zero mean and variance \({\sigma ^{2}} = \frac {{{N_{0}}}}{2}\). Thus, due to the SVD, the MIMO channel can be viewed as \(N_{L}\) parallel fading channels, and for the lth layer, the kth received symbol, which corresponds to \({s_{k}^{l}}\) in the transmitter, can be expressed as $$ {r}_{k}^{l} = \sqrt {{\rho_{l}}} \cdot {s}_{k}^{l} + {n'}_{k}^{l}. $$ ((7)) After the corresponding Q-component de-interleaving in the time domain for each layer and the spatial Q-component de-interleaving, the kth received symbol on the lth layer is reconstructed as \({y_{k}^{l}}\), which corresponds to \({x_{k}^{l}}\) in the transmitter. For \({y_{k}^{l}}\), the fading coefficients of the I component, \({\lambda_{k}^{l}}(I)\), and of the Q component, \({\lambda_{k}^{l}}(Q)\), are different, which can be expressed as $$ \begin{aligned} {y_{k}^{l}} (I) &= {\lambda_{k}^{l}} (I){x_{k}^{l}} (I) + {n_{k}^{l}} (I) \\ {y_{k}^{l}} (Q) &= {\lambda_{k}^{l}} (Q){x_{k}^{l}} (Q) + {n_{k}^{l}} (Q). \\ \end{aligned} $$ ((8)) Assuming that the Q interleaver is long enough, each coordinate of a symbol after the Q de-interleaving can be regarded as suffering from independent fading coefficients.
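The diagonalization of Equations 4 to 7 can be verified with a minimal numerical sketch (assuming NumPy; the channel statistics and dimensions are arbitrary choices for the check, not the simulation setup of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N_R, N_T = 4, 4

# Random i.i.d. complex Gaussian channel matrix H_k
H = (rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# Eq. (4): H = U D V^H; numpy returns the singular values sqrt(rho_l) descending
U, sv, Vh = np.linalg.svd(H)

# One symbol per layer (values arbitrary for this noise-free check)
s = rng.standard_normal(N_T) + 1j * rng.standard_normal(N_T)

p = Vh.conj().T @ s          # Eq. (5): precoding p = V s
r = U.conj().T @ (H @ p)     # Eq. (6), noise-free part: r = U^H H V s = D s

# Eq. (7): each layer reduces to a scalar sub-channel r_l = sqrt(rho_l) * s_l
assert np.allclose(r, sv * s)
```

The detector \(\mathbf{U}_k^H\) only rotates the noise, so the noise statistics on each sub-channel are unchanged.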
The equivalent coded modulation (CM) channel for the JCMD system before the soft demapper can be modeled as $$ {Y_{k}^{l}} (\eta) = {\Lambda_{k}^{l}} (\eta){X_{k}^{l}} (\eta) + {N_{k}^{l}} (\eta),\eta \in \{ I,Q\},l \in\,[\!1,N_{L}], $$ ((9)) where \({N_{k}^{l}} (I)\) and \({N_{k}^{l}} (Q)\) are independent and identically distributed (i.i.d.) Gaussian noise random variables with zero mean and variance \(\sigma ^{2} = \frac {{N_{0} }}{2}\). For MIMO fading channels, \({\Lambda _{k}^{l}} (I)\) and \({\Lambda _{k}^{l}} (Q)\) are singular values of the corresponding sub-channels. This is in clear contrast to the SISO scheme, where the fading coefficients are Rayleigh distributed. This means that the modulation diversity of the proposed scheme is further extended to the spatial dimension. By denoting \({\mathbf {X}} = \left [ {{\mathbf {X}}_{1},\ldots,{\mathbf {X}}_{N_{L}}} \right ]^{T}\), \({\mathbf {Y}} = \left [ {{\mathbf {Y}}_{1},\ldots,{\mathbf {Y}}_{N_{L}}} \right ]^{T}, {\mathbf {N}} = \left [ {{\mathbf {N}}_{1},\ldots,{\mathbf {N}}_{N_{L}}} \right ]^{T}\), and \({\mathbf {\Lambda }} = {\text {diag}}\left ({{\mathbf {\Lambda }}_{1},\ldots,{\mathbf {\Lambda }}_{N_{L}}} \right)\) representing a \((2N_{L} \times 2N_{L})\) diagonal matrix, the channel model in Equation 9 can be written in matrix form as \(\mathbf{Y} = \mathbf{\Lambda}\mathbf{X} + \mathbf{N}\), where \({\mathbf {X}}_{l} = \left [ {{X_{k}^{l}} (I),{X_{k}^{l}} (Q)} \right ]\), \({\mathbf {Y}}_{l} = \left [ {{Y_{k}^{l}} (I),{Y_{k}^{l}} (Q)} \right ]\), \({\mathbf {N}}_{l} = \left [ {{N_{k}^{l}} (I),{N_{k}^{l}} (Q)} \right ]\), and \({\mathbf {\Lambda }}_{l} = {\text {diag}}\left ({{\Lambda _{k}^{l}} (I),{\Lambda _{k}^{l}} (Q)} \right)\). After that, the symbols on the multiple layers are reassembled as a symbol stream \(\mathbf{y} = [y_{1}, y_{2}, \cdots]^{T}\) through the layer demapping. A serial concatenation of a soft-in-soft-out rotated symbol demapper and a channel decoder is employed to approach the maximum likelihood (ML) receiver performance.
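Why rotation pays off under the per-component channel of Equations 8 and 9 can be illustrated with a short sketch (the rotation angle and the fading gains below are placeholders for illustration, not the optimized values of Section 4): after rotation, each single coordinate of the constellation already distinguishes all symbols, so a deep fade on one component does not erase the symbol.

```python
import numpy as np

theta = np.deg2rad(27.0)                         # illustrative angle only
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
rotated = qpsk * np.exp(1j * theta)              # rotated constellation

# Without rotation the I coordinate takes only 2 distinct values; with
# rotation all 4 symbols are separable from the I coordinate alone.
assert np.unique(np.round(qpsk.real, 6)).size == 2
assert np.unique(np.round(rotated.real, 6)).size == 4

# Per-component channel of Eq. (8): independent fading gains on I and Q
rng = np.random.default_rng(1)
x = rng.choice(rotated, size=8)
lam_I = np.abs(rng.standard_normal(8))           # stand-in gains; in JCMD these
lam_Q = np.abs(rng.standard_normal(8))           # are sub-channel singular values
sigma = 0.1
n = sigma * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
y = lam_I * x.real + 1j * lam_Q * x.imag + n     # component-wise fading
```

Because the component interleavers decorrelate \(\Lambda(I)\) and \(\Lambda(Q)\), a symbol is lost only if both components fade simultaneously.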
For JCMD-ID, the iterative demapping and decoding scheme is an application of the turbo decoder principle. The soft demapper calculates the extrinsic value E(c i,k ) of bit c i,k , which corresponds to the ith (i=1,2,…,m) bit of the received symbol y k , as follows: $$ E(c_{i,k}) = L(c_{i,k}) - A(c_{i,k}), $$ (10) where \(L(c_{i,k}) \buildrel \Delta \over = \ln \frac {{P\left ({c_{i,k} = 0|{y}_{k}} \right)}}{{P\left ({c_{i,k} = 1|{y}_{k}} \right)}}\) is the log-likelihood ratio (LLR) and \(A(c_{i,k}) \buildrel \Delta \over = \ln \frac {{P\left ({c_{i,k} = 0} \right)}}{{P\left ({c_{i,k} = 1} \right)}}\) is the a priori L-value. Based on Bayes' theorem, we can write $$ {} E({c_{i,k}})= \ln \frac{{\sum\limits_{{{\hat x}} \in \chi, \atop {{\hat c}_{i}} = 0} {P({{{y}}_{k}}|{{{x}}_{k}} = {{\hat x}})\exp \left\{ {\sum\limits_{j = 1, \atop j \ne i }^{m} {\left[{{{\left({- 1} \right)}^{{{\hat c}_{j}}}}\frac{{A\left({{c_{j,k}}} \right)}}{2}} \right]}} \right\}}}} {{\sum\limits_{{{\hat x}} \in \chi, \atop {{\hat c}_{i}} = 1} {P({{{y}}_{k}}|{{{x}}_{k}} = {{\hat x}})\exp \left\{ {\sum\limits_{j = 1, \atop j \ne i }^{m} {\left[ {{{\left({ - 1} \right)}^{{{\hat c}_{j}}}}\frac{{A\left({{c_{j,k}}} \right)}}{2}} \right]}} \right\}}}}, $$ where \(\hat c_{i}\) is the ith bit corresponding to symbol \({{\hat x}}\). For the fading channel, the conditional probability is given by $$ {\small{\begin{aligned} {} P({{y}}_{k} |{{x}}_{k} \,=\, {{\hat x}}) \,=\, \frac{1}{{2\pi\sigma^{2} }}\exp \left({ - \frac{{\left({{y_{k}^{I}} - {\lambda_{k}^{I}} {{\hat x}}^{I}} \right)^{2} \,+\, \left({{y_{k}^{Q}} \,-\, {\lambda_{k}^{Q}} {{\hat x}}^{Q}} \right)^{2} }}{{2\sigma^{2} }}} \right). 
\end{aligned}}} $$ To reduce the complexity, by applying the Max-Log-MAP algorithm, Equation 11 can be simplified as $$\begin{array}{*{20}l}\begin{array}{c} E({c_{i,k}}) = \mathop {\max}\limits_{{{\hat x}} \in \chi, \hfill \atop {{\hat c}_{i,k}} = 0} \left\{ {{\Omega_{k}}\left({{{\hat x}}} \right) - \sum\limits_{j = 1, \hfill \atop j \ne i \hfill}^{m} {\left[ {{{\left({ - 1} \right)}^{{{\hat c}_{j,k}}}}\frac{{A\left({{c_{j,k}}} \right)}}{2}} \right]}} \right\} - \\ \mathop {\max }\limits_{{{\hat x}} \in \chi, \hfill \atop {{\hat c}_{i,k}} = 1 \hfill} \left\{ {{\Omega_{k}}\left({{{\hat x}}} \right) - \sum\limits_{j = 1, \hfill \atop j \ne i \hfill}^{m} {\left[ {{{\left({ - 1} \right)}^{{{\hat c}_{j,k}}}}\frac{{A\left({{c_{j,k}}} \right)}}{2}} \right]}} \right\}, \end{array}\end{array} $$ where \(\Omega _{k} \left ({{{\hat x}}} \right) = - \frac {{\left ({{y_{k}^{I}} - {\lambda _{k}^{I}} {{\hat x}}^{I}} \right)^{2} + \left ({{y_{k}^{Q}} - {\lambda _{k}^{Q}} {{\hat x}}^{Q}} \right)^{2} }}{{2\sigma ^{2} }}\). For the JCMD system without iterative demapping and decoding, A(c i,k )=0. Finally, the decoder can utilize the extrinsic values to decode the information bits. In the transmitter, compared with conventional BICM, the JCMD scheme introduces extra constellation rotation and Q-component interleavers. Constellation rotation does not increase the complexity, because the rotated symbol mapping can be implemented through look-up table operations, just as for conventional modulation without rotation. The Q-component interleavers can also be implemented by low-complexity index-based look-up table operations. In the receiver, the soft rotated demapping operation of the JCMD system is the same as that of the BICM system, which is shown in Equation 11. The Q-component de-interleavers can likewise be implemented by simple reverse index-based look-up table operations.
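The Max-Log-MAP simplification above can be sketched compactly in numpy. The helper below is an illustrative implementation (the constellation, labeling, and parameter values in the demo are hypothetical), computing one symbol's extrinsic LLRs with independent I/Q fading:

```python
import numpy as np

def maxlog_extrinsic(yI, yQ, lamI, lamQ, const, labels, A, sigma2):
    """Max-Log-MAP extrinsic LLRs for one received symbol (sketch of the
    simplified demapper above).
    const  : complex candidate symbols x_hat
    labels : (num_symbols, m) bit labels c_hat in {0, 1}
    A      : a priori L-values, length m (all zeros for non-iterative JCMD)."""
    m = labels.shape[1]
    # Omega_k(x_hat): per-component Euclidean metric with fading lambda^I, lambda^Q
    omega = -((yI - lamI * const.real) ** 2 + (yQ - lamQ * const.imag) ** 2) / (2 * sigma2)
    # a priori terms (-1)^c_j * A_j / 2 for every candidate symbol and bit
    prior = ((-1.0) ** labels) * (A / 2.0)          # shape (num_symbols, m)
    E = np.empty(m)
    for i in range(m):
        # subtract sum over j != i of the a priori terms
        metric = omega - (prior.sum(axis=1) - prior[:, i])
        E[i] = metric[labels[:, i] == 0].max() - metric[labels[:, i] == 1].max()
    return E

# Demo: Gray-labeled QPSK (hypothetical normalization), bits (0, 0) sent,
# noiseless reception with unit fading -- both LLRs should favour bit 0.
const = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
x = const[0]
E = maxlog_extrinsic(x.real, x.imag, 1.0, 1.0, const, labels, np.zeros(2), 0.1)
assert np.all(E > 0)
```

Positive LLRs here indicate bit 0, matching the transmitted label; feeding back nonzero A would give the iterative JCMD-ID behavior.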
Theoretical analysis of the achievable rate for rotated real-valued signals

First, since real-valued signals are the elementary case, we analyze real-valued transmit signals in MIMO systems. Lemma 1. For any constant rank-2 MIMO fading channel with real-valued transmit signals, the constant achievable rate of JCMD-MIMO is not less than that of BICM-MIMO with conventional uniform power allocation. The two rates are equal if and only if the eigenvalues are identical; otherwise, the former is strictly greater. A simple rank-2 MIMO case with two eigenvalues ρ 1 and ρ 2 is illustrated in Figure 2. Due to the well-known SVD, BICM-MIMO can be viewed as two parallel fading channels with eigenvalues ρ 1 and ρ 2 for spatial layer 1 and layer 2, respectively. The fading amplitude coefficients of layer 1 and layer 2 are the corresponding singular values \(\sqrt {\rho _{1}} \) and \(\sqrt {\rho _{2}} \), respectively, as shown in the left half of Figure 2. Thus, given the same transmit power \(\frac {P}{2}\) on each layer for one real-valued symbol, the received symbol powers of layer 1 and layer 2 are \(\frac {{\rho _{1} P}}{2}\) and \(\frac {{\rho _{2} P}}{2}\), respectively, where P is the total transmit power for the two layers. According to Shannon theory, the achievable rate of BICM-MIMO is $$ \begin{aligned} C_{1} &= \frac{W}{2} \cdot \log_{2} \left[ \left({1 + \frac{{\rho_{1} P}}{{2\sigma^{2} }}} \right)\left({1 + \frac{{\rho_{2} P}}{{2\sigma^{2} }}} \right) \right]\\&= \frac{W}{2} \cdot \log_{2} \left({1 + \frac{{\rho_{1} + \rho_{2} }}{{2\sigma^{2} }}P + \frac{{\rho_{1} \rho_{2} }}{{4\sigma^{4} }}P^{2}} \right), \end{aligned} $$ Achievable rates of conventional BICM-MIMO and JCMD-MIMO system. (a) BICM-MIMO can be viewed as two parallel fading channels with two eigenvalues ρ 1 and ρ 2.
(b) JCMD-MIMO can be viewed as two parallel fading channels with the identical fading amplitude coefficient \(\sqrt {\frac {{\rho _{1} + \rho _{2} }}{2}} \) for both layers. Here, W is the channel bandwidth and σ 2 is the variance of the AWGN. As is well known, in order to achieve the capacity in Equation 14, the transmit signals should be Gaussian distributed. Note that rotation does not change the achievable rate of the BICM-MIMO scheme without the I/Q-component interleaver. For the JCMD-MIMO scheme, we consider the \(\frac {\pi }{4}\)-rotated real-valued transmit signal, which is rotated by \(\frac {\pi }{4}\) compared with the conventional real-valued signal. Due to the orthogonal modulation, the transmit powers of the I and Q components on each layer are both \(\frac {P}{4}\). For JCMD-MIMO, after the spatial Q-component de-interleaving at the receiver, the fading amplitude coefficient of the I component differs from that of the Q component in one symbol: one is \(\sqrt {\rho _{1}} \) and the other is \(\sqrt {\rho _{2}} \). Thus, the received powers of the I and Q components also differ: one is \(\frac {{\rho _{1} P}}{4}\) and the other is \(\frac {{\rho _{2} P}}{4}\). So, the total received symbol power is \(\frac {{(\rho _{1} + \rho _{2})P}}{4}\) for both layer 1 and layer 2. Thus, JCMD-MIMO can be viewed as two parallel fading channels with the identical fading amplitude coefficient \(\sqrt {\frac {{\rho _{1} + \rho _{2} }}{2}} \) for both layers, as shown in the right half of Figure 2. In the receiver, after phase compensation, the received signal can also be viewed as a real-valued signal with power \(\frac {{(\rho _{1} + \rho _{2})P}}{4}\).
Therefore, the achievable rate of JCMD-MIMO is $$ \begin{aligned} C_{2} &= \frac{W}{2} \cdot \log_{2} \left[ {1 + \frac{{(\rho_{1} + \rho_{2})}}{{4\sigma^{2} }}P} \right]^{2} \\&= \frac{W}{2} \cdot \log_{2} \left[ {1 + \frac{{\rho_{1} + \rho_{2} }}{{2\sigma^{2} }}P + \frac{{(\rho_{1} + \rho_{2})^{2} }}{{16\sigma^{4} }}P^{2}} \right]. \end{aligned} $$ Comparing Equation 14 with Equation 15, it is easy to conclude that C 2≥C 1, with equality if and only if ρ 2=ρ 1. □ Furthermore, we now prove that \(\frac {\pi }{4}\) is the optimum rotation angle for real-valued transmit signals in the JCMD-MIMO scheme. Let us consider a general θ-rotated real-valued transmit signal, which is rotated by θ compared with the conventional real-valued signal. The transmit powers of the I and Q components on layer 1 are \(\frac {P}{2}\cos ^{2} \theta \) and \(\frac {P}{2}\sin ^{2} \theta \), respectively, and the total transmit power on layer 1 is again \(\frac {P}{2}\). Therefore, the received powers of the I and Q components are \(\frac {{\rho _{2} P}}{2}\cos ^{2} \theta \) and \(\frac {{\rho _{1} P}}{2}\sin ^{2} \theta \), respectively. So, the total received symbol power on layer 1 is \(\frac {{(\rho _{1} \sin ^{2} \theta + \rho _{2} \cos ^{2} \theta)}}{2}P\). Likewise, the total received symbol power on layer 2 is \(\frac {{(\rho _{2} \sin ^{2} \theta + \rho _{1} \cos ^{2} \theta)}}{2}P\). Thus, we obtain the following achievable rate: $$ {\fontsize{7.8pt}{9.6pt}\selectfont{\begin{aligned} {} C(\theta) &\,=\, \frac{W}{2} \!\!\cdot\! {\log_{2}}\!\left\{ \!\left[\! {1 \,+\, \frac{{\left({\rho_{1}}{{\sin }^{2}}\theta \,+\, {\rho_{2}}{{\cos }^{2}}\theta \right)}}{{2{\sigma^{2}}}}P} \!\right] \!\!\cdot\!\! \left[\!\! {1 \,+\, \frac{{\left({\rho_{1}}{{\cos }^{2}}\theta \,+\, {\rho_{2}}{{\sin }^{2}}\theta \right)}}{{2{\sigma^{2}}}}P}\! \right]\! 
\right\} \\ &\,=\, \frac{W}{2} \cdot {\log_{2}}\left[1 \,+\, \frac{{{\rho_{1}} \!+\ {\rho_{2}}}}{{2{\sigma^{2}}}}P \,+\, \left({{\rho_{1}}{\rho_{2}} + \frac{{{{\sin }^{2}}(2\theta){{({\rho_{1}} - {\rho_{2}})}^{2}}}}{4}} \right)\frac{{{P^{2}}}}{{4{\sigma^{4}}}} \right]. \\ \end{aligned}}} $$ Obviously, when θ=0, the achievable rate reaches its minimum, Equation 14; and when \(\theta = \frac {\pi }{4}\), it reaches its maximum, Equation 15. According to Lemma 1, we arrive at the following theorem. Theorem 1. For any actual rank-2 MIMO fading channel with real-valued transmit signals, the ergodic achievable rate of JCMD-MIMO is greater than that of BICM-MIMO with conventional uniform power allocation. For any actual rank-2 MIMO fading channel, the ergodic achievable rate is the expectation of the above constant achievable rate over all possible channel eigenvalue realizations. Generally speaking, an exactly equal-eigenvalue realization is only a small-probability event, so realizations with distinct eigenvalues must occur. According to Lemma 1, the ergodic achievable rate of JCMD-MIMO is then greater than that of BICM-MIMO with conventional uniform power allocation. □ More generally, assuming a rank-L MIMO with a descending-order eigenvalue vector \(\boldsymbol {\bar {\rho }} = \{\rho _{1},\rho _{2},\ldots,\rho _{L}\} \), the Q-component interleaver only changes the order of the Q components on the L layers to another eigenvalue vector \(\boldsymbol {\bar {\zeta }} = \{ \zeta _{1},\zeta _{2},\ldots,\zeta _{L} \} \), where \(\boldsymbol {\bar {\zeta }}\) is just another arrangement of \(\boldsymbol {\bar {\rho }}\) corresponding to the output order of the spatial Q-component interleaver. Hence, due to the orthogonal modulation, JCMD-MIMO can be viewed as L parallel fading channels with the eigenvalue vector \(\boldsymbol {{\bar \upsilon }}=\frac {{{\boldsymbol {\bar \rho + \bar \zeta }}}}{{{2}}}\).
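As a quick numerical sanity check of Equation 16 (with hypothetical eigenvalues, bandwidth, and SNR), the rate C(θ) is minimized at θ = 0, recovering the BICM-MIMO rate of Equation 14, and maximized at θ = π/4, recovering Equation 15:

```python
import numpy as np

# Hypothetical parameters: bandwidth, total power, noise variance, eigenvalues.
W, P, sigma2 = 1.0, 10.0, 1.0
rho1, rho2 = 4.0, 1.0

def C(theta):
    # Equation 16 in product form: two parallel channels with the
    # theta-dependent received powers derived above.
    a = 1 + (rho1 * np.sin(theta) ** 2 + rho2 * np.cos(theta) ** 2) * P / (2 * sigma2)
    b = 1 + (rho1 * np.cos(theta) ** 2 + rho2 * np.sin(theta) ** 2) * P / (2 * sigma2)
    return W / 2 * np.log2(a * b)

thetas = np.linspace(0, np.pi / 2, 1001)
rates = C(thetas)
assert np.isclose(thetas[np.argmax(rates)], np.pi / 4, atol=1e-2)  # max at pi/4
assert np.isclose(rates.min(), C(0.0))                             # min at 0
assert C(np.pi / 4) > C(0.0)                                       # C2 > C1 for rho1 != rho2
```

The sin²(2θ) term in Equation 16 makes both extremes immediate analytically; the sweep merely confirms them.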
So, the achievable rate of rank-L JCMD-MIMO is $$ \begin{aligned} C_{L} ({\boldsymbol{\bar \zeta}}) &= \frac{W}{2} \cdot \sum\limits_{i = 1}^{L} {\log_{2} \left[ {1 + \frac{{(\rho_{i} + \zeta_{i})}}{{2\sigma^{2} L}}P} \right]} \\ &= \frac{W}{2} \cdot \log_{2} \left[ {\prod\limits_{i = 1}^{L} {\left({1 + \frac{{(\rho_{i} + \zeta_{i})}}{{2\sigma^{2} L}}P} \right)}} \right], \end{aligned} $$ where P is the total transmit power for the L layers. Hence, the optimization problem for the achievable rate is to find the \({\boldsymbol {\bar \zeta }}\) that maximizes C L . We arrive at the following theorem. Theorem 2. To maximize the achievable rate of rank-L JCMD-MIMO with a descending-order eigenvalue vector \({\boldsymbol {\bar \rho }} = \{ \rho _{1},\rho _{2},\ldots,\rho _{L} \} \), the optimum Q-component interleaver vector is \(\overline {\boldsymbol {\omega }} = \{ \rho _{L},\rho _{L - 1},\ldots,\rho _{1} \}\); that is, \(\overline {\boldsymbol {\omega }} \) should be in ascending order, which is just the reverse order of \({\boldsymbol {\bar \rho }}\). Suppose, for contradiction, that some vector \({\boldsymbol {\bar \rho ^{\prime }}}\) other than \(\overline {\boldsymbol {\omega }}\), not in increasing order, achieves the maximum achievable rate \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\). Then we can always find a pair \(\left \{ \rho ^{\prime }_{i},\rho ^{\prime }_{i + 1} \right \} \) in \({\boldsymbol {\bar \rho ^{\prime }}}\) satisfying \(\rho ^{\prime }_{i} > \rho ^{\prime }_{i + 1} \). We can then construct a new vector \({\boldsymbol {\eta }} = \left \{ \rho ^{\prime }_{1},\rho ^{\prime }_{2},\ldots,\rho ^{\prime }_{i - 1},\rho ^{\prime }_{i + 1},\rho ^{\prime }_{i,} \rho ^{\prime }_{i + 2},\ldots,\rho ^{\prime }_{L} \right \}\), which only swaps the pair \(\left \{ \rho ^{\prime }_{i},\rho ^{\prime }_{i + 1} \right \} \) in \({\boldsymbol {\bar \rho ^{\prime }}}\).
So, we can compute the difference of \(\frac {{C_{L} ({\boldsymbol {\eta }}) - C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})}}{{W/2}}\), $$ {\fontsize{7.6}{6}{\begin{aligned} {} \frac{{C_{L} ({\boldsymbol{\eta}}) \,-\, C_{L} ({\boldsymbol{\bar \rho^{\prime}}})}}{{{W/2}}} &= \sum\limits_{k = 1}^{L} {\log_{2}\! \left[\! {1 \,+\, \frac{{(\rho_{k} \,+\, \eta_{k})}}{{2\sigma^{2} L}}P} \right]} \,-\, \sum\limits_{k = 1}^{L} {\log_{2}\! \left[ {1 \,+\, \frac{{\left(\rho_{k} \,+\, \rho^{\prime}_{k} \right)}}{{2\sigma^{2} L}}P} \right]} \\ &= \log_{2} \left[ {1 + \frac{{(\rho_{i} + \eta_{i})}}{{2\sigma^{2} L}}P} \right] + \log_{2} \left[ {1 + \frac{{(\rho_{i + 1} + \eta_{i + 1})}}{{2\sigma^{2} L}}P} \right] \\ &\quad- \log_{2} \!\left[ {1 + \frac{{\left(\rho_{i} + \rho^{\prime}_{i}\right)}}{{2\sigma^{2} L}}P} \right] \!- \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right] \\ &= \log_{2}\! \left[ {1 \,+\, \frac{{\left(\rho_{i} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right] + \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i} \right)}}{{2\sigma^{2} L}}P} \right] \\ &\quad- \log_{2}\! \left[ {1 \,+\, \frac{{\left(\rho_{i} + \rho^{\prime}_{i} \right)}}{{2\sigma^{2} L}}P} \right] \,-\, \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right]. 
\\ \end{aligned}}} $$ Letting the constant \(c = \frac {P}{{2\sigma ^{2} L}}\), we have $$ \frac{{C_{L} ({\boldsymbol{\eta }}) - C_{L} ({\boldsymbol{\bar \rho^{\prime}}})}}{{{W/2}}} = \log_{2} \frac{{M_{1} }}{{M_{2}}}, $$ $$ \begin{aligned} M_{1} &= 1 + c\left(\rho_{i} + \rho^{\prime}_{i} + \rho_{i + 1} + \rho^{\prime}_{i + 1} \right)\\ &\quad+ c^{2} \left(\rho_{i} + \rho^{\prime}_{i + 1} \right)\left(\rho_{i + 1} + \rho^{\prime}_{i} \right) > 0, \end{aligned} $$ $$ \begin{aligned} M_{2} &= 1 + c\left(\rho_{i} + \rho^{\prime}_{i} + \rho_{i + 1} + \rho^{\prime}_{i + 1} \right)\\ &\quad+ c^{2} \left(\rho_{i} + \rho^{\prime}_{i} \right)\left(\rho_{i + 1} + \rho^{\prime}_{i + 1} \right) > 0. \end{aligned} $$ So, the difference is $$ M_{1} - M_{2} = c^{2} (\rho_{i} - \rho_{i + 1})(\rho^{\prime}_{i} - \rho^{\prime}_{i + 1}). $$ Because ρ i >ρ i+1 and \({\rho ^{\prime }_{i} > \rho ^{\prime }_{i + 1} }\), we have M 1−M 2>0; thus \(\frac {{M_{1} }}{{M_{2} }} > 1\), and hence \(C_{L} ({\boldsymbol {\eta }}) - C_{L} ({\boldsymbol {\bar \rho ^{\prime }}}) > 0\). That is, we have found a counter-example η with a higher achievable rate than \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\), which contradicts the assumption that \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\) is the maximum achievable rate. Therefore, \(\overline {\boldsymbol {\omega }} \) should be in ascending order, which is just the reverse order of \({\boldsymbol {\bar \rho }}\). □ So, in Section 2, the reverse spatial Q-component interleaver of Equation 2 is applied so as to achieve the maximum achievable rate. In fact, if L is even, the rank-L JCMD-MIMO with the reverse spatial Q-component interleaver can be viewed as \(\frac {L}{2}\) parallel pairs of rank-2 JCMD-MIMO with eigenvalues {ρ i ,ρ L+1−i }, where \(i \in \left [1,\frac {L}{2}\right ]\).
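Theorem 2 can be checked by brute force for a small hypothetical rank-4 example: among all Q-component orderings ζ̄ (permutations of ρ̄), the reverse (ascending) order maximizes the rate in Equation 17, while the identity order (plain BICM-MIMO) minimizes it:

```python
import numpy as np
from itertools import permutations

# Hypothetical descending-order eigenvalues and channel parameters.
rho = np.array([8.0, 4.0, 2.0, 1.0])
P, sigma2, L, W = 10.0, 1.0, 4, 1.0
c = P / (2 * sigma2 * L)

def C_L(zeta):
    # Equation 17: rate of L parallel channels with eigenvalues (rho + zeta)/2.
    return W / 2 * np.sum(np.log2(1 + c * (rho + np.asarray(zeta))))

best = max(permutations(rho), key=C_L)
assert best == tuple(rho[::-1])                               # reverse order wins
assert C_L(rho) == min(C_L(p) for p in permutations(rho))     # BICM order is worst
```

This pairs the largest eigenvalue with the smallest, the second largest with the second smallest, and so on, exactly as the proof's adjacent-swap argument predicts.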
Likewise, if L is odd, it can be viewed as the parallel combination of one SISO fading channel with eigenvalue \(\left \{ \rho _{\frac {{L + 1}}{2}} \right \} \) and \(\frac {{L - 1}}{2}\) pairs of rank-2 JCMD-MIMO with eigenvalues {ρ i ,ρ L+1−i }, where \(i \in \left [1,\frac {{L - 1}}{2}\right ]\). Thus, the largest-eigenvalue layer couples with the smallest-eigenvalue layer, the second largest with the second smallest, and so on. Note that BICM-MIMO is itself a special case of JCMD-MIMO, with \(\bar \zeta = \bar \rho \). As for the achievable rate of rank-L JCMD-MIMO with a descending-order eigenvalue vector \({\boldsymbol {\bar \rho }} = \{ \rho _{1},\rho _{2},\ldots,\rho _{L} \} \), the upper bound is \(C_{L} (\overline {\boldsymbol {\omega }})\), where \(\overline {\boldsymbol {\omega }}\) is the reverse order of \({\boldsymbol {\bar \rho }}\), and the lower bound is the BICM-MIMO achievable rate \(C_{L} ({\boldsymbol {\bar \rho }})\). According to Theorem 2, the maximum achievable rate is \(C_{L} ({\boldsymbol {\bar \omega }})\), so \(C \le C_{L} ({\boldsymbol {\bar \omega }})\). In addition, \(C_{L} ({\boldsymbol {\bar \rho }})\) is the minimum achievable rate, which can be proved by a similar argument, as follows. Suppose, for contradiction, that some vector \({\boldsymbol {\bar \rho ^{\prime }}}\) other than \(\overline {\boldsymbol {\rho }} \), not in decreasing order, achieves the minimum achievable rate \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\). Then we can always find a pair \(\left \{ \rho ^{\prime }_{i},\rho ^{\prime }_{i + 1} \right \} \) in \({\boldsymbol {\bar \rho ^{\prime }}}\) satisfying \(\rho ^{\prime }_{i} < \rho ^{\prime }_{i + 1} \).
And then, we can construct a new vector \({\boldsymbol {\eta }} = \left \{ \rho ^{\prime }_{1},\rho ^{\prime }_{2},\ldots,\rho ^{\prime }_{i - 1},\rho ^{\prime }_{i + 1},\rho ^{\prime }_{i,} \rho ^{\prime }_{i + 2},\ldots,\rho ^{\prime }_{L} \right \}\), which only changes the order of \(\left \{ \rho ^{\prime }_{i},\rho ^{\prime }_{i + 1} \right \} \) from \({\boldsymbol {\bar \rho ^{\prime }}}\). So, we can compute the difference of \(\frac {{C_{L} ({\boldsymbol {\eta }}) - C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})}}{{{W/2}}}\), $$ {\fontsize{7.8}{6}{\begin{aligned} {} \frac{{C_{L} ({\boldsymbol{\eta }}) \,-\, C_{L} ({\boldsymbol{\bar \rho^{\prime}}})}}{{{W/2}}} &= \sum\limits_{k = 1}^{L} {\log_{2}\! \left[\! {1 \,+\, \frac{{(\rho_{k} \,+\, \eta_{k})}}{{2\sigma^{2} L}}P} \right]} \,-\, \sum\limits_{k = 1}^{L} {\log_{2}\! \left[\! {1 \,+\, \frac{{\left(\rho_{k} \,+\, \rho^{\prime}_{k} \right)}}{{2\sigma^{2} L}}P} \right]}\\ &= \log_{2} \left[ {1 + \frac{{(\rho_{i} + \eta_{i})}}{{2\sigma^{2} L}}P} \right] \,+\, \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \eta_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right] \\ &\quad- \log_{2} \left[ {1 \,+\, \frac{{\left(\rho_{i} + \rho^{\prime}_{i} \right)}}{{2\sigma^{2} L}}P} \right] \,-\, \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right]\\ &= \log_{2} \left[ {1 \,+\, \frac{{\left(\rho_{i} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right] \,+\, \log_{2} \left[ {1 + \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i} \right)}}{{2\sigma^{2} L}}P} \right] \\ &\quad- \log_{2} \left[ {1 \,+\, \frac{{\left(\rho_{i} \,+\, \rho^{\prime}_{i} \right)}}{{2\sigma^{2} L}}P} \right] \,-\, \log_{2} \left[ {1 \,+\, \frac{{\left(\rho_{i + 1} + \rho^{\prime}_{i + 1} \right)}}{{2\sigma^{2} L}}P} \right]. 
\\ \end{aligned}}} $$ $$ \frac{{C_{L} ({\boldsymbol{\eta }}) - C_{L} ({\boldsymbol{\bar \rho^{\prime}}})}}{{{W/2}}} = \log_{2} \frac{{M_{1} }}{{M_{2} }}, $$ $$ M_{1} - M_{2} = c^{2} (\rho_{i} - \rho_{i + 1})\left(\rho^{\prime}_{i} - \rho^{\prime}_{i + 1} \right). $$ Because ρ i >ρ i+1 and \({\rho ^{\prime }_{i} < \rho ^{\prime }_{i + 1} }\), we have M 1−M 2<0; thus \(\frac {{M_{1} }}{{M_{2} }} < 1\), and hence \(C_{L} ({\boldsymbol {\eta }}) - C_{L} ({\boldsymbol {\bar \rho ^{\prime }}}) < 0\). That is, we have found a counter-example η with a smaller achievable rate than \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\), which contradicts the assumption that \(C_{L} ({\boldsymbol {\bar \rho ^{\prime }}})\) is the minimum achievable rate. Therefore, \(C_{L} ({\boldsymbol {\bar \rho }})\) is the minimum achievable rate. □

AMI analysis for complex-valued QAM signals

For practical communication systems, the transmit signal X usually belongs to a finite alphabet (a constellation signal set), such as the complex-valued QAM signals. The AMI also varies with the rotation angle of the QAM signals in the MIMO system. Assuming equiprobable QAM constellation inputs and an independent Rayleigh fading channel, the AMI of the coded modulation (CM) MIMO system before the demapper is called the CM-AMI; it is independent of the labeling and is defined as [2] $$ \begin{aligned} I_{{\text{CM}}} &= I\left({{\mathbf{X}};{\mathbf{Y}}|{\mathbf{\Lambda }}} \right) \\ &= \sum\limits_{l = 1}^{N_{L}} {\left\{ {m - E_{{\mathbf{x}}_{l},{\mathbf{y}}_{l},{\mathbf{\lambda }}_{l}} \left[ {\log_{2} \frac{{\sum\limits_{\hat{\mathbf{x}} \in \chi} {P\left({{\mathbf{y}}_{l}|\hat{\mathbf{x}},{\mathbf{\lambda }}_{l}} \right)} }}{{P({\mathbf{y}}_{l}|{\mathbf{x}}_{l},{\mathbf{\lambda }}_{l})}}} \right]} \right\}}. 
\\ \end{aligned} $$ The AMI after the demapper is called the BICM-AMI, which depends on the labeling and is defined as [2] $$ \begin{aligned} {} I_{{\text{BICM}}}&= \sum\limits_{k = 1}^{m} {I\left({C_{k} ;{\mathbf{Y}}|{\boldsymbol{\Lambda }}} \right)} \\ &= \sum\limits_{l = 1}^{N_{L}} \!{\left\{ {m \,-\, \sum\limits_{k = 1}^{m} {E_{c_{k},{\mathbf{y}}_{l},{\mathbf{\lambda}}_{l}}\! \left[\! {\log_{2}\! \frac{{\sum\limits_{\hat{\mathbf{x}} \in \chi} {p({\mathbf{y}}_{l}|\hat{\mathbf{ x}},{\mathbf{\lambda }}_{l})} }}{{\sum\limits_{\hat{\mathbf{x}} \in \chi,\hat c_{k} = c_{k}} {p({\mathbf{y}}_{l}|\hat{\mathbf{x}},{\mathbf{\lambda }}_{l})} }}} \!\right]}}\! \right\}}. \\ \end{aligned} $$ AMI analysis is an effective method for calculating the achievable rate. As Equation 28 shows, the CM-AMI I(X;Y|Λ) does not depend on the labeling. For the BICM system, however, the BICM-AMI strongly depends on the labeling. Extensive literature has proven that Gray labeling is optimal for the non-ID BICM system in the AWGN channel [25]. Monte Carlo simulation is a useful tool for estimating the expectation of a complicated function through the ergodicity of the random variables [26]. The expectation operations in Equations 28 and 29 can therefore be evaluated by Monte Carlo simulation. Based on Equation 9, we can generate the received symbol before the soft demapper by drawing the random coded bits c k (corresponding to the modulated symbol \({x_{k}^{l}}\) on each layer), the random fading coefficients \({\lambda _{k}^{l}}\) (from the SVD of the i.i.d. Rayleigh-distributed random channel matrix H k ), and the i.i.d. Gaussian noise random variables \({n_{k}^{l}}\). Thus, using Monte Carlo simulation, we can estimate the expectations in Equations 28 and 29 by the ergodicity of (c k , H k , \({n_{k}^{l}}\)), where P(y l |x l ,λ l ) is calculated as in Equation 12.
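The Monte Carlo evaluation of the expectation in Equation 28 can be sketched for the simplest case, BPSK (m = 1) on a single sub-channel with a fixed fading amplitude (all parameter values below are hypothetical); the expectation is simply replaced by a sample average:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, sigma = 200_000, 1.0, 1.0     # samples, fading amplitude, noise std
const = np.array([-1.0, 1.0])         # BPSK alphabet

# Draw random symbols and the corresponding noisy channel outputs.
x = rng.choice(const, n)
y = lam * x + sigma * rng.standard_normal(n)

def pdf(y, s):
    # p(y | x_hat = s) up to a constant factor, which cancels in the ratio.
    return np.exp(-(y - lam * s) ** 2 / (2 * sigma ** 2))

num = pdf(y, const[0]) + pdf(y, const[1])   # sum over all candidate symbols
I_cm = 1.0 - np.mean(np.log2(num / pdf(y, x)))
assert 0.0 < I_cm < 1.0                     # BPSK AMI lies between 0 and 1 bit
```

For full JCMD-MIMO, the same loop would additionally draw the channel matrix H k and take its SVD per sample, with the two-dimensional metric of Equation 12 in place of `pdf`.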
As for complex-valued QAM signals, the optimum rotation angle usually differs from that of the real-valued signals, and we can obtain it by maximizing the AMI. Figure 3 shows how the 2×2 MIMO CM-AMI varies with the rotation angle θ for real-valued binary phase shift keying (BPSK) symbols. Since I(X;Y|Λ)=I(C k ;Y|Λ), I CM and I BICM are the same for BPSK. Both at low and at high SNR, e.g., at SNR = −5 dB and SNR = 15 dB, the optimal rotation angle is always θ = 45°, which coincides with our analysis in Section 3. 2 × 2 MIMO CM-AMI I CM vs. rotation angle θ for BPSK over Rayleigh fading channels. (a) SNR =−5 dB. (b) SNR = 15 dB Figures 4 and 5 show how the 4×4 MIMO CM-AMI and BICM-AMI vary with the rotation angle θ for QPSK and 16QAM, respectively. The AMIs of the proposed system with the two kinds of spatial Q-component interleavers (reverse and cyclic-shift) are plotted in the same figures, with examples at low and high SNR. In Figures 4 and 5, for both spatial Q-interleaving algorithms, the optimal rotation angles differ only slightly at high SNR. For the CM-AMI, the optimal angle of QPSK at SNR = −3 dB is 45°, while the optimal angle at SNR = 11 dB is about 30° and 29° for the cyclic-shift interleaver and the reverse interleaver, respectively. For the BICM-AMI, the optimal angle of QPSK at SNR = −3 dB is 0°, while the optimal angle at SNR = 11 dB is about 26° and 27° for the cyclic-shift interleaver and the reverse interleaver, respectively. 4 × 4 MIMO CM-AMI I CM and BICM-AMI I BICM vs. rotation angle θ for QPSK over Rayleigh fading channels. (a) SNR =−3 dB. (b) SNR = 11 dB 4 × 4 MIMO CM-AMI I CM and BICM-AMI I BICM vs. rotation angle θ for 16QAM over Rayleigh fading channels. (a) SNR =−3 dB. (b) SNR = 11 dB Based on the AMI maximization criterion, the optimal angle corresponds to the maximum AMI value. Figure 6 shows the CM-AMI and BICM-AMI curves that correspond to the optimal angles in Figures 4 and 5 for 4×4 MIMO systems.
In Figure 6, the CM-AMI and BICM-AMI for the reverse interleaver described in Equation 2 are never less than those of the cyclic-shift interleaver described in Equation 3, both for QPSK and 16QAM. These observations coincide well with Theorem 2. CM-AMI and BICM-AMI corresponding to the optimal angles for 4 × 4 MIMO Based on maximizing the AMI, we can also obtain the relationship between the optimal angle and the SNR. The optimal angles for the CM and BICM systems vary with the SNR for QPSK and 16QAM, as plotted in Figure 7. The optimal rotation angle of CM is always larger than that of BICM. As the SNR increases, the optimal angle of CM decreases, while that of BICM increases. Optimal angle θ vs. SNR for QPSK and 16QAM over 4 × 4 Rayleigh fading channels For a given modulation order m, according to the AMI value I corresponding to the optimal rotation angle in Figure 6, the optimal code rate for a given SNR can be calculated as \(R= \frac {I}{{m \cdot N_{L} }}\). Therefore, we can obtain the relationship between the optimal code rate R and the SNR, which is shown in Figure 8. Code rate vs. SNR for QPSK and 16QAM over 4 × 4 Rayleigh fading channels Thus, for a given operating SNR, we can obtain the optimum rotation angle from Figure 7 and the optimal code rate from Figure 8. Furthermore, we can obtain the relationship between the code rate R and the optimal rotation angle, which is shown in Figure 9 for QPSK and 16QAM. It provides a good reference for selecting an optimal rotation angle for a given code rate. For instance, for code rate = 0.5, the optimal angles for 4×4 MIMO-BICM QPSK and 16QAM are 18° and 0°, respectively. According to Figures 7 and 9, at low SNR or low code rate, non-rotation is best for BICM, which implies that the channel coding dominates the BICM performance, while 45° rotation is best for CM, which indicates that the rotation symmetry affects the CM performance.
However, at high SNR or high code rate, the signal space diversity dominates the performance for both BICM and CM, so the BICM-AMI can approach the CM-AMI, and the optimum angle of BICM with the optimum reverse spatial interleaver is also close to that of CM with the same interleaver. Note that the rotation angle depends only on the code rate and the modulation. In practice, therefore, the rotation symbol mapper can be implemented through a look-up table that is calculated and stored in advance for a given modulation order and code rate, just like the conventional symbol mapper. Hence, it introduces no additional processing complexity or delay. Optimal angle θ vs. code rate R for QPSK and 16QAM over 4 × 4 Rayleigh fading channels

Code design for JCMD-ID system

For the non-ID BICM system, Gray mapping has been proved to be optimal. Based on Gray mapping and the optimal rotation angle obtained by maximizing the BICM-AMI, the non-ID JCMD system therefore only needs to optimize the channel code to achieve excellent performance. For the JCMD-ID system, besides the optimal rotation angle, it is also crucial to choose a well-matched pair of labeling and outer channel code through joint optimization. The convergence behavior of the iterative demapping and decoding can be analyzed by the EXIT chart method, which describes the flow of extrinsic information between the demapper and the decoder [27-29]. Several advantages of EXIT charts are summarized in [29]. The inputs to the demapper are the noise-corrupted channel observations and the a priori knowledge A(c k ) on the unmapped bits. The demapper outputs the channel and extrinsic information E(c k ).
According to Equations 12 and 19 in [27], the mutual information I A1=I(c k ;A(c k )) (0≤I A1≤1) between the transmitted unmapped bits and the L-values A(c k ) can be written as $$ \begin{aligned} {I_{{A1}}} &= \frac{1}{2}\sum\limits_{x = - 1,1}^{} \int_{- \infty }^{\infty} {p_{A({c_{k}})}}\left({\xi |{c_{k}} = x} \right) \cdot {{\log }_{2}}\\ &\quad\times\frac{{2{p_{A({c_{k}})}}\left({\xi |{c_{k}} = x} \right)}}{{{p_{A({c_{k}})}}\left({\xi |{c_{k}} = - 1} \right) + {p_{A({c_{k}})}}\left({\xi |{c_{k}} = 1} \right)}} d\xi. \end{aligned} $$ The mutual information I E1=I(c k ;E(c k )) (0≤I E1≤1) can be written as $$ \begin{aligned} {I_{{E1}}} &= \frac{1}{2}\sum\limits_{x = - 1,1}^{} \int_{- \infty }^{\infty} {p_{E({c_{k}})}}\left({\xi |{c_{k}} = x} \right) \cdot {{\log }_{2}}\\ &\quad\times\frac{{2{p_{E({c_{k}})}}\left({\xi |{c_{k}} = x} \right)}}{{{p_{E({c_{k}})}}\left({\xi |{c_{k}} = - 1} \right) + {p_{E({c_{k}})}}\left({\xi |{c_{k}} = 1} \right)}} d\xi. \end{aligned} $$ In [27], it turns out that the a priori input A(c k ) is almost Gaussian distributed. Additionally, large interleavers keep the a priori L-values A(c k ) fairly uncorrelated over many iterations. Hence, it is appropriate to model the a priori input A(c k ) using an independent Gaussian random variable with variance \(\sigma _{A1}^{2}\) and mean zero (see Equations 1 to 10 in [27]). Thus, the conditional probability density function can be written as $$ {p_{A({c_{k}})}}\left({\xi |{c_{k}} = x} \right)=\frac{1}{{\sqrt {2\pi} {\sigma_{A1}}}}\exp \left[ { - \frac{{{{\left({\xi - \frac{{\sigma_{A1}^{2}}}{2}x} \right)}^{2}}}}{{2\sigma_{A1}^{2}}}} \right]. $$ The mutual information I E1=I(c k ;E(c k )) can be viewed as a function of I A1, SNR, and the rotation angle θ, i.e., $$ I_{E_{1}} = T_{1} (I_{A_{1} },SNR,\theta). $$ The function T 1 cannot be expressed in closed form.
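The Gaussian a priori model above is easy to exercise numerically. The sketch below (hypothetical σ A1) draws L-values A(c k ) from the model of Equation 32 and estimates I A1 by Monte Carlo; for consistent Gaussian L-values, the integral in Equation 30 reduces to I A = 1 − E[log2(1 + exp(−x·A))], a standard identity used with EXIT charts:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_A = 500_000, 2.0                     # samples, hypothetical sigma_A1

# Draw bits x in {-1, +1} and model a priori L-values per Equation 32:
# A ~ N(sigma_A^2 / 2 * x, sigma_A^2).
x = rng.choice([-1.0, 1.0], n)
A = sigma_A ** 2 / 2 * x + sigma_A * rng.standard_normal(n)

# Monte Carlo estimate of I_A1 via the consistency identity.
I_A1 = 1.0 - np.mean(np.log2(1.0 + np.exp(-x * A)))
assert 0.0 < I_A1 < 1.0
```

Sweeping σ A1 from 0 upward traces I A1 from 0 to 1, which is exactly how the a priori input of the demapper transfer characteristic T 1 is parameterized in EXIT-chart measurements.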
For a given value of the input mutual information I A1=I(c k ;A(c k )) (0≤I A1≤1) to the demapper, to compute \(T_{1} (I_{A_{1} },SNR,\theta)\phantom {\dot {i}\!}\), the distributions \({p_{E({c_{k}})}}\left ({\xi |{c_{k}} = x} \right)\) are most conveniently determined by Monte Carlo simulation (histogram measurements), as proposed in [27,28]. This method has been verified to allow an accurate prediction of the SNR decoding threshold with low complexity [27]. On the other hand, the extrinsic transfer characteristic of the decoder describes the input/output relationship between the input I A2 and the output I E2, which is independent of the SNR value. It can be computed by assuming the a priori input to be Gaussian distributed and applying the same method as used for the demapper characteristic T 1. Therefore, the transfer characteristic of the decoder is denoted by $$ I_{E_{2}} = T_{2} (I_{A_{2}}). $$ Qiuliang et al. [16] verified that the SSD technique can affect the demapper's EXIT curve in the SISO BICM-ID SSD system. For the MIMO system, we analyze the effect of the SSD technique on different demapper labelings. Different 16QAM labelings (natural, Gray, and that of reference [16]) are shown in Figure 10. For the 1/2-rate coded 16QAM 4×4 MIMO system, the corresponding 16QAM EXIT curves with 45° rotation and without rotation are depicted in Figure 11. The doping technique is used for error-floor removal [30], with doping rate P=100. For the natural and Gray labelings, the demapper's EXIT curves with rotation have a larger slope than those without rotation. For the reference labeling, the demapper's EXIT curve with rotation is always above the curve without rotation. 16QAM labelings. (a) Gray labeling. (b) Natural labeling. (c) The reference labeling in [16] EXIT chart analysis for 16QAM at SNR = 7.8 dB Demapper-matched code design is crucial for the JCMD-ID MIMO system with rotation.
In order to approach the capacity, based on the EXIT chart, we propose a method for optimizing the outer channel code to match a given demapper for the JCMD-ID system. We choose the BCC as the channel code. For a given SNR, if the EXIT curves of the demapper and the decoder do not intersect, iterative decoding can converge; otherwise, it cannot. Thus, the SNR that makes the two EXIT curves tangent is the SNR convergence threshold, also called the pinch-off SNR. The objective of the JCMD-ID optimization is to find the outer channel code with the lowest SNR convergence threshold for the given demapper. We define the optimization function as follows. $$ \begin{aligned} {} \alpha ({\mathbf{G}}(N_{\text{Reg}}),SNR)\,=\, \mathop {\min }\limits_{I_{n} \in [0,1],n = 1,2,\ldots,\text{Num}} \left[{T_{1} (I_{n}) \,-\, T_{2}^{- 1} (I_{n})} \right], \\ \end{aligned} $$ $$ \begin{aligned} \overline {\text{SNR}} = \mathop {\arg \min }\limits_{\text{SNR}} \left[ {\alpha \left({\textbf{{G}}\left({N_{\text{Reg}}} \right),\text{SNR}} \right) > 0} \right], \end{aligned} $$ where \({\mathbf{G}}(N_{\text{Reg}})\) denotes the generator polynomial of the BCC with \(N_{\text{Reg}}\) registers, Num is the number of selected statistical samples, and \(\overline {\text {SNR}}\) is the pinch-off SNR. For a rate-\(\frac {1}{N_{\text {out}}}\) BCC, the objective is to find the optimum \({\mathbf{G}}_{\text{Opt.}}(N_{\text{Reg}})\) with the lowest pinch-off SNR among \(\left ({2^{N_{\text {Reg}} + 1} - 1} \right)^{N_{\text {out}}}\) generator polynomial candidates. However, exhaustively searching for a channel code that matches the demapper's EXIT curve well is computationally prohibitive, especially when \(N_{\text{Reg}}\) is large. The genetic algorithm (GA) is an efficient stochastic search technique based on the mechanisms of natural selection and natural genetics [31]. In GA, a genetic representation is required for the individuals in a population.
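The inner pinch-off computation that the GA will call repeatedly can be sketched as follows. The two transfer curves below are illustrative analytic stand-ins (the real \(T_1\) comes from the histogram measurements described earlier and the real \(T_2^{-1}\) from the decoder characteristic); only the search logic mirrors the two displayed equations: step the SNR down by \(\Delta\)SNR and return the lowest SNR at which \(\alpha > 0\), i.e., at which the tunnel is still open at every sample point.

```python
import numpy as np

def pinch_off_snr(T1, T2_inv, snr_hi=12.0, snr_lo=0.0, step=0.1, num=101):
    """Lowest SNR for which the demapper curve T1 stays strictly above the
    inverse decoder curve T2^{-1} at all sampled I values (open EXIT tunnel)."""
    grid = np.linspace(0.0, 1.0, num)
    best = None
    for snr in np.arange(snr_hi, snr_lo, -step):
        alpha = np.min(T1(grid, snr) - T2_inv(grid))
        if alpha > 0:
            best = snr        # tunnel still open; keep lowering the SNR
        else:
            break             # tunnel closed: the previous SNR is pinch-off
    return best

# Illustrative stand-in transfer curves (not a real demapper/decoder pair);
# for these, the tunnel closes analytically at 6 dB:
T1 = lambda i, snr: np.clip(0.05 * snr + 0.3 * i, 0.0, 1.0)
T2_inv = lambda i: 0.6 * i ** 3
print(pinch_off_snr(T1, T2_inv))   # about 6 dB for these stand-in curves
```

In the full optimization, this routine is evaluated once per candidate generator polynomial, which is exactly why exhaustive search over all candidates becomes prohibitive.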
The generator polynomials of the BCC, \({\mathbf {G}}(N_{\text {Reg}})=\left [{\mathbf {g}}_{1},{\mathbf {g}}_{2},\ldots,{\mathbf {g}}_{N_{\text {out}}}\right ]_{2}\), inherently provide a binary string \({\mathbf {S}}_{g}=<{\mathbf {g}}_{1},{\mathbf {g}}_{2},\ldots,{\textbf {g}}_{N_{{\text {out}}}}>\phantom {\dot {i}\!}\) of length \((N_{\text{Reg}}+1)\times N_{\text{out}}\), where \({\mathbf{g}}_{i}\) is the \((N_{\text{Reg}}+1)\)-length binary generator polynomial corresponding to the ith \((1\le i\le N_{\text{out}})\) output. Based on the genetic algorithm, an optimization method is proposed as follows. Step 1: Initial population. Set the current iteration number (number of generations) \(N_{\text{pop}}=0\), the number of candidate generator polynomials (population size) \(N_g\), the maximum iteration number \(N_{\max}\), the crossover probability \(P_c\), and the mutation probability \(P_m\). \(N_g\) binary strings \(\mathbf {S}_{g}^{i}=< \mathbf {g}_{1}^{i},\mathbf {g}_{2}^{i},\ldots,\mathbf {g}_{N_{\text {out}}}^{i}>\phantom {\dot {i}\!}\) that correspond to the candidate polynomials are randomly initialized and denoted by a set \({{\mathbf {C}}^{N_{\text {pop}}}}\phantom {\dot {i}\!}\), where \(\mathbf {g}_{1}^{i},\mathbf {g}_{2}^{i},\ldots,\mathbf {g}_{N_{\text {out}}}^{i} \in \left [ {1,{2^{{N_{\text {Reg}}} + 1}}} \right)\), \(1\le i\le N_g\). Step 2: Selection. Reduce the SNR by small steps \(\Delta\)SNR (e.g., 0.1 dB) and compute the pinch-off SNRs \(\overline {\text {SNR}_{i}}\) of all the candidate generator polynomials \({\mathbf {G}}_{i}(N_{\text {Reg}})=\left [{\mathbf {g}}_{1}^{i},{\mathbf {g}}_{2}^{i},\ldots,{\mathbf {g}}_{N_{\text {out}}}^{i}\right ]_{2}\) in the population, where \(<{\mathbf {g}}_{1}^{i},{\mathbf {g}}_{2}^{i},\ldots,{\mathbf {g}}_{N_{{\text {out}}}}^{i}> \in {{\mathbf {C}}^{N_{\text {pop}}}}, 1 \le i \le N_{g}\). We use the pinch-off SNR to measure the fitness.
The fitness function is associated with the maximum pinch-off SNR \({\overline {\text {SNR}}_{\max }} = \mathop {\max }\limits _{1 \le i \le {N_{g}}} \left \{ {{{\overline {\text {SNR}}_{i}}}} \right \}\) and is given by $$ f\left({{\mathbf{S}}_{g}^{i}} \right) = \overline{\text{SNR}}_{\max}\text{(dB)} + 0.1 - {\overline {\text{SNR}}_{i}}\text{(dB)}. $$ \(N_g\) individuals are selected to breed a new generation with probability proportional to the fitness value. The probability that \({{\mathbf {S}}_{g}^{i}}\) is selected is \(P(i) = \frac {{f\left ({{\mathbf {S}}_{g}^{i}} \right)}}{{\sum \limits _{k = 1}^{{N_{g}}} {f\left ({{\mathbf {S}}_{g}^{k}} \right)} }}\). Based on the roulette wheel selection (RWS) algorithm, the tth \((1\le t\le N_g)\) individual is selected as follows. A. Generate a uniform random number \(\chi(t)\), \(\chi(t)\in[0,1]\). B. If \(\sum \limits _{i = 0}^{k - 1} {P(i)} \le \chi (t) < \sum \limits _{i = 1}^{k} {P(i)} (1 \le k \le {N_{g}})\), where \(P(0)=0\), then \({{\mathbf {S}}_{g}^{k}}\) is selected. Step 3: Crossover. For two adjacent selected individuals, a random number \(\kappa_c\) from the range [0,1] is generated, and the crossover operator is carried out only when \(\kappa_c < P_c\). The crossover point is selected randomly, and all bits beyond that point are swapped between the two parent individuals, yielding two child individuals. Step 4: Mutation. For each individual, a random number \(\kappa_m\) from the range [0,1] is generated, and the mutation operator is carried out only when \(\kappa_m < P_m\), through a one-bit flip at a random mutation position. Step 5: Judgment. \(N_{\text{pop}}=N_{\text{pop}}+1\). After steps 2–4, a new population \({{\mathbf {C}}^{N_{\text {pop}}+1}}\phantom {\dot {i}\!}\) is formed. If \(N_{\text{pop}}<N_{\max}\), go to step 2; otherwise, stop. The generator polynomial with the lowest pinch-off SNR in \({{\mathbf {C}}^{N_{\text {pop}}+1}}\phantom {\dot {i}\!}\) is chosen as the optimum one.
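Steps 1 to 5 can be condensed into a generic GA skeleton. The sketch below is illustrative: the fitness function passed in is a stand-in (the real one requires the pinch-off-SNR computation of the previous paragraphs), and the default population size, generation count, \(P_c\), and \(P_m\) are arbitrary choices, not the paper's settings. It implements roulette-wheel selection, single-point crossover with probability \(P_c\), and a one-bit mutation with probability \(P_m\):

```python
import random

def ga_search(fitness, n_bits, pop_size=20, n_gen=30, p_c=0.9, p_m=0.02, seed=1):
    """Generic GA skeleton: roulette-wheel selection, single-point
    crossover, and per-individual one-bit mutation (steps 1-5 in the text)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        f = [fitness(s) for s in pop]
        total = sum(f)
        def pick():
            # roulette-wheel selection proportional to fitness
            x, acc = rng.random() * total, 0.0
            for s, fi in zip(pop, f):
                acc += fi
                if x < acc:
                    return list(s)
            return list(pop[-1])
        nxt = [pick() for _ in range(pop_size)]
        # single-point crossover on adjacent selected pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_c:
                cut = rng.randrange(1, n_bits)
                nxt[i][cut:], nxt[i + 1][cut:] = nxt[i + 1][cut:], nxt[i][cut:]
        # one random bit flip per individual with probability p_m
        for s in nxt:
            if rng.random() < p_m:
                s[rng.randrange(n_bits)] ^= 1
        pop = nxt
    return max(pop, key=fitness)

# Toy usage: a known target string stands in for the pinch-off-SNR fitness
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
fit = lambda s: 1 + sum(a == b for a, b in zip(s, target))
best = ga_search(fit, len(target))
```

In the actual optimization, the bit string encodes \(<\mathbf{g}_1,\ldots,\mathbf{g}_{N_{\text{out}}}>\) and `fitness` evaluates \(\overline{\text{SNR}}_{\max}\mathrm{(dB)} + 0.1 - \overline{\text{SNR}}_i\mathrm{(dB)}\), which is strictly positive as the RWS step requires.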
Using the method above, a rate-1/2 BCC with \(N_{\text{Reg}}=5\) is optimized for the natural-labeling 16QAM JCMD-ID system. Based on the AMI analysis in Section 4, the optimal rotation angle for the rate-1/2 16QAM JCMD-ID system is 45°. The optimal generator polynomial is \([63,32]_{8}\). As shown in Figure 11, the natural-labeling demapper's EXIT curve with 45° rotation keeps a narrow open tunnel with that of the BCC decoder, while its non-rotation curve intersects the decoder curve, which shows the effect of rotation on the ID system. The EXIT curves of the other two labeling demappers, both with and without rotation, always intersect the decoder curve, so the Gray and reference labelings are not suitable for the BCC decoder. Therefore, the natural labeling with 45° rotation matches well with the \([63,32]_{8}\) BCC code and has the best performance. Simulation results Results of the non-ID JCMD system on fast fading channels For the non-ID JCMD MIMO system, the optimal Gray labeling and the powerful DVB-T2 LDPC code are used to achieve excellent performance [32]. The optimal rotation angle is obtained by maximizing the BICM-AMI in Figure 9. The coded block size N is 64,800 bits. For the LDPC decoder, the log-belief propagation (BP) algorithm with a maximum of 30 iterations is utilized. In order to ensure the fairness of the comparison, SVD precoding is implemented for both the conventional BICM MIMO system and the proposed JCMD MIMO system. Figure 12 shows the bit error rate (BER) performance comparison of rate-1/2 LDPC-coded BPSK MIMO systems on the independent fast Rayleigh fading channel. The channel fading coefficients on each symbol are independent and identically distributed Rayleigh random variables with unit variance. BPSK is a simple real-valued constellation, so the optimal rotation angle is 45°. For 4×4 MIMO systems, JCMD with 45° rotation obtains significant SNR gains compared with JCMD without rotation. At the target BER of \(10^{-5}\), it achieves a 2.2 dB SNR gain.
Note that the spatial interleaver, the spatial Q-component interleaver, and the time Q-component interleaver are all implemented in the JCMD scheme both with and without rotation. This confirms that the performance gain of JCMD over the BICM scheme does not come merely from the interleavers; significant gain is still obtained through the optimal rotation and the matching of channel coding and labeling. Furthermore, the 4×4 JCMD MIMO system with the reverse spatial Q interleaver obtains about 0.6 dB SNR gain compared with that with the cyclic-shift spatial Q interleaver. These results coincide well with the analysis in Section 3. BER performance of rate-1/2 DVB-T2 LDPC coded, BPSK 4 × 4 JCMD MIMO systems on fast fading channels For the QPSK 4×4 MIMO system, the optimal angles for the 1/2 and 3/4 rates are 18° and 25°, respectively. As shown in Figure 13, JCMD systems with the optimal rotation angle obtain 0.6 and 3.9 dB SNR gains compared with the conventional BICM systems employing ideal SVD precoding for the low rate 1/2 and the high rate 3/4, respectively. Given the optimal rotation angle, JCMD-MIMO with the reverse interleaver obtains significant 0.3 and 1.3 dB SNR gains at BER \(=10^{-5}\) compared to that with the cyclic-shift interleaver for code rates 1/2 and 3/4, respectively. Meanwhile, the proposed rate-1/2 JCMD system with the reverse interleaver is only about 1.1 dB away from the JCMD limit for 4×4 MIMO. For the rate-3/4 JCMD MIMO, the gap to the JCMD limit is reduced to 0.7 dB. SNR gains and gaps are summarized in Table 1. The results show that the SNR gain increases with the code rate.
BER performance of rate-1/2 and 3/4 DVB-T2 LDPC coded, QPSK 4 × 4 JCMD MIMO systems on fast fading channels Table 1 SNR gains and gaps to the capacity for JCMD 4×4 MIMO systems Results of the JCMD-ID system on fast fading channels For the JCMD-ID 16QAM MIMO system, the optimal rotation angle is obtained by maximizing the CM-AMI in Figure 9; it is 45° at rate 1/2 for 4×4 MIMO systems. In order to confirm our optimization method for the JCMD-ID MIMO system, simulations are carried out with the proposed \([63,32]_{8}\) BCC-coded JCMD-ID scheme on fast fading channels for 4×4 MIMO systems. The coded block size is N=64,800 bits, and the maximum number of global iterations is 30. The powerful rate-1/2 DVB-T2 LDPC-coded Gray-labeling BICM and BICM-ID schemes with the same block size are also simulated as references, and the same ideal SVD precoding method is used for the conventional BICM and BICM-ID schemes and the proposed JCMD-ID scheme. The BER performance comparisons are shown in Figure 14. BER performance of rate-1/2, 16QAM 4 × 4 JCMD-ID MIMO systems on fast fading channels For the 4×4 MIMO system with natural labeling, the optimized \([63,32]_{8}\) BCC-coded JCMD-ID scheme with 45° rotation and the reverse Q interleaver exhibits excellent performance, lying only 1.3 and 1.1 dB away from the Gaussian-input capacity and the JCMD-ID limit, respectively. It obtains about 2.9 dB SNR gain compared with the BCC-coded BICM-ID scheme and JCMD-ID without rotation, which coincides with the above EXIT analysis. In addition, the scheme with the reverse interleaver obtains 0.4 dB SNR gain at BER \(=10^{-5}\) compared with that with the cyclic-shift Q interleaver, which also confirms the above analysis. Furthermore, the optimized JCMD-ID scheme also outperforms the DVB-T2 LDPC-coded BICM-ID scheme and the turbo-coded BICM-ID scheme in [24] by 0.9 and 1.4 dB, respectively. For the JCMD-ID and conventional BICM-ID schemes, the main complexity lies in the channel decoding.
The Max-Log-MAP algorithm is a simplified algorithm for the decoding of turbo codes and the BCC. For each decoding iteration, it requires \(10 \times 2^{N_{\text {Reg}}}+11 \phantom {\dot {i}\!}\) additions and \(5 \times 2^{N_{\text {Reg}}}-2 \phantom {\dot {i}\!}\) maximum operations [33]. BCC decoding needs only one iteration (one Max-Log-MAP operation), while turbo decoding needs eight iterations (16 Max-Log-MAP operations). For the Log-BP algorithm of LDPC decoding, each iteration requires \(Md_{c}(d_{c}-1)+Nd_{v}(d_{v}-1)+Nd_{c}\) additions and \(M{d_{c}^{2}}\) look-up table operations, where M is the number of parity bits, and \(d_{c}\) and \(d_{v}\) are the average degrees of the check nodes and variable nodes, respectively [34]. LDPC decoding requires 30 iterations. Thus, we obtain the average decoding complexity per information bit, as shown in Table 2. Assuming equal complexity for the three operations (addition, max, look-up), the decoding complexity of the BCC is just 44.3% and 28.2% of that of turbo and LDPC decoding, respectively. Therefore, the optimized BCC has much lower complexity than the DVB-T2 LDPC and turbo codes. This indicates that the optimized scheme with BCC coding obtains much better performance with much lower complexity. Table 2 Average decoding complexity for each information bit A high-spectral-efficiency JCMD scheme over MIMO fading channels is proposed. By jointly optimizing the component interleaver, the rotated modulation, and the BCC code, this scheme exhibits excellent performance. An optimum spatial component interleaver is proposed to maximize the achievable rate. For real-valued signals, we prove that the achievable rate of JCMD MIMO is greater than that of the conventional BICM MIMO scheme and that \(\frac {\pi }{4}\) is the optimal rotation angle. For the rotated QAM, the optimal rotation angles for the MIMO system are investigated according to the AMI-maximization criterion.
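Returning to the decoding-complexity comparison above, the operation counts can be tabulated directly. The Max-Log-MAP and Log-BP formulas are those quoted from [33] and [34]; the turbo constituent memory (taken as 3 here) and the sample LDPC degree profile are illustrative assumptions, not values from the paper:

```python
def maxlogmap_ops(n_reg):
    """Additions and max operations for one Max-Log-MAP pass [33]."""
    return 10 * 2 ** n_reg + 11, 5 * 2 ** n_reg - 2

def logbp_ops_per_iter(M, N, d_c, d_v):
    """Additions and table look-ups for one Log-BP iteration [34]."""
    return M * d_c * (d_c - 1) + N * d_v * (d_v - 1) + N * d_c, M * d_c ** 2

# BCC with N_Reg = 5 needs a single Max-Log-MAP pass ...
add_b, max_b = maxlogmap_ops(5)      # 331 additions, 158 max operations
# ... while turbo decoding needs 8 iterations x 2 constituent decoders,
# with constituent memory assumed to be 3 here
add_t, max_t = maxlogmap_ops(3)
turbo_ops = 16 * (add_t + max_t)
print(add_b + max_b, turbo_ops)      # 489 2064
```

Dividing such per-block totals by the number of information bits, and weighting the three operation types equally, yields the per-bit comparison of Table 2.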
For the JCMD-ID MIMO system, a simple GA-based search algorithm for the BCC generator polynomials is also proposed to match the rotated modulation. Simulation results show that this new scheme can significantly outperform the conventional turbo-coded BICM-ID scheme over MIMO fading channels by a 1.4 dB SNR gain, while having much lower complexity. In short, the proposed JCMD scheme is simple, efficient, and robust for future wireless communication systems. C Felita, M Suryanegara, in Proceedings of International Conference on QiR (Quality in Research): 25-28 June 2013; Yogyakarta. 5G key technologies: identifying innovation opportunity (IEEE,Piscataway, 2013), pp. 235–238. G Caire, G Taricco, E Biglieri, Bit-interleaved coded modulation. IEEE Trans. Inf. Theory. 44(3), 927–946 (1998). X Li, JA Ritcey, Bit-interleaved coded modulation with iterative decoding. IEEE Commun. Lett. 1(6), 169–171 (1997). B Vucetic, J Yuan, Space-Time Coding (John Wiley & Sons, Inc., England, 2003). G Foschini, Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas. Bell Labs Tech. J. 1(2), 41–59 (1996). HE Gamal, AR Hammons, The layered space-time architecture: a new perspective. IEEE Trans. Inf. Theory. 1, 2321–2334 (2001). J Boutros, E Viterbo, Signal space diversity: a power and bandwidth efficient diversity technique for the Rayleigh fading channel. IEEE Trans. Inf. Theory. 44(4), 1453–1467 (1998). CA Nour, C Douillard, in Proceedings of 5th International Symposium on Turbo Codes and Related Topics: 1-5 Sept. 2008; Lausanne. Improving BICM performance of QAM constellations for broadcasting applications (IEEE,Piscataway, 2008), pp. 55–60. NF Kiyani, UH Rizvi, JH Weber, GJM Janssen, in Proceedings of IEEE Wireless Communications and Networking Conference: 11-15 March 2007; Kowloon. Optimized rotations for LDPC-coded MPSK constellations with signal space diversity (IEEE,Piscataway, 2007), pp. 677–681.
NF Kiyani, JH Weber, in Proceedings of IEEE Symposium on Communications and Vehicular Technology in the Benelux: 15-15 Nov. 2007; Delft. OFDM with BICM-ID and rotated MPSK constellations and signal space diversity (IEEE,Piscataway, NJ, USA, 2007), pp. 1–4. NF Kiyani, JH Weber, EXIT chart analysis of iterative demodulation and decoding of MPSK constellations with signal space diversity. J. Commun. 3(3), 43–50 (2008). NH Tran, HH Nguyen, T Le-Ngoc, Performance of BICM-ID with signal space diversity. IEEE Trans. Wireless Commun. 6(5), 1732–1742 (2007). M Zhenzhou, S Zhiping, Z Chong, Z Zhongpei, in Proceedings of International Conference on Communications, Circuits and Systems: 25-27 May 2008; Fujian. Design of signal space diversity based on non-binary LDPC code (IEEE,Piscataway, 2008), pp. 31–34. W Zhanji, W Wenbo, Improved coding-rotated-modulation orthogonal frequency division multiplexing system. IET Commun. 6(3), 272–280 (2012). W Zhanji, W Wenbo, in Proceedings of 2010 Global Mobile Congress: 18-19 Oct. 2010; Shanghai. A novel joint-coding-modulation-diversity OFDM system (IEEE,Piscataway, 2010), pp. 1–6. X Qiuliang, S Jian, P Kewu, Y Fang, W Zhaocheng, Coded modulation with signal space diversity. IEEE Trans. Wireless Commun. 10(2), 660–668 (2011). N Sharma, CB Papadias, Improved quasi-orthogonal codes through constellation rotation. IEEE Trans. Wireless Commun. 51(3), 332–335 (2003). W An Zhong, Z Jian Kang, in Proceedings of 11th Canadian Workshop on Information Theory (CWIT): 13-15 May 2009; Ottawa. Novel rotation angle for quasi-orthogonal space-time block codes (IEEE,Piscataway, 2009), pp. 213–216. W Su, X-G Xia, Signal constellations for quasi-orthogonal space time block codes with full diversity. IEEE Trans. Inf. Theory. 50(10), 2331–2347 (2004). D Dung Ngoc, C Tellambura, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM): Dec. 2005; St. Louis.
Optimal rotations for quasi-orthogonal STBC with two-dimensional constellations (IEEE,Piscataway, 2005), pp. 2317–2321. L Yueqian, M Salehi, in Proceedings of 46th Annual Conference on Information Sciences and Systems (CISS): 21-23 March 2012; Princeton. Coded MIMO systems with modulation diversity for block-fading channels (IEEE,Piscataway, 2012), pp. 1–5. H Lee, A Paulraj, MIMO systems based on modulation diversity. IEEE Trans. Wireless Commun. 58(12), 3045–3049 (2010). H Soon Up, C Jinyong, J Sungho, R Ho Jin, S Jongsoo, in Proceedings of IEEE International Symposium on Broadband Multimedia Systems and Broadcasting: 13-15 May 2009; Bilbao. Performance evaluation of MIMO-OFDM with signal space diversity over frequency selective channels (IEEE,Piscataway, NJ, USA, 2009), pp. 1–5. BM Hochwald, S ten Brink, Achieving near-capacity on a multiple-antenna channel. IEEE Trans. Commun. 51(3), 389–399 (2003). SY Goff, Signal constellation for bit-interleaved coded modulation. IEEE Trans. Inf. Theory. 49(1), 307–313 (2003). FM Gardner, JD Baker, Simulation Techniques: Models of Communication Signal and Processes (Wiley, New York, 1995). S ten Brink, Convergence behavior of iteratively decoded parallel concatenated codes. IEEE Trans. Commun. 49(10), 1727–1737 (2001). S ten Brink, Designing iterative decoding schemes with the extrinsic information transfer chart. AEU Int. J. Electron. Commun. 54(6), 389–398 (2000). A Ashikhmin, G Kramer, S ten Brink, Extrinsic information transfer functions: model and erasure channel properties. IEEE Trans. Inf. Theory. 50(11), 2657–2673 (2004). S Pfletschinger, F Sanzi, Error floor removal for bit-interleaved coded modulation with iterative detection. IEEE Trans. Wireless Commun. 5(11), 3174–3181 (2006). P Guo, W Xuezhi, H Yingshi, in Proceedings of 3rd International Conference on Biomedical Engineering and Informatics: 16-18 Oct. 2010; Yantai. The enhanced genetic algorithms for the optimization design (IEEE,Piscataway, 2010), pp.
2990–2994. Frame structure channel coding and modulation for a second generation digital terrestrial television broadcasting system (DVB-T2). ETSI Std. DVB Document A122 [S] (June 2008). P Robertson, E Villebrun, P Hoeher, in Proceedings of IEEE International Conference on Communications: 18-22 Jun 1995; Seattle. A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain, (1995), pp. 1009–1013. C Jinghu, A Dholakia, E Eleftheriou, MPC Fossorier, Reduced-complexity decoding of LDPC codes. IEEE Trans. Commun. 53(8), 1288–1299 (2005). This work is sponsored by the National Natural Science Fund (61171101), National Great Science Specific Project (2009ZX03003-011-03) of People's Republic of China and 2014 Doctorial Innovation Fund of BUPT (CX201426) and the Fundamental Research Funds for the Central Universities. School of Telecommunication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road 10, Beijing, China Zhanji Wu & Xiang Gao Correspondence to Zhanji Wu. Wu, Z., Gao, X. An efficient MIMO scheme with signal space diversity for future mobile communications. J Wireless Com Network 2015, 87 (2015). https://doi.org/10.1186/s13638-015-0301-x Quadrature amplitude modulation (QAM) Signal space diversity (SSD) Multiple-input multiple-output (MIMO) Iterative demodulation and decoding (ID) Extrinsic information transfer chart (EXIT) 5G Wireless Mobile Technologies
Shi Chen, Qin Li, Song Gao, Yuhao Kang & Xun Shi
Most models of the COVID-19 pandemic in the United States do not consider geographic variation and spatial interaction. In this research, we developed a travel-network-based susceptible-exposed-infectious-removed (SEIR) mathematical compartmental model system that characterizes infections by state and incorporates inflows and outflows of interstate travelers. Modeling reveals that curbing interstate travel when the disease is already widespread will make little difference. Meanwhile, increased testing capacity (facilitating early identification of infected people and quick isolation) and strict social-distancing and self-quarantine rules are most effective in abating the outbreak. The modeling has also produced state-specific information.
For example, for New York and Michigan, isolation of persons exposed to the virus needs to be imposed within 2 days to prevent a broad outbreak, whereas for other states this period can be 3.6 days. This model could be used to determine the resources needed before safely lifting state policies on social distancing. The Coronavirus disease (COVID-19) is an ongoing pandemic that poses a global threat. As of March 26, 2020, more than 520,000 cases of COVID-19 had been reported in over 200 countries and territories, resulting in approximately 23,500 deaths1,2,3,4,5,6,7,8,9. In the United States, the first known positive case was identified in Washington state on January 20, 202010. By March 26, the epidemic had spread rapidly across many communities and was present in all 50 states plus the District of Columbia; the total number of confirmed cases in the United States rose to 78,786, with 1137 deaths. To combat the spread of COVID-19, the government has taken actions in various dimensions, including banning or discouraging domestic and international travel, announcing stay-at-home orders to curb non-essential interactions and reduce the transmission rate, and urging commercial laboratories to increase test capacity. To curb traveling, on January 31, the United States government announced travel restrictions on travelers from China; on February 29, it announced a travel ban on Iran and advised caution for travel to Europe11; on March 11, it announced travel restrictions on most European countries. To reduce human interactions, on March 13, a national emergency was declared; as of March 28, 39 states had issued either statewide or regional stay-at-home or shelter-in-place orders, requiring residents to stay indoors except for essential activities.
To increase test capacity, on February 4, the United States Food and Drug Administration (FDA) approved the United States Centers for Disease Control and Prevention (CDC)'s test, which later proved inconclusive12; on February 29, the FDA relaxed its rules for some laboratories, allowing them to start testing before the agency granted its approval; on March 27, the FDA issued an Emergency Use Authorization to a medical device maker, Abbott Labs, for the use of a coronavirus test that delivers quick results13. So far, since there is no treatment or vaccine for SARS-CoV-2 available, these actions have been taken largely based on classic non-pharmaceutical epidemic controls. Works evaluating similar measures in other countries, especially China, have started to emerge7,14,15. For example, the effect of travel restriction on delaying the virus spread in China has been reported5,16. However, it is still unclear what control and intervention measures would have an actual effect, and to what extent, on abating the spread of COVID-19 in the United States. As the United States has very different political, administrative, social, public health, and medical systems, as well as a different culture, from China, this remains a critical question to address, especially considering that some measures and policies come with extremely high economic and societal costs. There have been numerous modeling works projecting or predicting the trend of the COVID-19 pandemic regionally or globally17,18. Most of these works apply a global model to the entire study area, whether a region, a country, or the entire globe. The variation among different parts of one area and the interactions among those parts are rarely taken into consideration. However, a country like the United States features diversity in all aspects.
On the one hand, the overall situation of the entire country is a result emerging from local situations and their interactions, and thus ignoring the local interactions can hardly lead to a high-quality overall model; on the other hand, as all interventions and policies ultimately have to be adapted to the local situation, localized modeling is much more relevant to real-world practice. Spatial and network-based epidemic models can describe the geographical spread of viral dynamics7,19,20,21. Recent studies have shown the importance of incorporating timely human mobility patterns derived from mobile phone big data and global flight networks into the epidemiological modeling process and in public health studies5,7,22,23,24,25,26,27,28,29,30. Without accurate models that incorporate human mobility patterns and spatial interactions26,27, it is rather challenging to quantify the sensitivity of parameters and to link them to real practices in order to make sensible policy suggestions. Accordingly, the core of the study is twofold. First, to localize the modeling, we developed a travel-network-based susceptible-exposed-infectious-removed (SEIR) mathematical compartmental model system that simultaneously characterizes the spatiotemporal dynamics of infections in 51 areas (50 states and the District of Columbia). Each state or district has its own model, and all models simultaneously take into account inflows and outflows of interstate travelers. Second, to improve the practical relevance, we chose to use three parameters that directly correspond to possible practical means to discover, combat, and control the spread of the disease, and we quantify their impact on the final output of the model.
The three parameters are: (1) the transmission rate b, which corresponds to the local social-distancing enforcement, e.g., the stay-at-home order; (2) the detection and reporting rate r, which corresponds to the testing capacity; and (3) the travel ratio \(\alpha _t\), which is the ratio of the interstate travel volume to that of 2019 during the same period. The modeling is a dynamic projection process (see the 'methods' section). We employed daily, state-specific historical data to incrementally calibrate the model, and then used the calibrated model to predict future scenarios under different non-pharmaceutical control and intervention measures. During this process, we ran data assimilation methods to identify parameter values that optimally fit the current situation (see more details in the methods and supplementary material). To project into the future, we set different values for the parameters to create different control and intervention scenarios, and then ran the simulation to see their impact on the model results. The final output of the model is the total number of confirmed cases in a state on a particular day. The current strategy in the United States is to isolate people who have the symptoms of COVID-19. An ideal scenario is to have a \(100\%\) reporting rate, i.e., every infected case gets confirmed and thus isolated quickly. Another ideal setting is to have everyone who was in contact with the infected identified and isolated quickly as well. Our model incorporates these considerations and examines such direct isolation of the exposed compartment in detail. We particularly investigated the impact of the speed of such actions through mathematical modeling and scenario analysis. A notable result from our modeling is that the impact of interstate travel restriction on the model output is modest.
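The model system just described can be sketched as a metapopulation SEIR update, assuming NumPy. The three policy parameters enter exactly as described: \(b\) scales the force of infection, \(r\) is the fraction of infectious people who are detected and therefore isolated sooner, and \(\alpha_t\) scales the interstate flow matrix. All rates, the compartment structure details, and the flow matrix below are illustrative stand-ins, not the paper's calibrated, state-specific values or its exact equations:

```python
import numpy as np

def step(S, E, I, R, F, b, r, alpha_t,
         sigma=1 / 5.2, g_rep=1 / 3.5, g_unrep=1 / 7.0, dt=1.0):
    """One Euler step of a toy travel-network SEIR.

    S, E, I, R : per-state compartments (1-D arrays)
    F          : daily interstate travel matrix, F[i, j] = travelers i -> j,
                 scaled by the travel ratio alpha_t
    Reported cases (fraction r) leave I faster than unreported ones.
    """
    N = S + E + I + R
    newE = b * I / N * S                  # within-state force of infection
    newI = sigma * E                      # incubation -> infectious
    rem = (r * g_rep + (1 - r) * g_unrep) * I
    Ft = alpha_t * F
    def flux(X):
        # S and E travel; the infectious are assumed isolated or too ill
        out = Ft.sum(axis=1) * X / N      # leaving each state
        inn = Ft.T @ (X / N)              # arriving from other states
        return inn - out
    S2 = S + dt * (-newE + flux(S))
    E2 = E + dt * (newE - newI + flux(E))
    I2 = I + dt * (newI - rem)
    R2 = R + dt * rem
    return S2, E2, I2, R2
```

Stepping such a system forward with historical case counts and re-fitting b and r daily corresponds to the incremental calibration and data assimilation loop described above; sweeping b, r, and \(\alpha_t\) then produces the scenario comparisons discussed next.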
This can be explained by the fact that, when the disease is already widespread in all states, the relatively small number of cases among travelers makes little difference to the local situation compared with the effects of local social-distancing and isolation rules and the increase of testing capacity. Figure 1 shows the effect on the spatiotemporal dynamics of the infectious population across states of setting the coefficients to different configurations. An interactive map-based scenario simulation web dashboard is also available at https://geods.geography.wisc.edu/covid19/us_model. We set \(r = 1-\alpha _r(1-r_0)\) and \(b=\alpha _bb_0\), where \(r_0\) and \(b_0\) are the report and transmission rates as of March 20, 2020, from the data assimilation fitting results. By decreasing \(\alpha _r\) from 1 to 0, we increase the report rate from the original \(r_0\) to 1, and by decreasing \(\alpha _b\) we decrease the transmission rate. Most states, except a few such as NY, MI, and CA, see drastic improvement when the transmission rate is decreased and the testing (reporting) rate is increased, but the reduction of interstate traffic alone is not as effective. Our modelling reveals that once the epidemic in an area has reached a certain stage, the relatively small number of cases imported through interstate travel makes an insignificant difference to the local situation. According to our modeling, all states in the United States have reached that stage. Therefore, as long as travelers follow the social-distancing rules and the local government provides sufficient testing capacity, there is no apparent urgency to curb interstate travel. This is in line with the finding in16,28, in which the authors projected the pick-up of spreading in other parts of China outside of Wuhan with about a 3-day delay, and in the world outside China within a 2–3 week delay, assuming no further screening is in place.
Different from China, where the city of Wuhan was clearly the epicenter of the COVID-19 outbreak and the travel ban quickly brought the rest of China under control, most of the states in the United States had already shown signs of community spread by March 20, 202031, and banning travel from other states will hardly make much difference to the local situation. In addition, Fig. 2 shows the corresponding predicted time series of the infectious population in the top 15 states under two scenarios (see also Supplementary Fig. S14): (A) the reported rate and the transmission rate remain unchanged as of March 20, 2020, with \(\alpha _r = \alpha _b = 1\), in which case most states will continue their exponential growth before reaching their peak; (B) with \(\alpha _r = \alpha _b=0.1\), that is, when the transmission rate b is much smaller and the reported rate r is much higher (closer to 1), we can "flatten the curve" (i.e., reduce the spread of the virus). We further investigate the effect of increased testing capacity and report rate. As shown in Fig. 3a, most states see drastic improvement when the report rate increases. All states, by April 29, see a monotonic exponential reduction of infections. The impact is strong in states such as MA, AZ, FL, and OR, but relatively weak in states such as NY, MI, and IL. In Fig. 3b, we study the effect of \(\alpha _r\) and \(\alpha _b\) on the effective reproduction number \(R_e\) in NY (see other states in Supplementary Fig. S15). It can be seen that raising the report rate alone cannot bring \(R_e\) below 1. To mitigate the spread of COVID-19 in these states, a proactive approach needs to be taken: quick detection and isolation of the exposed population need to be in place instead of being delayed until the onset of symptoms. This measure can prevent the exposed population from potentially infecting other susceptible people. In Fig.
3c, we plot the increase of infections as a function of \(D_q\) (i.e., the temporal lag in putting a person into quarantine) for the states that are sensitive to changes of \(D_q\), including NY, NJ, IL, GA, MI, CO, WI, LA, TX, PA, MA, and TN. The longer one waits to inform and isolate the exposed population, the more infected people one observes. For example, there is a sharp transition for NY and MI: if the average detection and isolation time exceeds 2 days, the total number of infections increases significantly. The results again show the importance of sufficient testing and of strong transmission-intervention measures such as social distancing and self-quarantine policies 32. These policies help quickly identify sources of infection and isolate them before they infect the remaining population, presumably at a lower economic cost. We finally investigate the robustness of our conclusions with respect to the parameters chosen in the model. A number of parameters in the model are determined from medical studies and thus necessarily contain ambiguity. One parameter, \(\gamma\), is especially hard to pin down due to the lack of medical evidence. It reflects the level of infectiousness of the "exposed" compartment, a population that is presymptomatic. Recent studies indicate that presymptomatic patients may be more infectious than patients who already show symptoms 33. We therefore run our model with different values of \(\gamma\) to assess the significance of this parameter. Our numerical results suggest that within a moderate range of \(\gamma\), our conclusions still hold. In particular, as shown in Fig. 4, setting the "exposed" compartment to be more infectious than the "infected" compartment yields the same trend.
We still observe that, with a higher report rate, the non-infected population increases exponentially (i.e., fewer people get infected), and that when a proactive approach is taken, meaning that the "exposed" compartment is quickly separated from the rest of the population, the non-infected population increases drastically as \(D_q\), the delay of the separation time, is shortened. The dependence of our conclusions on the parameter \(\gamma\) is therefore stable, and the statements above are consistent. We should emphasize that our simulation does not differentiate patients with severe and mild symptoms. A more detailed numerical experiment that separates the two categories could give more fine-grained information. For example, in another agent-based modeling study 34, researchers consider patients with mild to severe symptoms to evaluate the impacts of the timing of social distancing and the adherence level on COVID-19 confirmed cases. Modeling and analyzing the spread of COVID-19, and assessing the effect of various policies, can be instrumental to national and international agencies for health response planning 5,8,15,16,17,32. We show that the effect of interstate travel reduction is at most modest in the United States once the outbreak is already widespread in all states. On the other hand, strong transmission-reduction interventions, together with increased testing capacity and report rate, are needed to contain the spread of the virus. The result is based on mathematical and statistical analyses of transmission control measures and agrees with previous findings 2,3,5,14,15,16 that the effect of a travel ban at a later stage of an outbreak is rather modest. This is also in line with the fact that outbreaks still occurred in Europe despite the strict travel ban imposed on the early epicenter of Wuhan and its surrounding cities in China.
We also quantitatively show that transmission-reduction interventions such as social-distancing and shelter-in-place rules, and an increased testing rate that facilitates immediate isolation upon exposure, significantly reduce the total infected population. This effect is most visible for the states of NY, NJ, MI, and IL. In particular, our modeling results show that for states such as NY and MI, achieving an optimal infection reduction requires a more proactive approach that quickly identifies the exposed population and isolates them within two days of exposure. The result is in agreement with previous findings 7,8. We do need to emphasize that the model itself does not distinguish different modes of traveling across states. Indeed, if interstate travel is conducted mostly by transit through busy airports and train stations, and the social-distancing policy is not strictly enforced, then the high population density at these places will raise the transmission rate b locally in space and time, leading to a higher infection rate. This is a severe consequence, but it should not be counted as a direct result of relaxing travel restrictions. Moving forward, we estimate that the decline in travel has a modest effect on the mitigation of the pandemic. Stronger transmission-reduction interventions and increased detection and report rates need to be in place to prevent the further spread of the virus. The results could potentially be used to design an optimal containment scheme for mitigating and controlling the spread of COVID-19 in the United States. The mathematical model that simulates the spatiotemporal dynamics of state-level infections in the United States is a travel-network-based SEIR compartmental model, modified to take into account the variation of the 51 administrative units and their interactions 14,35,36,37.
It consists of 51 ordinary differential equation (ODE) systems, each characterizing the evolution of susceptible (S), exposed (E), reported (I), unreported (U), and removed (R) cases per state (Supplementary Fig. S1; see more details in the supplementary material). The 51 ODE systems are coupled through the state-to-state travel network flows (see Supplementary Fig. S2), which were extracted from the aggregated SafeGraph mobility data and weighted by \(\alpha _t\) 38,39. Unlike most other models, we also incorporate potential asymptomatic transmission, which changes the derivation of the basic reproduction number \(R_0\). In addition, each ODE system includes two unknown parameters: the transmission rate (b) and the report rate (r) for each state. The unknown parameters are inferred from the total number of confirmed cases in each state for the period of March 1–March 20, 2020. The source of the infection case data is the Center for Systems Science and Engineering at Johns Hopkins University 9.
The parameters and model specification are defined as follows: $$\begin{aligned} \left\{ \begin{aligned}&\frac{\mathrm {d} S_i}{\mathrm {d} t} = -\frac{b_i S_i(U_i+\gamma E_i)}{P_i} + \sum _{j\ne i}\alpha _t n_{ij}\frac{S_j}{P_j} -\sum _{j\ne i}\alpha _t n_{ji}\frac{S_i}{P_i} \\&\frac{\mathrm {d} E_i}{\mathrm {d} t} = \frac{b_i S_i(U_i+\gamma E_i)}{P_i} - \frac{E_i}{D_e} + \sum _{j\ne i}\alpha _t n_{ij}\frac{E_j}{P_j} -\sum _{j\ne i}\alpha _t n_{ji}\frac{E_i}{P_i} \\&\frac{\mathrm {d} I_i}{\mathrm {d} t} = r_i \frac{E_i}{D_e} - c_{I}\frac{I_i}{D_{c}} - (1-c_{I})\frac{I_i}{D_{l}} \\&\frac{\mathrm {d} U_i}{\mathrm {d} t} = (1-r_i)\frac{E_i}{D_e} - c_U\frac{U_i}{D_{c}} - (1-c_U)\frac{U_i}{D_{l}} + \sum _{j\ne i}\alpha _t n_{ij}\frac{U_j}{P_j} -\sum _{j\ne i}\alpha _t n_{ji}\frac{U_i}{P_i} \\&\frac{\mathrm {d} R_i}{\mathrm {d} t} = c_{I}\frac{I_i}{D_{c}} + (1-c_{I})\frac{I_i}{D_{l}} + c_U\frac{U_i}{D_{c}} + (1-c_U)\frac{U_i}{D_{l}} \end{aligned}\right. \,. \end{aligned}$$ The ODE system is equipped with the following initial data (\(t=0\) standing for March 1, 2020): $$\begin{aligned} S_i(0) = N_i - E_{i0} - U_{i0}-I_{i0}\,, \quad E_i(0) = E_{i0}\,, \quad I_i(0) = I_{i0}\,, \quad U_i(0) = U_{i0}\,,\quad R_i(0) = 0. \end{aligned}$$ In the equations, the unit of t is one day. \(N_i(t)\) is the total population of state i at time t, and \(P_i=S_i+E_i+U_i\) is the free population. \(n_{ij}\) is the traffic inflow from state j to state i. \(b_i\) and \(r_i\) are the transmission rate and reporting rate of state i. \(c_I\) (resp. \(c_U\)) is the proportion of reported cases I (resp. unreported cases U) in critical condition. \(D_e\) is the latent period. \(D_{c}\) and \(D_{l}\) are the infectious periods of critical and mild cases, respectively. \(\alpha _t\) is a parameter that tunes the traffic flow. We emphasize two main differences in modeling compared with existing literature.
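Before turning to those differences, the per-state dynamics above can be sketched in code. The function below is an illustrative forward-Euler step for a single state with the travel terms dropped; the function name and scenario numbers are ours, while the parameter defaults are the paper's values.

```python
def step_state(S, E, I, U, R, b, r, P,
               gamma=0.5, De=5.2, Dc=2.3, Dl=6.0, cI=0.1, cU=0.2,
               dt=1.0 / 24):
    """One forward-Euler sub-step (1/24 day) of the single-state SEIR-type
    dynamics; travel terms are omitted for clarity."""
    new_exposed = b * S * (U + gamma * E) / P
    dS = -new_exposed
    dE = new_exposed - E / De
    dI = r * E / De - cI * I / Dc - (1 - cI) * I / Dl
    dU = (1 - r) * E / De - cU * U / Dc - (1 - cU) * U / Dl
    dR = cI * I / Dc + (1 - cI) * I / Dl + cU * U / Dc + (1 - cU) * U / Dl
    return (S + dt * dS, E + dt * dE, I + dt * dI, U + dt * dU, R + dt * dR)

# Integrate 40 days for one hypothetical state (illustrative numbers).
S, E, I, U, R = 1e7, 500.0, 50.0, 100.0, 0.0
for _ in range(40 * 24):
    P = S + E + U  # free population
    S, E, I, U, R = step_state(S, E, I, U, R, b=1.2, r=0.2, P=P)
```

Note that the five increments sum to zero, so the total population is conserved by each step, as it must be in a closed compartmental model.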
In ref. 7, the authors study inter-city traffic and its impact on the spread of COVID-19 in China. The situations in China and in the US are very different. In China, the epicenter is clear: the city of Wuhan, Hubei province, and the outbreak started in mid-January 2020. The COVID-19 outbreak in the US, however, is multi-sourced. Consequently, in the model of ref. 7, the initial condition for cities except Wuhan is clear: the latent, reported, and unreported cases are all zero. In our model, however, the initial conditions \(E_{i0}\) are unclear for all states. Another major difference is that, according to clinical findings, latent cases also have the potential of transmitting the virus, and thus we add the interaction of \(E_i\) with \(S_i\) to the increment of \(E_i\) 7,40,41. The unknown parameters and state variables in the equation set are:
\(b_i\): the transmission rate, with non-informative prior range [1, 1.5];
\(r_i\): the report rate, with non-informative prior range [0.1, 0.3];
\(E_{i0}\): the initial latent population, with non-informative prior range [0, 500];
\(U_{i0}\): the initial unreported population, with non-informative prior range [0, 200];
\(S_{i0}\): the initial susceptible population, defined by \(N_i-E_{i0}-I_{i0}-U_{i0}\).
Other parameters are:
\(\gamma\): the relative infectiousness of the latent compartment compared with the unreported compartment; in the simulation we set it to 0.5;
\(D_c\): the average duration of infection for critical cases; we assume \(D_c = 2.3\) days 42;
\(D_e\): the average latent period; according to 43, \(D_e = 5.2\) days;
\(D_l\): the average duration of infection for mild cases; we assume \(D_l = 6\) days;
\(\alpha _t\): the ratio of interstate travel volume relative to the same period of 2019. The travel flow information \(n_{ij}\) was extracted from the SafeGraph mobility data, and we set \(\alpha _t=0.5\) to represent the travel reduction observed in 2020.
\(c_{I}\): the proportion of critical cases among all reported cases; we choose \(c_{I} = 0.1\);
\(c_{U}\): the proportion of critical cases among all unreported cases; we assume \(c_{U} = 0.2\).
There is an essential assumption made in the model: homogeneity of the population. That is, the traffic flow is taken as representative of the total population, without considering demographic or socioeconomic characteristics. The susceptible, exposed, and unreported move in and out of states at the same rate, which explains the \(\frac{S_i}{P_i}\), \(\frac{E_i}{P_i}\), and \(\frac{U_i}{P_i}\) terms in the \(S_i/E_i/U_i\) equations. The effective reproduction number \(R_e\) can be computed as $$\begin{aligned} R_e = \frac{b}{E+U}\left[ \gamma D_e E + \frac{D_c D_l U}{c_U D_l + (1-c_U)D_c}\right] \,. \end{aligned}$$ \(R_e\) depends on time through the time dependence of E and U. The COVID-19 transmission dynamics (the ODE system) was simulated using the forward Euler method, with each day discretized into 24 smaller time steps to ensure numerical stability (see Supplementary Fig. S3). The parameter fitting was conducted in a Bayesian formulation that combines the underlying dynamics governed by the ODE system, serving as the prior knowledge, with the collected data, entering through the likelihood function, to generate the posterior distribution of the state variables S, E, I, U, and R, as well as the two unknown parameters b and r. For this classical data assimilation problem, we employed the ensemble Kalman filter, a method derived from the Kalman filter and tailored to problems with high-dimensional state variables 44,45. The method is provably effective when the measurement operator is linear and the underlying dynamics is Gaussian-like, and it has been applied to a wide range of problems that do not strictly satisfy the Gaussianity requirement.
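Returning to the \(R_e\) expression above, it can be transcribed directly; the sketch below uses the paper's default parameter values, and the function name is ours.

```python
def effective_R(b, E, U, gamma=0.5, De=5.2, Dc=2.3, Dl=6.0, cU=0.2):
    """Effective reproduction number for current exposed (E) and
    unreported (U) populations, per the formula in the text."""
    # Mean infectious period of an unreported case: a weighted harmonic
    # mean of the critical (Dc, fraction cU) and mild (Dl) durations.
    period_U = Dc * Dl / (cU * Dl + (1.0 - cU) * Dc)
    return b / (E + U) * (gamma * De * E + period_U * U)
```

In the two limiting cases the formula reduces as expected: with only exposed individuals (U = 0) it gives \(b\gamma D_e\), and with only unreported individuals (E = 0) it gives b times the mean unreported infectious period.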
To apply this method, we generated 2000 samples from the prior distribution and evolved them through the dynamics of the ODE system. The samples were then rectified at the end of each day, using the announced number of confirmed cases, to tune the two unknown parameters b and r. At the beginning of the simulation, March 1, only a few states had non-zero confirmed cases. The true numbers of exposed people and unreported cases on that day, however, are unknown; these two quantities are also state variables that need to be inferred from the collected infection data. On March 1, we placed non-informative priors with ranges [0, 500] and [0, 200] over the exposed (latent) population and the unreported infectious population in each state, respectively. Supplementary Figs. S4–S13 show the data assimilation results for different states, including the number of people in the different compartmental groups and their temporal changes with \(95\%\) credible intervals. The average reporting rate r over all states inferred by the data assimilation method is 0.2266 as of March 20. For forecasting (see supplementary material), we performed two types of scenario studies. First, we ran the mathematical model forward for 40 days from the initial data obtained as of March 20, with different configurations of \((b,r,\alpha _t)\). The simulation results were then compared with those obtained when the three parameters remained unchanged for each state. To quantify and visualize the difference, we compared the increase in the percentage of the non-affected population when the measures of staying at home, increasing the test rate, and travel bans were enacted. The second scenario concerned a more ideal situation: every confirmed case would be isolated immediately, along with everyone who had been exposed to those confirmed cases, regardless of whether the exposed had started to show symptoms.
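The daily rectification step described earlier can be sketched as a perturbed-observation ensemble Kalman filter update. This is an illustrative reimplementation under our own naming, not the authors' code; here the ensemble rows are sampled state-and-parameter vectors, and `H` picks out the observed component (the cumulative reported-case count).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """One perturbed-observation EnKF analysis step for a scalar observation.

    ensemble: (n_samples, n_vars) array of prior samples.
    obs, obs_var: observed value and its assumed noise variance.
    H: (n_vars,) linear observation vector.
    """
    n = ensemble.shape[0]
    Hx = ensemble @ H                       # predicted observation per sample
    X = ensemble - ensemble.mean(axis=0)    # state anomalies
    y = Hx - Hx.mean()                      # observation anomalies
    P_xy = X.T @ y / (n - 1)                # state-observation covariance
    P_yy = y @ y / (n - 1) + obs_var        # observation variance + noise
    K = P_xy / P_yy                         # Kalman gain (vector)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    return ensemble + np.outer(perturbed - Hx, K)
```

Because the gain is built from sample covariances, components never observed directly (such as the latent population or the parameters b and r) are still nudged through their correlation with the observed case count, which is how the daily rectification tunes b and r.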
We built a new mathematical model that incorporates such isolations to study their effect: a new quarantined compartment (Q) was introduced into the model. Through the simulation, we examined the correlation between the average action-taking time (i.e., the temporal lag \(D_q\) in putting a person into quarantine) and the increase of the non-infected population. In both scenario studies, the simulation was run with the forward Euler ODE solver, with each day divided into 24 intervals to ensure numerical stability. As a SEIR-type epidemic model, this model describes the dynamics of different compartments of the population and assumes homogeneity within each compartment. We should note, however, that this assumption may not be valid in real-world scenarios with heterogeneous populations and infections. Indeed, when an individual contracts the disease, the case can be either mild or severe; in our model this is absorbed into the report rate \(r_i\) but is not explicitly differentiated. A more sophisticated model would include such heterogeneities, but it would pose significantly higher computational demands and require more detailed empirical or clinical data. We leave that to future research efforts.
Figure 1. The spatiotemporal distribution of the predicted infected population (in natural logarithm scale) across all states under different simulation scenarios: (A) \(\alpha _r = 1\) and \(\alpha _b = 1\), i.e., all parameters took the values of the initial configuration, obtained through the data assimilation method using the numbers of confirmed cases during March 1–March 20, 2020; (B) the travel flow was reduced to \(\alpha _t = 0.05\), while other parameter values remained unchanged; (C) \(\alpha _r = 0.1\) and \(\alpha _b=1\); (D) \(\alpha _r = 1\) and \(\alpha _b=0.1\); (E) \(\alpha _r = 0.1\), \(\alpha _b=0.1\).
In the simulations, the transmission rate was set to \(b = \alpha _b b_0\) and the reporting rate to \(r = 1-\alpha _r(1-r_0)\), where \(r_0\) and \(b_0\) were the reporting rate and the transmission rate on March 20, 2020, inferred from the data assimilation step. (Note: the maps are created using Esri's ArcGIS 10.7 software.)
Figure 2. The predicted time series of the total infected population in the 15 most affected states under two scenarios: (A) \(\alpha _r = \alpha _b = 1\), i.e., both the reported rate and the transmission rate remained unchanged; (B) \(\alpha _r = \alpha _b=0.1\), i.e., the transmission rate b was smaller and the reported rate r was larger (closer to 1), since \(r = 1-\alpha _r(1-r_0)\).
Figure 3. (A) Susceptible population (S) on April 29, 2020 as a function of \(\alpha _r\). \(S(\alpha _r = 1)\) is the susceptible population on April 29 computed with the report rate set to the original value inferred from the data assimilation step. In all states, S increases as \(\alpha _r\) decreases, meaning that more people stay unaffected when a higher report rate is enacted. (B) \(R_e\), the effective reproduction number, on April 29 for different \(\alpha _b\) and \(\alpha _r\) in NY. The red line is the level set \(R_e = 1\). Increasing the reported rate helps diminish the reproduction number, but cannot reduce \(R_e\) below 1 if the original transmission rate \(b_0\) is applied. (C) Susceptible population on April 29 for different \(D_q\). \(S(\alpha _r = 1)\) is the same as in (A). S depends significantly on the period from exposure to quarantine.
Figure 4. (A) Susceptible population (S) on April 4, 2020 as a function of \(\alpha _r\). The panels on the left and right show results for \(\gamma = 0.5\) and \(\gamma = 1.5\), respectively. For both values of \(\gamma\), S increases as \(\alpha _r\) decreases, meaning that more people stay unaffected when a higher report rate is enacted.
(B) Susceptible population on April 4, 2020 for different \(D_q\) and different values of \(\gamma\). For those states whose susceptible population is much smaller than their total population due to a high infection rate (such as NY), S depends significantly on \(D_q\) for both \(\gamma <1\) and \(\gamma >1\).
Data availability: The epidemiological data were retrieved from an open source project, Novel Coronavirus (COVID-19) Cases, developed by the Center for Systems Science and Engineering at Johns Hopkins University (https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data). In addition, we collected millions of points of interest (POIs) with their foot traffic and anonymous mobile phone users' travel patterns in the United States from SafeGraph. The data for academic research can be requested at https://www.safegraph.com. The code used for modeling and analysis in this paper is available in the GitHub repository: https://github.com/GeoDS/Travel-Network-SEIR.
References:
Drake, J. M., Chew, S. K. & Ma, S. Societal learning in epidemics: Intervention effectiveness during the 2003 SARS outbreak in Singapore. PLoS ONE 1, e20 (2006).
Wu, J. T., Leung, K. & Leung, G. M. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: A modelling study. The Lancet 395, 689–697 (2020).
Du, Z. et al. Risk for transportation of coronavirus disease from Wuhan to other cities in China. Emerg. Infect. Dis. 26, 1049–1052 (2020).
Tian, H. et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. Science 368, 638–642 (2020).
Chinazzi, M. et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368(6489), 395–400 (2020).
Lipsitch, M. et al. Transmission dynamics and control of severe acute respiratory syndrome. Science 300, 1966–1970 (2003).
Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2). Science 368(6490), 489–493 (2020).
Maier, B. F. & Brockmann, D. Effective containment explains subexponential growth in recent confirmed COVID-19 cases in China. Science 368, 742–746 (2020).
Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20(5), 533–534 (2020).
Holshue, M. L. et al. First case of 2019 novel coronavirus in the United States. New Engl. J. Med. 10(382), 929–936 (2020).
Vox news, available at https://www.vox.com/policy-and-politics/2020/2/29/21159273/coronavirus-death-trump-health-officials-travel-ban-iran.
New York Times report, available at https://www.nytimes.com/2020/02/12/health/coronavirus-test-kits-cdc.html.
USA Today report, available at https://www.usatoday.com/story/news/health/2020/03/28/coronavirus-fda-authorizes-abbott-labs-fast-portable-covid-test/2932766001/.
Lai, S. et al. Effect of non-pharmaceutical interventions for containing the COVID-19 outbreak: An observational and modelling study. medRxiv. https://doi.org/10.1101/2020.03.03.20029843 (2020).
Ferretti, L. et al. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science 368, eabb6936 (2020).
Tian, H. et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. Science 368(6491), 638–642 (2020).
Kucharski, A. J. et al. Early dynamics of transmission and control of COVID-19: A mathematical modelling study. Lancet Infect. Dis. 20(5), 553–558 (2020).
Hellewell, J. et al. Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts. Lancet Glob. Health 8(4), e488–e496 (2020).
Mollison, D. Spatial contact models for ecological and epidemic spread. J. R. Stat. Soc. Ser. B (Methodological) 39, 283–313 (1977).
Lloyd, A. L. & May, R. M. Spatial heterogeneity in epidemic models. J. Theor. Biol. 179, 1–11 (1996).
Tuckwell, H. C., Toubiana, L. & Vibert, J.-F. Spatial epidemic network models with viral dynamics. Phys. Rev. E 57, 2163 (1998).
Meloni, S. et al. Modeling human mobility responses to the large-scale spreading of infectious diseases. Sci. Rep. 1, 62 (2011).
Richardson, D. B. et al. Spatial turn in health research. Science 339, 1390–1392 (2013).
Brockmann, D. & Helbing, D. The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337–1342 (2013).
Lai, S. et al. Assessing spread risk of Wuhan novel coronavirus within and beyond China, January–April 2020: A travel network-based modelling study. medRxiv. https://doi.org/10.1101/2020.02.04.20020479 (2020).
Zhu, X. et al. Spatially explicit modeling of 2019-nCoV epidemic trend based on mobile phone data in Mainland China. medRxiv. https://doi.org/10.1101/2020.02.09.20021360 (2020).
Buckee, C. O. et al. Aggregated mobility data could help fight COVID-19. Science 368(6487), 145–146 (2020).
Kraemer, M. U. et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science 368, 493–497 (2020).
Zhou, C. et al. COVID-19: Challenges to GIS with big data. Geogr. Sustain. 1, 77–87 (2020).
Grasselli, G., Pesenti, A. & Cecconi, M. Critical care utilization for the COVID-19 outbreak in Lombardy, Italy: Early experience and forecast during an emergency response. JAMA 323(16), 1545–1546 (2020).
CDC report, available at https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html.
Wang, J., Tang, K., Feng, K. & Lv, W. When is the COVID-19 pandemic over? Evidence from the stay-at-home policy execution in 106 Chinese cities. Available at SSRN: https://ssrn.com/abstract=3561491 (2020).
Arons, M. M. et al. Presymptomatic SARS-CoV-2 infections and transmission in a skilled nursing facility. N. Engl. J. Med. 382(22), 2081–2090 (2020).
Alagoz, O., Sethi, A., Patterson, B., Churpek, M. & Safdar, N. Impact of timing of and adherence to social distancing measures on COVID-19 burden in the US: A simulation modeling approach. medRxiv. https://doi.org/10.1101/2020.06.07.20124859 (2020).
Kermack, W. O. & McKendrick, A. G. A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. Ser. A 115, 700–721 (1927).
Hethcote, H. W. The mathematics of infectious diseases. SIAM Rev. 42, 599–653 (2000).
Brauer, F. Compartmental models in epidemiology. In Mathematical Epidemiology, 19–79 (Springer, Berlin, 2008).
Prestby, T., App, J., Kang, Y. & Gao, S. Understanding neighborhood isolation through spatial interaction network analysis using location big data. Environ. Plan. A: Econ. Space 52, 1027–1031 (2020).
Liang, Y., Gao, S., Cai, Y., Foutz, N. Z. & Wu, L. Calibrating the dynamic Huff model for business analysis using location big data. Trans. GIS 24(3), 681–703 (2020).
Leung, N. H. et al. Respiratory virus shedding in exhaled breath and efficacy of face masks. Nat. Med. 26(5), 676–680 (2020).
CNN report, Infected people without symptoms might be driving the spread of coronavirus more than we realized, available at https://www.cnn.com/2020/03/14/health/coronavirus-asymptomatic-spread/index.html.
Guan, W.-J. et al. Clinical characteristics of coronavirus disease 2019 in China. New Engl. J. Med. 382, 1708–1720 (2020).
Pan, A. et al. Association of public health interventions with the epidemiology of the COVID-19 outbreak in Wuhan, China. JAMA 323, 1915–1923 (2020).
Evensen, G. The ensemble Kalman filter for combined state and parameter estimation. IEEE Control Syst. Mag. 29, 83–104 (2009).
Reich, S. & Cotter, C. Probabilistic Forecasting and Bayesian Data Assimilation (Cambridge University Press, Cambridge, 2015).
Acknowledgements: We would like to thank SafeGraph Inc. for providing the anonymous and aggregated human mobility and place visit data.
We would also like to thank all individuals and organizations for collecting and updating the COVID-19 epidemiological data and reports. S.G. and Q.L. acknowledge the funding support provided by the National Science Foundation (Award No. BCS-2027375). Q.L. and S.C. acknowledge the Data Science Initiative of UW-Madison. X.S. acknowledges the Scholarly Innovation and Advancement Awards of Dartmouth College. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Author affiliations: Department of Mathematics, University of Wisconsin-Madison, Madison, WI, 53706, USA (Shi Chen & Qin Li); GeoDS Lab, Department of Geography, University of Wisconsin-Madison, Madison, WI, 53706, USA (Song Gao & Yuhao Kang); Department of Geography, Dartmouth College, Hanover, NH, 03755, USA (Xun Shi).
Author contributions: Research design and conceptualization: Q.L., S.C., S.G.; data collection and processing: S.C., S.G., Y.H.K.; mathematical model implementation: Q.L., S.C.; result analysis: Q.L., S.G., X.S.; visualization: S.C., S.G., Y.H.K.; project administration: Q.L., S.G., X.S.; writing: all authors. Correspondence to Qin Li or Song Gao.
Citation: Chen, S., Li, Q., Gao, S. et al. State-specific projection of COVID-19 infection in the United States and evaluation of three major control measures. Sci. Rep. 10, 22429 (2020). https://doi.org/10.1038/s41598-020-80044-3
IJQF: An online forum for exploring the conceptual foundations of quantum mechanics, quantum field theory and quantum gravity.
Weekly Papers on Quantum Foundations (11)
Published by editor on March 14, 2015
This is a list of this week's papers on quantum foundations published in the various journals or uploaded to the preprint servers such as arxiv.org and PhilSci Archive.
Dunlap, Lucas (2015) The Metaphysics of D-CTCs: On the Underlying Assumptions of Deutsch's Quantum Solution to the Paradoxes of Time Travel. [Preprint, PhilSci-Archive]
Dunlap, Lucas (2015) On the Common Structure of the Primitive Ontology Approach and the Information-Theoretic Interpretation of Quantum Theory. [Preprint, PhilSci-Archive]
Baker, David John (2015) The Philosophy of Quantum Field Theory. [Preprint, PhilSci-Archive]
Violation of unitarity by Hawking radiation does not violate energy-momentum conservation. (arXiv:1502.04324v2 [hep-th] UPDATED)
Authors: H. Nikolic
An argument by Banks, Susskind and Peskin (BSP), according to which violation of unitarity would violate either locality or energy-momentum conservation, is widely believed to be a strong argument against non-unitarity of Hawking radiation. We find that the whole BSP argument rests on the crucial assumption that the Hamiltonian is not highly degenerate, and point out that this assumption is not satisfied for systems with many degrees of freedom. Using the Lindblad equation, we show that high degeneracy of the Hamiltonian allows local non-unitary evolution without violating energy-momentum conservation.
Moreover, since energy-momentum is the source of gravity, we argue that energy-momentum is necessarily conserved for a large class of non-unitary systems with gravity. Finally, we explicitly calculate the Lindblad operators for non-unitary Hawking radiation and show that they conserve energy-momentum.
Bohr-like black holes. (arXiv:1503.03474v1 [gr-qc])
Authors: Christian Corda
The idea that black holes (BHs) result in highly excited states representing both the "hydrogen atom" and the "quasi-thermal emission" in quantum gravity is today an intuitive but general conviction. In this paper it will be shown that such an intuitive picture is more than a picture. In fact, we discuss a model of quantum BH somewhat similar to the historical semi-classical model of the structure of a hydrogen atom introduced by Bohr in 1913. The model is completely consistent with existing results in the literature, starting from the celebrated result of Bekenstein on the area quantization.
Short-time quantum propagator and Bohmian trajectories
Physics Letters A, Volume 377, Issue 42 (6 December 2013). Authors: Maurice de Gosson, Basil Hiley
We begin by giving correct expressions for the short-time action following the work of Makri–Miller. We use these estimates to derive an accurate expression modulo Δt² for the quantum propagator, and we show that the quantum potential is negligible modulo Δt² for a point source, thus justifying an unfortunately largely ignored observation of Holland made twenty years ago. We finally prove that this implies that the quantum motion is classical for very short times.
Allori, Valia (2015) Quantum Mechanics and Paradigm Shifts. [Published Article, PhilSci-Archive]
Radiation from a collapsing object is manifestly unitary.
(arXiv:1503.01487v1 [gr-qc] CROSS LISTED)
Authors: Anshul Saini, Dejan Stojkovic
The process of gravitational collapse excites the fields propagating in the background geometry and gives rise to thermal radiation. We demonstrate by explicit calculations that the density matrix corresponding to such radiation actually describes a pure state. While Hawking's leading-order density matrix contains only the diagonal terms, we calculate the off-diagonal correlation terms. These correlations start very small, but then grow in time. The cumulative effect is that the correlations become comparable to the leading-order terms and significantly modify the density matrix. While the trace of Hawking's density matrix squared goes from unity to zero during the evolution, the trace of the total density matrix squared remains unity at all times and all frequencies. This implies that the process of radiation from a collapsing object is unitary.
The Other de Broglie Wave. (arXiv:1503.02534v1 [physics.hist-ph])
Authors: Daniel Shanahan
In his famous doctoral dissertation, de Broglie assumed that a massive particle is surrounded in its rest frame by a standing wave. He argued that as observed from another inertial frame this wave becomes the superluminal wave now known as the de Broglie wave. It is shown here that under a Lorentz transformation, such a standing wave becomes, not the de Broglie wave, but a modulated wave moving at the velocity of the particle. It is the modulation that has the superluminal velocity of the de Broglie wave and should be recognized as the true de Broglie wave. De Broglie's demonstrations relied, variously, on his "theorem of the harmony of phases", on a mechanical model, and on a spacetime diagram. It is shown that in each case the underlying wave was inadvertently suppressed.
Identified as a modulation, the de Broglie wave acquires a physically reasonable ontology, avoiding the awkward device of recovering the particle velocity from a superposition of such waves. The deeper wave structure implied by this de Broglie wave must also impinge on such issues in quantum mechanics as the meaning of the wave function and the nature of wave-particle duality. A Matter of Principle: The Principles of Quantum Theory, Dirac's Equation, and Quantum Information. (arXiv:1503.02229v1 [physics.hist-ph]) Authors: Arkady Plotnitsky This article is concerned with the role of fundamental principles in theoretical physics, especially quantum theory. The fundamental principles of relativity will be addressed as well, in view of their role in quantum electrodynamics and quantum field theory, specifically in Dirac's work; in particular, Dirac's derivation of his relativistic equation for the electron from the principles of relativity and quantum theory is the main focus of this article. I shall, however, also consider Heisenberg's derivation of quantum mechanics, which inspired Dirac. I argue that Heisenberg's and Dirac's work alike was guided by their adherence to and confidence in the fundamental principles of quantum theory. The final section of the article discusses the recent work by G. M. D'Ariano and his coworkers on the principles of quantum information theory, which extends quantum theory and its principles in a new direction. This extension enabled them to offer a new derivation of Dirac's equation from these principles alone, without using the principles of relativity. Quantum Information Biology: from information interpretation of quantum mechanics to applications in molecular biology and cognitive psychology.
(arXiv:1503.02515v1 [quant-ph]) quant-ph updates on arXiv.org Authors: Masanari Asano, Irina Basieva, Andrei Khrennikov, Masanori Ohya, Yoshiharu Tanaka, Ichiro Yamato We discuss foundational issues of quantum information biology (QIB) — one of the most successful applications of the quantum formalism outside of physics. QIB provides a multi-scale model of information processing in bio-systems: from proteins and cells to cognitive and social systems. This theory has to be sharply distinguished from "traditional quantum biophysics". The latter is about quantum bio-physical processes, e.g., in cells or brains. QIB models the dynamics of information states of bio-systems. It is based on the quantum-like paradigm: complex bio-systems process information in accordance with the laws of quantum information and probability. This paradigm is supported by plenty of statistical bio-data collected at all scales, from molecular biology and genetics/epigenetics to cognitive psychology and behavioral economics. We argue that the information interpretation of quantum mechanics (its various forms were elaborated by Zeilinger and Brukner, Fuchs and Mermin, and D'Ariano) is the most natural interpretation of QIB. We also point out that QBism (Quantum Bayesianism) can serve to find a proper interpretation of bio-quantum probabilities. Biologically, QIB is based on two principles: a) adaptivity; b) openness (bio-systems are fundamentally open). These principles are mathematically represented in the framework of a novel formalism — quantum adaptive dynamics — which, in particular, contains the standard theory of open quantum systems as a special case of adaptivity (to environment). Macroscopic quantum resonators (MAQRO): 2015 Update. (arXiv:1503.02640v1 [quant-ph]) Authors: Rainer Kaltenbaek, Markus Arndt, Markus Aspelmeyer, Peter F.
Barker, Angelo Bassi, James Bateman, Kai Bongs, Sougato Bose, Claus Braxmaier, Časlav Brukner, Bruno Christophe, Michael Chwalla, Pierre-François Cohadon, Adrian M. Cruise, Catalina Curceanu, Kishan Dholakia, Klaus Döringshoff, Wolfgang Ertmer, Jan Gieseler, Norman Gürlebeck, Gerald Hechenblaikner, Antoine Heidmann, Sven Herrmann, Sabine Hossenfelder, Ulrich Johann, Nikolai Kiesel, Myungshik Kim, Claus Lämmerzahl, Astrid Lambrecht, Michael Mazilu, Gerard J. Milburn, Holger Müller, Lukas Novotny, Mauro Paternostro, Achim Peters, Igor Pikovski, André Pilan-Zanoni, Ernst M. Rasel, Serge Reynaud, C. Jess Riedel, Manuel Rodrigues, Loïc Rondin, Albert Roura, Wolfgang P. Schleich, Jörg Schmiedmayer, et al. (7 additional authors not shown) Do the laws of quantum physics still hold for macroscopic objects – this is at the heart of Schrödinger's cat paradox – or do gravitation or yet unknown effects set a limit for massive particles? What is the fundamental relation between quantum physics and gravity? Ground-based experiments addressing these questions may soon face limitations due to limited free-fall times and the quality of vacuum and microgravity. The proposed mission MAQRO may overcome these limitations and allow addressing those fundamental questions. MAQRO harnesses recent developments in quantum optomechanics, high-mass matter-wave interferometry as well as state-of-the-art space technology to push macroscopic quantum experiments towards their ultimate performance limits and to open new horizons for applying quantum technology in space. The main scientific goal of MAQRO is to probe the vastly unexplored "quantum-classical" transition for increasingly massive objects, testing the predictions of quantum theory for truly macroscopic objects in a size and mass regime unachievable in ground-based experiments. The hardware for the mission will largely be based on available space technology.
Here, we present the MAQRO proposal submitted in response to the (M4) Cosmic Vision call of the European Space Agency for a medium-size mission opportunity with a possible launch in 2025. Proof of a Conjecture on Contextuality in Cyclic Systems with Binary Variables. (arXiv:1503.02181v1 [quant-ph]) Authors: Janne V. Kujala, Ehtibar N. Dzhafarov We present a proof for a conjecture previously formulated by Dzhafarov, Kujala, and Larsson (Foundations of Physics, in press, arXiv:1411.2244). The conjecture specifies a measure for the degree of contextuality and a criterion (necessary and sufficient condition) for contextuality in a broad class of quantum systems. This class includes Leggett-Garg, EPR/Bell, and Klyachko-Can-Binicioglu-Shumovsky type systems as special cases. In a system of this class certain physical properties $q_{1},\ldots,q_{n}$ are measured in pairs $\left(q_{i},q_{j}\right)$; every property enters in precisely two such pairs; and each measurement outcome is a binary random variable. Denoting the measurement outcomes for a property $q_{i}$ in the two pairs it enters by $V_{i}$ and $W_{i}$, the pair of measurement outcomes for $\left(q_{i},q_{j}\right)$ is $\left(V_{i},W_{j}\right)$. Contextuality is defined as follows: one computes the minimal possible value $\Delta_{0}$ for the sum of $\Pr\left[V_{i}\not=W_{i}\right]$ (over $i=1,\ldots,n$) that is allowed by the individual distributions of $V_{i}$ and $W_{i}$; one computes the minimal possible value $\Delta_{\min}$ for the sum of $\Pr\left[V_{i}\not=W_{i}\right]$ across all possible couplings of (i.e., joint distributions imposed on) the entire set of random variables $V_{1},W_{1},\ldots,V_{n},W_{n}$ in the system; and the system is considered contextual if $\Delta_{\min}>\Delta_{0}$ (otherwise $\Delta_{\min}=\Delta_{0}$). This definition has its justification in the general approach dubbed Contextuality-by-Default, and it allows for measurement errors and signaling among the measured properties.
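The two minimizations in this definition can be illustrated with a small linear program. The sketch below is our own toy example, not code from the paper (and it assumes SciPy is available): a cyclic system with n = 3, perfectly anticorrelated measured pairs, and uniform marginals, so $\Delta_0 = 0$ while the odd cycle of anticorrelations forces $\Delta_{\min} = 1$.

```python
from itertools import product
from scipy.optimize import linprog  # assumes SciPy is installed

n = 3
# A coupling is a joint distribution over s = (V1, W1, V2, W2, V3, W3).
states = list(product([-1, 1], repeat=2 * n))

def V(s, i): return s[2 * i]       # outcome V_{i+1}
def W(s, i): return s[2 * i + 1]   # outcome W_{i+1}

# Observed distribution of each measured pair (V_i, W_{i+1 mod n}):
# perfect anticorrelation with uniform marginals (our toy data).
def pair_prob(a, b):
    return 0.5 if a == -b else 0.0

# Equality constraints: the coupling must reproduce every measured
# pair distribution exactly.
A_eq, b_eq = [], []
for i in range(n):
    j = (i + 1) % n
    for a, b in product([-1, 1], repeat=2):
        A_eq.append([1.0 if (V(s, i) == a and W(s, j) == b) else 0.0
                     for s in states])
        b_eq.append(pair_prob(a, b))

# Objective: sum_i Pr[V_i != W_i], i.e. the expected number of
# within-property mismatches under the coupling.
c = [sum(1 for i in range(n) if V(s, i) != W(s, i)) for s in states]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
delta_min = res.fun

# Delta_0: minimal mismatch allowed by the marginals alone; for binary
# variables this is sum_i |Pr[V_i = 1] - Pr[W_i = 1]| (all 1/2 here).
delta_0 = sum(abs(0.5 - 0.5) for _ in range(n))

print(delta_0, delta_min)  # the system is contextual iff delta_min > delta_0
```

The same LP template handles any cyclic binary system once `pair_prob` is replaced by the observed pair distributions.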
The conjecture proved in this paper specifies the value of $\Delta_{\min}-\Delta_{0}$ in terms of the distributions of the measurement outcomes $\left(V_{i},W_{j}\right)$. Localization by Dissipative Disorder: a Deterministic Approach to Position Measurements. (arXiv:1503.02494v1 [cond-mat.quant-gas]) Authors: Giovanni Barontini, Vera Guarrera We propose an approach to position measurements based on the hypothesis that the action of a position detector on a quantum system can be effectively described by a dissipative disordered potential. We show that such a potential is able, via dissipation-induced Anderson localization, to simultaneously localize the wavefunction of the system and dissipate information to modes bounded to the detector. By imposing a diabaticity condition we demonstrate that the dissipative dynamics between the modes of the system leads to a localized energy exchange between the detector and the rest of the environment – the "click" of the detector – thus providing a complete deterministic description of a position measurement. We finally numerically demonstrate that our approach is consistent with the Born probability rule. The equivalent emergence of time dependence in classical and quantum mechanics. (arXiv:1503.02146v1 [quant-ph]) Authors: John S. Briggs Beginning with the principle that a closed mechanical composite system is timeless, time can be defined by the regular changes in a suitable position coordinate (clock) in the observing part, when one part of the closed composite observes another part. Translating this scenario into both classical and quantum mechanics allows a transition to be made from a time-independent mechanics for the closed composite to a time-dependent description of the observed part alone. The use of Hamilton–Jacobi theory yields a very close parallel between the derivations in classical and quantum mechanics.
The time-dependent equations, Hamilton–Jacobi or Schrödinger, appear as approximations since no observed system is truly closed. The quantum case has an additional feature in the condition that the observing environment must become classical in order to define a real classical time variable. This condition leads to a removal of the entanglement engendered by the interaction between the observed system and the observing environment. Comparison is made to the similar emergence of time in quantum gravity theory. Primitive ontology and quantum field theory on 2015-3-09 11:48pm GMT Lam, Vincent (2015) Primitive ontology and quantum field theory. [Preprint]
Complex Analysis and Dynamics Seminar Feb 7: Alistair Fletcher (Northern Illinois University) Poincaré Linearizers in Higher Dimensions It is well known that the behaviour of a holomorphic function near a fixed point is determined by its derivative there. In the case of a repelling fixed point $z_0$, the function can be conjugated to $z \mapsto f'(z_0) z$, and the class of functions which do the conjugating are called Poincaré linearizers. We will discuss extending this idea to the setting of quasiregular mappings in higher dimensions, and in particular exploring the dynamics of quasiregular Poincaré linearizers. Feb 21: Frederick Gardiner (Brooklyn College and Graduate Center of CUNY) The Quasidisc Cocycle Outline of the talk with some diagrams Feb 28: Linda Keen (Lehman College and Graduate Center of CUNY) Discreteness and the Hyperbolic Geometry of Hexagons Deciding if two matrices in PSL$(2,{\mathbb C})$ generate a discrete group is a hard question. The generators determine a hexagon in ${\mathbb H}^3$. In many cases, it is possible to determine from the hexagon whether the group is discrete. Combining earlier work on enumerating sequences of pairs of generators using Farey sequences of rationals with a study of the geometry of right-angled hexagons in ${\mathbb H}^3$, we have a conjecture on how to determine a sequence of rationals and hexagons, with their corresponding pairs of generators, so that either the generators converge to the identity, the hexagons become degenerate, and the group is non-discrete, or the hexagons converge in ${\mathbb H}^3$ and the group is discrete. This talk is based on joint work with Jane Gilman.
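For reference, the conjugation in the Feb 7 abstract is usually packaged as a functional equation; the following is a standard-form sketch in our own notation, not taken from the talk:

```latex
% Poincaré linearizer L at a repelling fixed point z_0 of f,
% with multiplier \lambda = f'(z_0), |\lambda| > 1:
f\bigl(L(z)\bigr) = L(\lambda z), \qquad L(0) = z_0, \quad L'(0) = 1,
% so the inverse branch \varphi = L^{-1} conjugates f to the linear
% map z \mapsto \lambda z near z_0:
\varphi\bigl(f(z)\bigr) = \lambda\,\varphi(z).
```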
Mar 7: Andrew Sanders (University of Illinois at Chicago) A New Proof of Bowen's Theorem on Hausdorff Dimension of Quasi-circles A quasi-Fuchsian group is a discrete group of Möbius transformations of the Riemann sphere which is isomorphic to the fundamental group of a compact surface and acts properly on the complement of a Jordan curve: the limit set. In 1979, Bowen proved a remarkable rigidity theorem on the Hausdorff dimension of the limit set of a quasi-Fuchsian group: it is equal to 1 if and only if the limit set is a round circle. This theorem now has many generalizations. We will present a new proof of Bowen's result as a by-product of a new lower bound on the Hausdorff dimension of the limit set of a quasi-Fuchsian group. This lower bound is in terms of the differential geometric data of an immersed, incompressible minimal surface in the quotient manifold. If time permits, generalizations of this result to other convex-cocompact surface groups will be presented. Mar 14: Gabriele Mondello (University of Rome) On the Cohomological Dimension of the Moduli Space of Riemann Surfaces The moduli space of compact Riemann surfaces of fixed genus has a finite étale cover which is a complex manifold. Thus it makes sense to speak of its de Rham cohomology or its cohomology of coherent sheaves. Its de Rham cohomological dimension was determined by Harer in the 1980s. Conjectural vanishing of its coherent cohomology in high degree would shed new light on many vanishing theorems for the tautological classes and for de Rham cohomology. In this talk, we will give an estimate of the coherent cohomological dimension of the moduli space of Riemann surfaces, which works in every genus though it is not optimal. In the proof, flat surfaces will come into play.
Mar 21: Araceli Bonifant (University of Rhode Island) Fjords in a Parameter Space for Antipode Preserving Cubic Maps This talk will describe the topological properties of the "fjords" that appear in the parameter space for antipode preserving cubic maps with a critical fixed point. Mar 28: Sergiy Merenkov (University of Illinois at Urbana-Champaign and City College of New York) Quasisymmetric maps between Sierpinski carpet Julia sets I will discuss recent rigidity results for quasisymmetric maps between Sierpinski carpets that are Julia sets of postcritically finite rational maps. This is joint work with M. Bonk and M. Lyubich. Apr 4: Patrick Hooper (City College and Graduate Center of CUNY) Topologizing the Space of all Translation Surfaces A translation surface is a surface equipped with an atlas of charts to the plane where the transition functions are translations. We do not insist that the inherited metric on the surface be complete. There is an intimate connection between dynamics on these surfaces (straight-line or geodesic flows) and dynamics on spaces of translation surfaces (Teichmüller flow) through renormalization. These ideas are most developed in the finite genus case, but recent work has shown that the connection persists in infinite genus. Making this relationship concrete requires topologizing spaces of translation surfaces. I will explain how to do this, and discuss some of the connections to dynamics. Apr 11: Giulio Tiozzo (ICERM and Yale University) An Entropic Tour Along the Mandelbrot Set The notion of topological entropy, arising from information theory, is a fundamental tool to understand the complexity of a dynamical system. When the dynamical system varies in a family, the natural question arises of how the entropy changes with the parameter. Recently, W.
Thurston has introduced these ideas in the context of complex dynamics by defining the "core entropy" of a quadratic polynomial as the entropy of a certain forward-invariant subset of the Julia set called the Hubbard tree. As we shall see, the core entropy is a purely topological / combinatorial quantity which nonetheless captures the richness of the fractal structure of the Mandelbrot set. In particular, we shall see how to relate the variation of such a function to the geometry of the Mandelbrot set. Apr 25: Qiongling Li (Rice University) Asymptotics of Certain Families of Higgs Bundles in the Hitchin Component In this talk, I will first go through Higgs bundles, basic construction of Hitchin components inside the moduli space of Higgs bundles. I will then introduce recent work with Brian Collier on the asymptotic behavior of certain families in Hitchin components. Namely, in the family of Higgs bundles $(\mathcal{E},t{\phi})$, we try to analyze the asymptotic behavior of the corresponding representation $\rho_t$ as $t\rightarrow \infty$ in two special cases. May 2: Kealey Dias (BCC of CUNY) On Parameter Space of Single-variable Complex Polynomial Vector Fields The space of single-variable complex polynomial vector fields $\Xi_d \simeq \mathbb{C}^{d-1}$ can be decomposed into loci $\mathcal{C}$ in which vector fields share a combinatorial invariant (topological equivalence with a labeling). The main result of this talk aims to prove that such a class is homeomorphic to $\mathbb{H}_+^s \times \mathbb{R}_+^h$, which corresponds to the set of the so-called analytic invariants associated to the class. The construction in the proof of this theorem, which utilizes holomorphic dependence on parameters in the Measurable Riemann Mapping Theorem, will pave the way for understanding a class of bifurcations of these vector fields. 
May 9: Jon Chaika (University of Utah) The Limit Set in PMF of Some Teichmüller Geodesics Teichmüller space is topologically an open ball which has numerous compactifications. In joint work with H. Masur and M. Wolf, we show that there are Abelian differentials with minimal but not uniquely ergodic vertical foliations so that their limit set in Thurston's compactification, PMF, a) is a unique point; b) is a line segment; c) an ergodic (but not uniquely ergodic) minimal Abelian differential which has a line segment as its limit set; d) an ergodic (but not uniquely ergodic) minimal Abelian differential which has a unique point as its limit set. These examples arise from Veech's example of minimal and not uniquely ergodic $\mathbb{Z}_2$ skew products of rotations which are related to two tori glued along a slit. Masur proved that the geodesic defined by a quadratic differential with uniquely ergodic vertical foliation has a (unique) limit in PMF and that it was what one would expect. Lenzhen constructed an example of a non-minimal quadratic differential that did not have a limit in PMF (the limit set was a line segment). This talk will focus on some motivating examples.
Transmission of trisomy decreases with maternal age in mouse models of Down syndrome, mirroring a phenomenon in human Down syndrome mothers Shani Stern, David Biron & Elisha Moses BMC Genetics, volume 17, Article number: 105 (2016) Down syndrome incidence in humans increases dramatically with maternal age. This is mainly the result of increased meiotic errors, but factors such as differences in abortion rate may play a role as well. Since the meiotic error rate increases almost exponentially after a certain age, its contribution to the overall incidence of aneuploidy may mask the contribution of other processes. To focus on such selection mechanisms we investigated transmission in trisomic females, using data from mouse models and from Down syndrome humans. In trisomic females the a priori probability for trisomy is independent of meiotic errors and thus approximately constant in the early embryo. Despite this, the rate of transmission of the extra chromosome decreases with age in females of the Ts65Dn and, as we show, the Tc1 mouse models for Down syndrome. Evaluating progeny of 73 Tc1 births and 112 Ts65Dn births from females aged 130 days to 250 days showed that both models exhibit a 3-fold reduction in the probability of transmitting the trisomy with maternal ageing. This is concurrent with a 2-fold reduction in litter size with maternal ageing. Furthermore, analysis of 30 previously reported births in Down syndrome women shows a similar tendency, with an almost 3-fold reduction in the probability of having a Down syndrome child between Down syndrome mothers aged 20 and 30. In the two mouse models for Down syndrome used in this study, and in human Down syndrome, older females have a significantly lower probability of transmitting the trisomy to their offspring.
Our findings, taken together with previous reports of a less supportive environment in the older uterus, add support to the notion that an older uterus negatively selects the less fit trisomic embryos. Down Syndrome (DS) is the most abundant nonlethal chromosomal abnormality in humans, in which all or part of chromosome 21 appears in three copies instead of two. This gene imbalance results in cognitive impairment as well as in moderate to severe negative impacts on physical health [1, 2]. Approximately 90 % of children with DS receive their extra chromosome from their mother [3, 4], and the rates of DS and of other chromosomal abnormalities are known to increase with maternal age in humans [5, 6]. In humans, there is a reduction in the survival of DS babies as compared to healthy ones, primarily due to congenital heart disease [7–9]. In fact, most DS embryos will die in the uterus – previous studies report that the incidence of aneuploidy in spontaneous abortions is larger than 35 % [3, 10, 11]. They also reported that of the trisomies that can survive to birth, trisomy 21 was the most frequent one, with a 2.3 % occurrence rate. However, in live births the occurrence rate of aneuploidy was only 0.3 %, of which trisomy 21 accounted for 0.13 %. The increase in the rate of aneuploidy with maternal age thus enhances the need for uterine selection (both positive and negative). The aging of the uterus may be playing a role in this selection process, as indicated by the approximately two-fold increase in the abortion rate with age [12–17]. It is particularly important to understand uterine selection because the ability to conceive is being extended to women over 50 years of age, with possible implications for IVF and genetic pre-screening [12–14]. At high maternal age, women are encouraged to monitor the health of the fetus through amniocentesis, despite the increased risk of an abortion.
This recommendation is linked to the reported exponential increase in the rate of chromosomal abnormalities with maternal age [10], but the possibility of stronger selection against aneuploidy in older uteruses may affect the risk/benefit analysis of invasive screening procedures. The origin of aneuploidy is still not fully understood, and several mechanisms play a role [18–20]. The relaxed selection hypothesis postulates that production of aneuploid gametes remains constant with maternal age, but that in utero selection against trisomic embryos decreases [21–25]. This hypothesis was challenged by findings indicating an increase in the likelihood of miscarriage of trisomic embryos with the age of healthy mothers [26–29]. Nevertheless, as maternal age increases, the rate of aneuploidies in neonates increases dramatically [30]. This results from the increase in the probability of meiotic errors with maternal age, which increases the rate of trisomic embryos. Thus, in healthy mothers, the dependence of negative selection against aneuploidies on age may be masked by meiotic errors. A model system in which the a priori probability for aneuploidy is fixed would make it possible to examine how selection is affected by maternal age. Observing progeny of mothers with aneuploidy (trisomy 21 in our case) fixes the a priori probability of the aneuploidy at approximately the Mendelian ratio, i.e., at 50 %. We therefore use data from trisomic mothers, mice and human, to address this question. While DS occurs in humans and not in rodents, several mouse models for DS have been developed for studying this disorder. In mouse models for DS, some of the genes orthologous to the genes of Homo sapiens (HSA) chromosome 21 are present in three copies. The Tc1 mouse model [31] contains a freely segregating human chromosome 21 with about 83 % of the known HSA 21 genes, and it is mosaic, with about 50 % of the cells containing the extra chromosome.
The Ts65Dn mouse model [32] contains an extra translocation chromosome, containing genes from chromosomes 16 and 17 of the mouse, and a total of ~65 % of DS genes in trisomy [33]. Both mouse models exhibit heart defects, like human DS [34, 35]. Both mouse models were shown to perform poorly on various cognitive tasks such as the Morris water maze [31, 36]. Both have reduced long-term potentiation [31, 37]. The Ts65Dn mouse was shown to exhibit an increase in inhibition [37, 38] and developmental delays, and both mouse models have been shown to have alterations in several ionic channel conductances [38]. In the Tc1 mouse model, the currently reported rate of transmission of the extra chromosome is approximately 40 % [31]. In the Ts65Dn mouse model an extra translocation chromosome is inherited. It was shown in [39] that while the extra chromosome in Ts65Dn is transmitted in the expected ratio of 50 % immediately after conception, there is a disproportionate loss of trisomic offspring in late gestation and after birth, similar to human DS [11]. It was also shown in [39] that as the Ts65Dn female ages, her litter size and the transmission rate of the trisomy decrease concurrently. We report here that in the Tc1 mouse model the ratio of trisomic to non-trisomic offspring diminishes with the age of the mother, with a distribution very similar to that of Ts65Dn mothers. By extrapolation to the embryonic stage, Tc1 females would also have a distribution similar to that of Ts65Dn females, i.e., a 50 % (Mendelian) ratio for transmitting the extra (human) chromosome immediately after conception. Importantly, when we compare to previously reported cases of deliveries by women with DS, we find that a similar phenomenon appears and the probability of a child with DS diminishes with the age of the mother.
The dependence of the fraction of trisomic progeny on the age of the mother is thus hypothesized to reflect the reduced fitness of trisomic embryos, which cannot survive the less supportive conditions of the older uterus. Our findings suggest that the observation of this phenomenon in trisomic mice is reproducible, mirrors the trend seen in human pregnancies, and can in the future be studied in these two independent mouse models. All procedures were approved by the Weizmann Institutional Animal Care and Use Committee. Grouping of sample sets Genotyping was performed at one of three times. One group was genotyped at embryonic stage E17 (n = 28, 44 respectively for Tc1, Ts65Dn), another group immediately after birth at P0 (n = 10, 11 respectively for Tc1, Ts65Dn) and the rest after weaning (n = 35, 57 respectively for Tc1, Ts65Dn). In the latter group males were not always kept (n = 16, 35 respectively for Tc1, Ts65Dn), and these litters were not used for the analysis of litter size. During gestation females were typically weighed daily and their pregnancy was monitored. In two pregnancies of young Ts65Dn females, the entire progeny was lost after day 17. One young Tc1 and two young Ts65Dn mothers did not wean and the entire progeny was lost. In addition, two Tc1 and two Ts65Dn mothers did not wean some of the pups, and consequently lost 10–20 % of their litter. In total these cases account for less than 6 % of the data and therefore could not have appreciably affected our results. For transmission of trisomy, analysis was performed on two sets of data: 1) litters where the entire progeny was genotyped; 2) litters where we know the genotype of the females only (because the males were not kept, the ratio was therefore calculated as the number of trisomic females divided by the number of females in the litter), assuming that trisomy has a similar incidence in the male and female populations. These two analyses were compared.
Deoxyribonucleic acid (DNA) was extracted using the Extract-N-Amp Tissue polymerase chain reaction (PCR) kit from Sigma (http://www.sigmaaldrich.com/life-science/molecular-biology/dna-and-rna-purification/extract-n-amp.html), for each ~5 mg piece of brain or ~0.5 cm of mouse tail. Genotyping of Ts65Dn was done according to the protocol described in [40]. For Tc1 mice we followed the protocol supplied courtesy of the Fisher lab, which is currently available on the Jackson homepage for genotyping of Tc1 [41]. An example of genotyping of Tc1 mice can be found in Additional file 1: Figure S1. The statistics for the human DS females were calculated by dividing the females into two age groups (below and above 25). The average age for the first group was 20, and for the second group it was 30. In each age group a score of '1' was given to a DS baby, and a score of '0' to a healthy baby. The mean transmission rate of trisomy 21 for each age group is then the mean of these scores, and the standard deviation is similarly the standard deviation of these scores for each age group. The t-test was then performed for statistical significance using: $$ t=\frac{\overline{x_1}-\overline{x_2}}{\sqrt{\left(\frac{\left(n_1-1\right)s_1^2+\left(n_2-1\right)s_2^2}{n_1+n_2-2}\right)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} $$ and a p-value was evaluated using $n_1+n_2-2$ degrees of freedom. In the mice population, the females were divided into two age groups: below and above 200 days. The average maternal age for the first group was approximately 130 days and for the second it was 250 days. The transmission probability for each progeny was calculated as the ratio of the number of trisomic pups to the litter size. The statistics for the mice population were performed using a two-sample t-test on the two sets (young and old mothers) of transmission-of-trisomy probabilities. Rejection of the null hypothesis indicates that the means of the two sets are different from each other.
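The pooled two-sample t statistic above can be sketched in a few lines. This is a minimal illustration of the formula, not the authors' code, and the 0/1 scores in the demo are invented:

```python
import math

def pooled_t_statistic(x1, x2):
    """Student's two-sample t statistic with pooled variance,
    matching the formula given in the Methods section."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    s1_sq = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    s2_sq = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    pooled = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # statistic and degrees of freedom

# Hypothetical per-birth scores (1 = trisomic birth, 0 = healthy),
# invented purely for illustration:
young = [1, 0, 1, 1]
old = [0, 0, 0, 1]
t, df = pooled_t_statistic(young, old)
```

The same function applies unchanged to the per-litter transmission probabilities and to the litter sizes compared in the Results.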
Since the older females have lower fertility, some of the graphs presented were obtained with small n's. [42, 43] have shown that the t-test holds to a good approximation for n's as low as four. Similarly, we performed a t-test between the two maternal age groups on the litter size. The probability of trisomic pups in trisomic Tc1 DS model females decreases with age We genotyped progeny of 75 Tc1 DS model females and analyzed the rate of the trisomy with respect to the female's age. Maternal age was grouped in one of two groups: below or above 200 days. In a subset of the data (57 out of the 75 progenies), where the genotypes of all male and female pups were identified, the average probabilities for having a trisomic pup in the two groups were 0.36 ± 0.04 (n = 46) for younger females and 0.05 ± 0.05 (n = 11) for older females (Total 57 = 46 + 11). The mean maternal age of the two groups was 131 and 253 days (Fig. 1a). Thus, the probability to transmit the extra chromosome was significantly lower in the group of older trisomic mothers (p = 0.0011). When analyzing the entire data set (including the cases in which the genotypes of the male pups were not assayed, total of 75, see Methods), the rates of transmission of trisomy were similar: 0.37 ± 0.03 (n = 59) and 0.04 ± 0.04 (n = 16) for the younger and older trisomic mother groups, respectively (p = 0.00055) (Total 75 = 59 + 16). The mean maternal age was 128 days and 254 days (Fig. 1b). Transmission of trisomy reduces with maternal age in Tc1 model mice. a Probability of a trisomic pup for females younger than 200 days (n = 46) is ~ 10 fold higher than for females older than 200 days (n = 11, p = 0.0011). The analysis was performed only for litters where the genotype for the entire progeny is known. b Probability of a trisomic pup for females younger than 200 days (n = 59) is ~ 10 fold higher than for females older than 200 days (n = 16, p = 0.000055). The analysis was performed for the entire dataset.
For some of the dataset only female information exists (see Methods).

Litter size of trisomic Tc1 DS model females decreases with age

We genotyped progeny of 57 Tc1 DS model females and analyzed the litter size as a function of maternal age (the subset of the data, out of the entire 75, for which we had the genotyping of both the male and female pups). Maternal age was grouped into one of two groups: below or above 200 days, with average ages of 131 and 253 days. The average litter size for the younger and older maternal age groups was 4.8 ± 0.4 (n = 46) and 2.5 ± 0.4 (n = 11), respectively (Fig. 2). Thus, the litter size of the older trisomic mothers' age group was significantly smaller than that of the younger group (p = 0.008).

Litter size decreases with maternal age in DS model mice. Litter size of females younger than 200 days (n = 46) is ~ 2 fold higher than for females older than 200 days (n = 11, p = 0.0081). The analysis was performed only for litters where the genotype for the entire progeny is known.

The probability to have trisomic pups in trisomic Ts65Dn DS model females decreases with age

We genotyped progeny of 112 Ts65Dn DS model females and analyzed the rate of the trisomy with respect to the female's age. Maternal age was grouped into one of two groups: below or above 200 days. In a subset of the data (77 out of the 112 progenies), where the genotypes of all male and female pups were identified, the average probabilities for having a trisomic pup in the two groups were 0.46 ± 0.03 (n = 73) for younger females and 0.37 ± 0.14 (n = 4) for older females (p = 0.52) (Fig. 3a) (Total 77 = 73 + 4). The average maternal age was 109 days for the younger group and 226 days for the older females.
When analyzing the entire data set (including the cases in which the genotypes of the male pups were not assayed; total of 112, see Methods), the rates of transmission of trisomy were 0.42 ± 0.03 (n = 101) for the younger females and 0.14 ± 0.07 (n = 11) for the older females, p = 0.0029 (Total 112 = 101 + 11). The average maternal age was 110 days and 281 days in the two groups (Fig. 3b).

Transmission of trisomy reduces with maternal age in Ts65Dn model mice. a Probability of a trisomic pup for females younger than 200 days (n = 73) and for females older than 200 days (n = 4, p = 0.52). The analysis was performed only for litters where the genotype for the entire progeny is known. b Probability of a trisomic pup for females younger than 200 days (n = 101) is ~ 3 fold higher than for females older than 200 days (n = 11, p = 0.0029). The analysis was performed for the entire dataset. For some of the dataset only female information exists (see Methods).

Litter size in trisomic Ts65Dn DS model females decreases with age

We genotyped progeny of 77 Ts65Dn DS model females and analyzed the litter size as a function of maternal age (the subset of the data, out of the entire 112, for which we had the genotyping of both the male and female pups). Maternal age was grouped into one of two age groups: below or above 200 days, with average ages of 109 and 226 days. The average litter sizes for the younger and older maternal age groups were 5 ± 0.3 (n = 73) and 2.8 ± 0.3 (n = 4), respectively (Fig. 4). Thus, the older age group of trisomic mothers exhibited significantly smaller litter sizes (p = 0.0213).

Litter size decreases with maternal age in DS model mice. Litter size of females younger than 200 days (n = 73) is ~ 2 fold higher than for females older than 200 days (n = 4, p = 0.0213).
The analysis was performed only for litters where the genotype for the entire progeny is known.

Women with DS become less likely to deliver a DS child with increased age

We calculated the rate of DS (trisomy 21) babies as a function of maternal age in n = 30 reported cases of DS mother deliveries [44–50]. The dataset we analyzed included DS mothers of ages 17–35 years. We divided these cases into two age groups: less than 25, and 25 and above. Figure 5 shows the frequency of trisomy 21 babies in each of the age groups (with average maternal ages of 20 and 30 years). The transmission rates in the younger and older age groups were 0.47 ± 0.12 (n = 17) and 0.17 ± 0.11 (n = 13), respectively, p = 0.042. Our analysis suggests that the probability of delivering a DS trisomic baby by a DS mother decreases with maternal age by approximately 3 fold.

Analysis of previously reported cases of Down syndrome women reveals a lower probability for a baby with DS as the mother ages. N = 30 cases of Down syndrome mothers were analyzed in two age groups: above and below 25 years of age. For DS mothers with an average age of 20 the probability of having a DS child is approximately 3 times higher than for women with an average age of 30 (p = 0.042).

Mouse vs. human mothers

Figure 6 compares data from human DS mothers and mouse DS models, divided into 4 and 7 age groups, respectively. Figure 6a shows that the trisomy transmission rate falls similarly for both DS mouse models, from 0.43 ± 0.06 in 75-day-old females to ~0 for females older than 300 days. Figure 6b shows that the data from human DS mothers mirror this trend. The frequency of trisomy 21 babies as a function of maternal age falls from 0.4 ± 0.19 in the 17.5 years age group to 0 in the 32.5 years age group. We therefore suggest the two DS model mice as model organisms in which the trend of in-uterus selection against trisomy 21 embryos can be studied. This presents a new toolset with which this question can be addressed.
A gradual decrease in probability to transmit trisomy to offspring with maternal age is seen both in DS model mice and in DS women. a Maternal age in DS model mice was grouped into bins of 45 days. A gradual and similar decrease in the probability to transmit the trisomy with maternal age is seen for both Tc1 and Ts65Dn model mice. b Maternal age in human DS was grouped into four 5-year bins. Similar to the mice, a gradual decrease in the probability to transmit trisomy with maternal age is evident.

Both Ts65Dn and Tc1 DS mouse models have reduced fertility when compared to WT mice. The ratio of conceptions to matings decreases drastically with maternal age: we approximate it to be ~1:3 in younger females (~3 months old), falling to approximately 1:10 as the female ages (~8 months old). The litter size is smaller than the WT litter size and, as shown here, also decreases with maternal age. We show that older females with DS exhibit a significantly reduced chance of giving birth to offspring with DS. Litter size in mice is known to decrease with maternal age. For example, in [51, 52] it was reported that in Bj44Gy mice a young female has an average litter size of ~7 pups, while a 9–10-month-old female has an average litter size of less than 3. One possible cause for the decrease in litter size is an increased rate of mutations with increasing maternal age, which raises the chances of producing unviable embryos. Another possibility is that the uterus of older females provides a less supportive environment. The answer provided in [51] leans towards the second possibility, showing that although the anomalous-embryo rate does go up from 4.8 % in young females (2–7 months old) to 12.1 % in old ones (8–12 months old), this still does not explain the drastic reduction in litter size. Finn [13] also attributes the reduction in progeny size to the lack of a supportive environment in the old uterus.
Transplantation of embryos into young or old hosts was used [52] to show that only 14 % of embryos transplanted into old females survived to term, while 48 % survived in young females, suggesting that older females do indeed have a less favorable uterine environment. These results point to an interesting balance of competing pressures that together determine the ratio of DS births in healthy human females as they age. On one hand, the number of aneuploidies in the gamete rises with age [5, 11, 53]. On the other hand, healthy gametes and embryos are strongly selected [10, 11]. Less favorable conditions in the older uterus may negatively impact the survival of less fit embryos, but this was not directly demonstrated before. Observing the progeny of DS women and of DS model female mice allows the isolation of the negative selection of the older uterus on the survival of DS babies or trisomic pups. When the female is healthy there is an a priori higher chance for trisomy as the female ages due to meiotic errors, but the use of DS trisomic females (both mice and women) gives a fixed a priori chance of ~50 % for a trisomic embryo irrespective of age. Thus, the rate of trisomy may predominantly depend on the survival of these embryos in the uterus. In addition to uterine selection, one should consider the possibility that the increase in the rate of chromosomal abnormalities with maternal age may preferentially impact the survival of trisomic embryos. However, the change in the spontaneous rate of anomalous mouse embryos for older females is less than 10 % [53]. In humans, we can estimate an upper bound for the possible effect by noting that the rate of aneuploidy is reported to increase from about 3–15 % in young mothers to about 30 % in older ones [10, 54, 55]. Of these genetic anomalies, more than 90 % are lethal even on a normal genetic background [54, 56].
To the best of our knowledge, these additional abnormalities are statistically independent of the inherited genetic background of the embryo. Thus, when considering how the fraction of live DS offspring changes with the age of a DS mother, even under the strictest of assumptions the maximal effect of an additional aneuploidy is minuscule (about 20 fold smaller than the three fold change we observe). Indeed, the design of our experiment ensures that the dominant genetic pathology in the offspring is the DS one. These considerations lead us to favor the possibility of uterine selection against DS offspring over an effect of additional aneuploidy. Our study, however, does not identify the time point of gestation at which the DS offspring are lost. Collectively, previous findings and ours demonstrate that in rodents and in humans age is negatively correlated with the survival of trisomic progeny. In Tc1 and Ts65Dn DS model mouse females, both the litter size and the probability to produce trisomic pups decrease with age. Interestingly, E9.5 Ts65Dn embryos exhibit the expected Mendelian ratio of DS-positive embryos [20], but older DS-positive embryos and pups are lost at a higher rate than euploid ones. The similar distributions we show for the rate of trisomy as a function of maternal age (Fig. 6a) suggest that the extrapolation to the embryonic phase would give a similar distribution for Tc1 females as was shown in [39] for Ts65Dn females, i.e., a Mendelian distribution for inheritance of the extra chromosome. Comparison with Fig. 6b for women suggests that these rates can be used to model the corresponding rates in a human population. In particular, DS women likely have an approximately Mendelian distribution of transmitting the extra human chromosome 21 to their babies. The similarities between the mouse and human distributions offer a tool for measurement of selective pressure against trisomic progeny.
We have shown that in two completely genetically different Down syndrome mouse models, the trisomic female has a significantly reduced probability of having viable Down syndrome pups with increasing age. An analysis of reported case studies of Down syndrome human mothers similarly shows a significantly reduced probability of delivering a baby with Down syndrome as the mother ages. These results are a strong indication of increased in-utero selection against Down syndrome offspring with increased maternal age. The study of trisomic mothers allows observation of this selection mechanism against the trisomic offspring, since for diploid females it is masked by the large increase in meiotic errors with maternal age. DNA, deoxyribonucleic acid; DS, Down syndrome; HSA, Homo sapiens; PCR, polymerase chain reaction Smith DS. Health care management of adults with Down syndrome. Am Fam Physician. 2001;64(6):1031–8. van Allen MI, Fung J, Jurenka SB. Health care concerns and guidelines for adults with Down syndrome. Am J Med Genet. 1999;89(2):100–10. Hassold T, Hunt P. To err (meiotically) is human: the genesis of human aneuploidy. Nat Rev Genet. 2001;2(4):280–91. Serra A, Neri G. Trisomy 21: conference report and 1990 update. Am J Med Genet Suppl. 1990;7:11–9. Gaulden ME. Maternal age effect: the enigma of Down syndrome and other trisomic conditions. Mutat Res. 1992;296(1–2):69–88. Nicolaides KH. Nuchal translucency and other first-trimester sonographic markers of chromosomal abnormalities. Am J Obstet Gynecol. 2004;191(1):45–67. Frid C, Drott P, Otterblad Olausson P, Sundelin C, Anneren G. Maternal and neonatal factors and mortality in children with Down syndrome born in 1973–1980 and 1995–1998. Acta Paediatr. 2004;93(1):106–12. Halliday JL, Watson LF, Lumley J, Danks DM, Sheffield LJ. New estimates of Down syndrome risks at chorionic villus sampling, amniocentesis, and livebirth in women of advanced maternal age from a uniquely defined population. Prenat Diagn.
1995;15(5):455–65. Morris JK, Wald NJ, Watt HC. Fetal loss in Down syndrome pregnancies. Prenat Diagn. 1999;19(2):142–5. Hassold T, Abruzzo M, Adkins K, Griffin D, Merrill M, Millie E, Saker D, Shen J, Zaragoza M. Human aneuploidy: incidence, origin, and etiology. Environ Mol Mutagen. 1996;28(3):167–75. Nagaoka SI, Hassold TJ, Hunt PA. Human aneuploidy: mechanisms and new insights into an age-old problem. Nat Rev Genet. 2012;13(7):493–504. Kong S, Zhang S, Chen Y, Wang W, Wang B, Chen Q, Duan E, Wang H. Determinants of uterine aging: lessons from rodent models. Sci China Life Sci. 2012;55(8):687–93. Finn CA. Embryonic death in aged mice. Nature. 1962;194:499–500. Brosens JJ, Salker MS, Teklenburg G, Nautiyal J, Salter S, Lucas ES, Steel JH, Christian M, Chan YW, Boomsma CM et al. Uterine selection of human embryos at implantation. Sci Rep. 2014;4:3894. Macklon NS, Brosens JJ. The human endometrium as a sensor of embryo quality. Biol Reprod. 2014;91(4):98. Nybo Andersen AM, Wohlfahrt J, Christens P, Olsen J, Melbye M. Maternal age and fetal loss: population based register linkage study. BMJ. 2000;320(7251):1708–12. Menken J, Trussell J, Larsen U. Age and infertility. Science. 1986;233(4771):1389–94. Allen EG, Freeman SB, Druschel C, Hobbs CA, O'Leary LA, Romitti PA, Royle MH, Torfs CP, Sherman SL. Maternal age and risk for trisomy 21 assessed by the origin of chromosome nondisjunction: a report from the Atlanta and National Down Syndrome Projects. Hum Genet. 2009;125(1):41–52. Petersen MB, Mikkelsen M. Nondisjunction in trisomy 21: origin and mechanisms. Cytogenet Cell Genet. 2000;91(1–4):199–203. Rowsey R, Kashevarova A, Murdoch B, Dickenson C, Woodruff T, Cheng E, Hunt P, Hassold T. Germline mosaicism does not explain the maternal age effect on trisomy. Am J Med Genet A. 2013;161A(10):2493–503. Ayme S, Lippman-Hand A. Maternal-age effect in aneuploidy: does altered embryonic selection play a role? Am J Hum Genet. 1982;34(4):558–65. 
Kline J, Stein Z, Susser M, Warburton D. Induced abortion and the chromosomal characteristics of subsequent miscarriages (spontaneous abortions). Am J Epidemiol. 1986;123(6):1066–79. Stein Z, Susser M, Warburton D, Wittes J, Kline J. Spontaneous abortion as a screening device. The effect of fetal survival on the incidence of birth defects. Am J Epidemiol. 1975;102(4):275–90. Drugan A, Yaron Y, Zamir R, Ebrahim SA, Johnson MP, Evans MI. Differential effect of advanced maternal age on prenatal diagnosis of trisomies 13, 18 and 21. Fetal Diagn Ther. 1999;14(3):181–4. Neuhauser M, Krackow S. Adaptive-filtering of trisomy 21: risk of Down syndrome depends on family size and age of previous child. Naturwissenschaften. 2007;94(2):117–21. Spandorfer SD, Davis OK, Barmat LI, Chung PH, Rosenwaks Z. Relationship between maternal age and aneuploidy in in vitro fertilization pregnancy loss. Fertil Steril. 2004;81(5):1265–9. Warburton D. The effect of maternal age on the frequency of trisomy: change in meiosis or in utero selection? Prog Clin Biol Res. 1989;311:165–81. Hook EB. Down syndrome rates and relaxed selection at older maternal ages. Am J Hum Genet. 1983;35(6):1307–13. Fragouli E, Wells D, Whalley KM, Mills JA, Faed MJ, Delhanty JD. Increased susceptibility to maternal aneuploidy demonstrated by comparative genomic hybridization analysis of human MII oocytes and first polar bodies. Cytogenet Genome Res. 2006;114(1):30–8. Chiang T, Schultz RM, Lampson MA. Meiotic origins of maternal age-related aneuploidy. Biol Reprod. 2012;86(1):1–7. O'Doherty A, Ruf S, Mulligan C, Hildreth V, Errington ML, Cooke S, Sesay A, Modino S, Vanes L, Hernandez D et al. An aneuploid mouse strain carrying human chromosome 21 with Down syndrome phenotypes. Science. 2005;309(5743):2033–7. Davisson MT, Schmidt C, Reeves RH, Irving NG, Akeson EC, Harris BS, Bronson RT. Segmental trisomy as a mouse model for Down syndrome. Prog Clin Biol Res. 1993;384:117–33. 
Akeson EC, Lambert JP, Narayanswami S, Gardiner K, Bechtel LJ, Davisson MT. Ts65Dn -- localization of the translocation breakpoint and trisomic gene content in a mouse model for Down syndrome. Cytogenet Cell Genet. 2001;93(3–4):270–6. Dunlevy L, Bennett M, Slender A, Lana-Elola E, Tybulewicz VL, Fisher EM, Mohun T. Down's syndrome-like cardiac developmental defects in embryos of the transchromosomic Tc1 mouse. Cardiovasc Res. 2010;88(2):287–95. Williams AD, Mjaatvedt CH, Moore CS. Characterization of the cardiac phenotype in neonatal Ts65Dn mice. Dev Dyn. 2008;237(2):426–35. Reeves RH, Irving NG, Moran TH, Wohn A, Kitt C, Sisodia SS, Schmidt C, Bronson RT, Davisson MT. A mouse model for Down syndrome exhibits learning and behaviour deficits. Nat Genet. 1995;11(2):177–84. Kleschevnikov AM, Belichenko PV, Villar AJ, Epstein CJ, Malenka RC, Mobley WC. Hippocampal long-term potentiation suppressed by increased inhibition in the Ts65Dn mouse, a genetic model of Down syndrome. J Neurosci. 2004;24(37):8153–60. Stern S, Segal M, Moses E. Involvement of Potassium and Cation Channels in Hippocampal Abnormalities of Embryonic Ts65Dn and Tc1 Trisomic Mice. EBioMedicine. 2015;2(9):1048–62. Roper RJ, St John HK, Philip J, Lawler A, Reeves RH. Perinatal loss of Ts65Dn Down syndrome mice. Genetics. 2006;172(1):437–43. Reinholdt LG, Ding Y, Gilbert GJ, Czechanski A, Solzak JP, Roper RJ, Johnson MT, Donahue LR, Lutz C, Davisson MT. Molecular characterization of the translocation breakpoints in the Down syndrome mouse model Ts65Dn. Mamm Genome. 2011;22(11–12):685–91. Jackson. Protocol for genotyping TC1 mice. 2010. Student. The Probable Error of a Mean. Biometrika. 1908;6(1):1–25. Winter JCF. Using the Student's t-test with extremely small sample sizes. Practical Assessment, Research and Evaluation. 2013;18(10):1-12. Bovicelli L, Orsini LF, Rizzo N, Montacuti V, Bacchetta M. Reproduction in Down syndrome. Obstet Gynecol. 1982;59(6 Suppl):13S–7S. 
Kaushal MBA, Kadi P, Karandae J, Baxi D. Woman with Down Syndrome delivered a Normal Child. Int J Infertil Fetal Med. 2010;1:45–7. Kristesashvili DI. Offspring of patients with Down syndrome. Genetika. 1988;24(9):1704–6. Liu XY, Jiang YT, Wang RX, Luo LL, Liu YH, Liu RZ. Inheritance of balanced translocation t(17; 22) from a Down syndrome mother to a phenotypically normal daughter. Genet Mol Res. 2015;14(3):10267–72. Santo LMAMLDE. Marriage and reproduction in a woman with Down syndrome. International Medical Review on Down's Syndrome. 2013;17(3):39–42. Scharrer S, Stengel-Rutkowski S, Rodewald-Rudescu A, Erdlen E, Zang KD. Reproduction in a female patient with Down's syndrome. Case report of a 46, XY child showing slight phenotypical anomalies, born to a 47, XX, + 21 mother. Humangenetik. 1975;26(3):207–14. Shobha Rani A, Jyothi A, Reddy PP, Reddy OS. Reproduction in Down's syndrome. Int J Gynaecol Obstet. 1990;31(1):81–6. Gosden RG. Chromosomal anomalies of preimplantation mouse embryos in relation to maternal age. J Reproduction Fertility. 1973;35(2):351–4. Talbert GB, Krohn PL. Effect of maternal age on viability of ova and uterine support of pregnancy in mice. J Reprod Fertil. 1966;11(3):399–406. Dailey T, Dale B, Cohen J, Munne S. Association between nondisjunction and maternal age in meiosis-II human oocytes. Am J Hum Genet. 1996;59(1):176–84. Bahce M, Cohen J, Munne S. Preimplantation genetic diagnosis of aneuploidy: were we looking at the wrong chromosomes? J Assist Reprod Genet. 1999;16(4):176–81. Rabinowitz M, Ryan A, Gemelos G, Hill M, Baner J, Cinnioglu C, Banjevic M, Potter D, Petrov DA, Demko Z. Origins and rates of aneuploidy in human blastomeres. Fertil Steril. 2012;97(2):395–401. Munne S, Bahce M, Sandalinas M, Escudero T, Marquez C, Velilla E, Colls P, Oter M, Alikani M, Cohen J. Differences in chromosome susceptibility to aneuploidy and survival to first trimester. Reproductive biomedicine online. 2004; 8(1):81-90. 
The authors thank Elizabeth Fisher for supplying the Tc1 mice and for many helpful suggestions and remarks, and Menahem Segal for very helpful discussions. This work was supported by the Israel Science Foundation grant 1415/12, the Minerva Foundation, Germany and by the Clore Center for Biological Physics. The dataset is available at: https://figshare.com/s/285253ff94f8fde092bf. SS carried out genotyping of the mice. SS carried out analysis of mice and human data. SS participated in the design of the study and in writing of the manuscript. DB carried out analysis of human data and participated in writing of the manuscript. EM carried out dissections and design of the study. EM participated in writing of the manuscript. All authors read and approved the final manuscript. Laboratory of Genetics, Salk Institute for Biological Studies, 10010 N. Torrey Pines Rd., La Jolla, CA, 92037, USA Shani Stern Department of Physics, James Franck Institute and the Institute for Biophysical Dynamics, University of Chicago, 929 E. 57th St GCIS E139F, Chicago, IL, 60637, USA David Biron Department of Physics of Complex Systems, Weizmann Institute of Science, P.O. Box 26, Rehovot, 76100, Israel Elisha Moses Correspondence to Elisha Moses. Additional file 1: Figure S1. Genotyping Tc1. An example picture of a gel used during genotyping. Two lines refer to a Tc1 positive trisomic pup. One line refers to a disomic pup. (EPS 1781 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
The Basics of Emitter-Coupled Logic July 12, 2018 by Steve Arar This article will review the operation of a basic ECL inverter/buffer, and then we'll look at some of the most important features of this logic family. Emitter-coupled logic (ECL) is a BJT-based logic family that is generally considered the fastest logic available. ECL achieves its high-speed operation by employing a relatively small voltage swing and preventing the transistors from entering the saturation region. In the late 1960s, when the standard TTL family offered 20 ns gate delay and the CMOS 4000 family had delays of 100 ns or more, ECL offered an incredible delay of only 1 ns! Emitter-Coupled Logic Emitter-coupled logic is a high-speed bipolar logic family. To get familiar with this logic, let's examine an ECL inverter/buffer as shown in Figure 1. In this figure, $$V_{in}$$ is the input of the gate, $$V_{out-}$$ is the inverted version of $$V_{in}$$, and $$V_{out+}$$ is the complement of $$V_{out-}$$. In this particular example, $$V_{out+}$$ can be considered the buffered version of the input. Moreover, $$V_{BB}$$ is an appropriate reference voltage (4 V in Figure 1). Let's define logic high and logic low as 4.4 V and 3.6 V, respectively, and examine the operation of the circuit in Figure 1. Figure 1. An ECL inverter/buffer Assume that $$V_{in}$$ is logic high (4.4 V); hence, the emitter of Q1 will be at about 4.4 − 0.6 = 3.8 V. Therefore, the base-emitter voltage of Q2 will be 0.2 V. This base-emitter voltage is not sufficient to turn Q2 on. Hence, the resistor R2 will pull the collector of Q2 up to $$V_{CC}$$ = 5 V. To calculate the collector voltage $$V_{c1}$$, we should note that the current flowing through R3, which is $$\tfrac{3.8V}{1.3k \Omega}=2.92mA$$, will go through Q1. Hence, we obtain $$V_{c1} = 5V-300 \Omega \times 2.92mA=4.12V$$ (to simplify the calculations, we've assumed that the collector current is equal to the emitter current).
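As a rough numerical check, the DC analysis above can be scripted. The component values follow Figure 1 as described in the text; the constant 0.6 V base-emitter drop and the function name are simplifying assumptions of this sketch, not part of the original article.

```python
# DC sketch of the Figure 1 ECL inverter/buffer. Values follow the text
# (VCC = 5 V, VBB = 4 V, R1 = R2 = 300 ohm, R3 = 1.3 kohm); the fixed
# 0.6 V base-emitter drop is an assumed simplification.

VCC = 5.0      # supply voltage (V)
VBE = 0.6      # assumed drop across a conducting base-emitter junction (V)
R_C = 300.0    # collector resistors R1 and R2 (ohm)
R_E = 1300.0   # shared emitter resistor R3 (ohm)

def ecl_outputs(v_in, v_bb=4.0):
    """Return (V_out-, V_out+) of the Figure 1 gate for a given input."""
    if v_in > v_bb:                  # Q1 steers the tail current, Q2 is off
        v_e = v_in - VBE
        i_tail = v_e / R_E
        v_c1 = VCC - R_C * i_tail    # collector of Q1 (inverting side)
        v_c2 = VCC                   # R2 pulls Q2's collector to VCC
    else:                            # Q2 steers the tail current, Q1 is off
        v_e = v_bb - VBE
        i_tail = v_e / R_E
        v_c1 = VCC
        v_c2 = VCC - R_C * i_tail
    # Worst-case collector-emitter voltage stays above the ~0.2 V
    # saturation voltage, so neither input transistor saturates
    assert min(v_c1, v_c2) - v_e > 0.2
    # Emitter followers Q3 and Q4 shift each collector down by one VBE
    return v_c1 - VBE, v_c2 - VBE

# Logic-high input: V_out+ = 4.4 V and V_out- ≈ 3.52 V, matching the text
print(ecl_outputs(4.4))
```

Running it with a logic-low input (3.6 V) reproduces the complementary case discussed next, with V_out+ landing near 3.61 V.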
The emitter followers Q3 and Q4 act as buffers to pass the (DC-level-shifted) collector voltages of Q1 and Q2 to the final outputs of the ECL gate, $$V_{out-}$$ and $$V_{out+}$$. Assuming a base-emitter voltage of 0.6 V for Q3 and Q4, we obtain $$V_{out+}$$ = 4.4 V and $$V_{out-}$$ = 3.52 V. As you can see, applying logic high to the input gives a logic high at $$V_{out+}$$ and a voltage level very close to the defined logic low (3.6 V) at $$V_{out-}$$. Hence, the circuit of Figure 1 serves as an inverter/buffer. If we apply the logic-low voltage (3.6 V) to the input of the gate, Q2 will turn on and Q1 will be off. This will lead to a logic high at $$V_{out-}$$ and a voltage level very close to the logic low (3.61 V) at $$V_{out+}$$. Now that you're familiar with the ECL inverter/buffer, you should be able to verify that the circuit of Figure 2 implements an OR function of a and b or a NOR function of a and b, depending on how the positive and negative outputs are used. Low Voltage Swing As you can see, the voltage difference between logic high and low of an ECL gate is much less than that of a CMOS or a TTL logic gate. This low voltage difference reduces the time required to make a transition from logic high to logic low or vice versa. As a result, ECL logic offers higher-frequency operation. Avoiding Saturation In addition to the low voltage difference between the logic levels, there's another mechanism that significantly contributes to the high-speed operation of an ECL gate. The trick is to prevent the bipolar transistors from entering the saturation region. Turning off a saturated bipolar transistor requires removing or recombining some of the carriers generated in the transistor's base region. If we apply a high-to-low transition to the input of a saturated BJT, the transistor output won't change until the charge in the base is removed. This introduces an extra delay, called storage time, to the operation of a BJT employed as a switch.
After the storage time, the transistor comes out of saturation and the output of the transistor starts to respond to the input. If appropriate resistor values are chosen, ECL logic prevents transistors from entering saturation. For example, in Figure 1, R1, R2, and R3 are chosen such that the collector voltage of Q1 and Q2 cannot be less than about 4.1 V. Based on the above discussion, the maximum emitter voltage of Q1 and Q2 is about 3.8 V. Hence, the collector-emitter voltage of these two transistors is always more than $$V_{C(min)}-V_{E(max)}$$=4.1 V-3.8 V=0.3 V. This is larger than the collector-emitter saturation voltage which is about 0.2 V. Therefore, Q1 and Q2 cannot enter the saturation region. As discussed above, ECL avoids the storage-time problem by properly choosing the resistor values. Since the storage time can account for a significant portion of the propagation delay in other logic families, there are several other methods to reduce this undesired effect. Positive-Referenced ECL It's worth mentioning that old ECL families used a negative supply voltage, as shown in Figure 3. That's why an ECL gate such as Figure 1, which uses a positive supply voltage, is referred to as positive-referenced ECL or PECL (pronounced "peckle"). Noise immunity was the main reason for using a negative power supply with the early ECL gates. As the analysis of the ECL inverter/buffer shows, the output voltages of an ECL gate depend on the value of $$V_{CC}$$. For example, the logic high is equal to $$V_{CC}-V_{BE}$$, where $$V_{BE}$$ is the base-emitter voltage drop of the emitter followers. The logic low is $$V_{CC}-V_{BE}-V_{gate}$$, where $$V_{gate}$$ is the voltage difference between logic high and low, which is determined by the value of the resistors. Therefore, any noise on $$V_{CC}$$ will directly affect the ECL gate's output voltages. It is generally easier to achieve a stable, low-noise ground node than a stable, low-noise power-supply voltage. 
The early ECL families used a negative supply, and ground was used as the reference for the gate's output voltages; this led to better noise immunity. However, PECL became popular because it interfaces more easily to other logic families such as TTL. If a negative power supply is used, a clean ground needs to be distributed throughout the ECL-based portion of the design. The same considerations should be applied to power supply distribution when using positive-referenced ECL. For example, if both TTL and ECL are used in the system, it is recommended to use separate power planes for the two logic families so that the TTL switching transients don't affect ECL operation. In Figure 1, we saw that changing the logic state of the input makes the current flow through either Q1 or Q2. However, it should be noted that the total current flowing through Q1 and Q2 is almost the same for a logic-high input as it is for a logic-low input. As a result, the power dissipation of the first stage of the ECL circuit is almost constant. During voltage transitions, CMOS logic gates cause transient disturbances in the power-supply voltage. A major advantage of ECL is that the current-steering behavior of the input stage (i.e., Q1 and Q2) does not cause disturbances in the way that CMOS switching does. However, this noise performance is achieved at the cost of burning more static power. Note that a CMOS gate consumes power only during voltage transitions, whereas the differential pair formed by Q1 and Q2 (see Figure 1) almost always draws about $$\tfrac{4V}{1.3k \Omega} \approx 3mA$$ from $$V_{CC}$$. If we focus on static power consumption, ECL is a high-power logic family. However, if we consider dynamic power consumption, ECL can be more efficient than CMOS, especially as the frequency of operation increases. This is shown in Figure 4. Figure 4. Image courtesy of ON Semiconductor. 
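The crossover behavior shown in Figure 4 can be reproduced qualitatively with a back-of-the-envelope model. The ~3 mA ECL tail current follows the Figure 1 analysis, but the CMOS switched-capacitance and supply values below are invented here purely to place the crossover near 20 MHz; real curves depend on the specific devices.

```python
# Back-of-the-envelope supply-current comparison. I_ECL is the roughly
# constant differential-pair current from the Figure 1 analysis; C_LOAD
# and V_DD are assumed CMOS parameters chosen only for illustration.

I_ECL = 3e-3          # ECL differential pair: ~3 mA at any frequency (A)
C_LOAD = 30e-12       # assumed switched capacitance of a CMOS stage (F)
V_DD = 5.0            # assumed CMOS supply voltage (V)

def i_cmos(f_hz):
    """Average dynamic supply current of a CMOS stage: I = C * V * f."""
    return C_LOAD * V_DD * f_hz

# Frequency at which the CMOS dynamic current matches the ECL static one
f_cross = I_ECL / (C_LOAD * V_DD)
print(round(f_cross / 1e6))  # -> 20 (MHz, by construction of the parameters)
```

Below this crossover the constant ECL current dominates; above it, the linearly growing CMOS dynamic current makes ECL the more efficient choice.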
Below 20 MHz, ECL draws more supply current than CMOS, but as we go beyond this frequency, ECL becomes more efficient. This is why ECL is an attractive solution for high-frequency clock distribution. As a final note, the emitter followers (see Figure 1) must provide large output currents to charge load capacitances, and consequently they can cause significant transient deviations in the supply voltage. Thus, in some cases it is advisable to use two separate power supply lines: one for the input stage and one for the emitter followers. This can prevent the power-supply disturbances generated by the emitter followers from contaminating the ECL differential pair. ECL is considered to be a very high-speed logic family. It achieves its high-speed operation by employing a relatively small voltage swing and preventing the transistors from entering the saturation region. An ECL implementation that uses a positive supply voltage is referred to as positive-referenced ECL or PECL. Noise immunity was the main reason for using a negative supply voltage with the early ECL gates. Later, PECL became popular because its logic levels are more compatible with those of other logic families such as TTL. ECL dissipates a relatively large amount of static power, but its overall current consumption is lower than that of CMOS at high frequencies. Thus, ECL is particularly advantageous in clock-distribution circuits and other high-frequency applications.
The Hausdorff–Young inequality on Lie groups
Mathematische Annalen, Jan 2019
Michael G. Cowling, Alessio Martini, Detlef Müller, Javier Parcet
We prove several results about the best constants in the Hausdorff–Young inequality for noncommutative groups. In particular, we establish a sharp local central version for compact Lie groups, and extend known results for the Heisenberg group. In addition, we prove a universal lower bound to the best constant for general Lie groups.
Open Access Article. First Online: 09 January 2019. Mathematics Subject Classification: 22E30, 43A15, 43A30. Communicated by Loukas Grafakos. Michael G. Cowling was supported by the Australian Research Council (Grant DP170103025). Alessio Martini was supported by the Engineering and Physical Sciences Research Council (Grant EP/P002447/1). Javier Parcet was supported by the Europa Excelencia Grant MTM2016-81700-ERC and the CSIC Grant PIE-201650E030. 1 Introduction For \(f \in L^1(\mathbb {R}^n)\), define the Fourier transform \({\hat{f}}\) of f by $$\begin{aligned} {\hat{f}}(\xi ) = \int _{\mathbb {R}^n} f(x) \, e^{2\pi i \xi \cdot x} \, dx \qquad \forall \xi \in \mathbb {R}^n. \end{aligned}$$ Then the Riemann–Lebesgue lemma states that \({\hat{f}} \in C_0(\mathbb {R}^n)\) and $$\begin{aligned} \Vert {\hat{f}} \Vert _\infty \le \Vert f\Vert _1. \end{aligned}$$ Further, the Plancherel theorem entails that if \(f \in L^2(\mathbb {R}^n)\), then $$\begin{aligned} \Vert {\hat{f}} \Vert _2 = \Vert f \Vert _2. \end{aligned}$$ Suppose that \(1 \le p \le 2\) and \(p'\) is the conjugate exponent to p, that is, \(1/p' = 1 - 1/p\).
Then interpolation implies the Hausdorff–Young inequality, namely, $$\begin{aligned} \Vert {\hat{f}} \Vert _{p'} \le C \Vert f\Vert _{p} \end{aligned}$$ (1.1) for all \(f \in L^p(\mathbb {R}^n)\), where \(C \le 1\). We denote the best constant for this inequality, that is, the smallest possible value of C, by \(H_p(\mathbb {R}^n)\). The exact value of this constant was found many years after the original inequality. We define the Babenko–Beckner constant \(B_p\) by $$\begin{aligned} B_p = \frac{ p^{1/2p} }{ (p')^{ 1/2p' } }. \end{aligned}$$ Then \(B_p < 1\) when \(1< p < 2\). Theorem 1.1 (Babenko [3], Beckner [6]) For all \(p \in [1,2]\), $$\begin{aligned} H_p(\mathbb {R}^n) = (B_p)^n. \end{aligned}$$ Babenko treated the case where \(p' \in 2 \mathbb {Z}\), and Beckner proved the general case. The extremal functions are gaussians; see [46] for an alternative proof. One can extend the Babenko–Beckner theorem to more general contexts than \(\mathbb {R}^n\), such as locally compact abelian groups G. For instance, the best constant \(H_p(G)\) for the inequality (1.1) when \(G = \mathbb {R}^a \times \mathbb {T}^b \times \mathbb {Z}^c\) is \((B_p)^a\). The extremal functions are of the form \(\gamma \otimes \chi \otimes \delta \), where \(\gamma \) is a gaussian on \(\mathbb {R}^a\), \(\chi \) is a character of \(\mathbb {T}^b\), and \(\delta \) is the characteristic function of a point in \(\mathbb {Z}^c\). For nonabelian groups, matters are more complicated, in part because the interpretation of the \(L^{q}\) norm of the Fourier transform for \(q \in (2,\infty )\) is trickier. We refer the reader to Sect. 2 below for details. General versions of the Hausdorff–Young inequality (1.1) were obtained by Kunze [43] and Terp [63] for arbitrary locally compact groups G, and a number of works in the literature are devoted to the study of the corresponding best constants \(H_p(G)\).
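Theorem 1.1 is easy to verify numerically in dimension one. The sketch below (our illustration, not from the paper) uses the fact that \(f(x) = e^{-\pi x^2}\) coincides with its own Fourier transform under the normalisation above, so the Hausdorff–Young ratio can be evaluated from f alone and should equal \(B_p\).

```python
import numpy as np

# Numerical check of Theorem 1.1 for n = 1: the gaussian exp(-pi x^2)
# is its own Fourier transform under the normalisation above, so
# ||fhat||_{p'} / ||f||_p can be computed from f alone; it should equal
# the Babenko-Beckner constant B_p.

def babenko_beckner(p):
    pp = p / (p - 1.0)               # conjugate exponent p'
    return p ** (1 / (2 * p)) / pp ** (1 / (2 * pp))

p = 1.5
pp = p / (p - 1.0)

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x ** 2)

def lp_norm(v, q):
    # Riemann-sum approximation of the L^q(R) norm on the grid
    return (np.sum(np.abs(v) ** q) * dx) ** (1.0 / q)

ratio = lp_norm(f, pp) / lp_norm(f, p)
print(f"ratio = {ratio:.6f}, B_p = {babenko_beckner(p):.6f}")
```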
It is known, at least in the unimodular case, that \(H_p(G)<1\) for \(p \in (1,2)\) if and only if G has no compact open subgroups [25, 56]. On the other hand, when \(H_p(G)\) is not 1, its value is known in only a few cases, and typically only for exponents p whose conjugate exponent is an even integer; in addition, as shown by Klein and Russo, extremal functions need not exist [38]. Recently, various authors considered local versions of the Hausdorff–Young inequality. Namely, for each neighbourhood U of the identity \(e \in G\), define \(H_p(G;U)\) as the best constant in the inequality (1.1) with the additional support constraint \({{\,\mathrm{supp}\,}}f \subseteq U\), and let \(H_p^\mathrm{loc}(G)\) be the infimum of the constants \(H_p(G;U)\). Clearly \(H_p^\mathrm{loc}(G) \le H_p(G)\), and equality holds whenever G has a contractive automorphism. For other groups, however, the inequality may be strict, which makes the study of \(H_p^\mathrm{loc}(G)\) interesting also for groups where \(H_p(G) = 1\), such as compact groups. Indeed, in the case of the torus \(G = \mathbb {T}^n\), the value of \(H_p^\mathrm{loc}(G)\) is known and is strictly less than 1 for \(p \in (1,2)\). Theorem 1.2 (Andersson [1, 2], Sjölin [61], Kamaly [35]) For all \(p \in [1,2]\), $$\begin{aligned} H_p^\mathrm{loc}(\mathbb {T}^n) = (B_p)^n. \end{aligned}$$ Here we are interested in analogues of the above result for noncommutative Lie groups G. We also study what happens when additional symmetries are imposed by restricting to functions f on G which are invariant under a compact group K of automorphisms of G. Let us denote by \(H_{p,K}(G)\) and \(H_{p,K}^\mathrm{loc}(G)\) the corresponding global and local best Hausdorff–Young constants. Note that the original constants \(H_p(G)\) and \(H_{p}^\mathrm{loc}(G)\) correspond to the case where K is trivial. When K is nontrivial, a priori the new constants \(H_{p,K}(G)\) and \(H_{p,K}^\mathrm{loc}(G)\) might be smaller.
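The local result of Theorem 1.2 can likewise be illustrated numerically for \(n = 1\): for a bump on \(\mathbb{T} = \mathbb{R}/\mathbb{Z}\) concentrated near a point, the Hausdorff–Young ratio is already close to \(B_p\) and stays so as the support shrinks. The discretisation below is our own illustration, not taken from the paper.

```python
import numpy as np

# Theorem 1.2 for n = 1: narrow gaussian bumps on the torus R/Z give a
# Hausdorff-Young ratio ||fhat||_{p'} / ||f||_p close to B_p.

p = 1.5
pp = p / (p - 1.0)
B_p = p ** (1 / (2 * p)) / pp ** (1 / (2 * pp))

N = 1 << 14                               # sample points on [0, 1)
x = np.arange(N) / N
ratios = []
for width in [0.2, 0.05, 0.01]:
    f = np.exp(-np.pi * ((x - 0.5) / width) ** 2)  # bump concentrated near 1/2
    fhat = np.fft.fft(f) / N              # coefficients int_0^1 f(x) e^{-2 pi i k x} dx
    norm_p = (np.sum(f ** p) / N) ** (1 / p)
    norm_pp = np.sum(np.abs(fhat) ** pp) ** (1 / pp)
    ratios.append(norm_pp / norm_p)
    print(f"width {width:5.2f}: ratio = {ratios[-1]:.6f} (B_p = {B_p:.6f})")
```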
However we can prove a universal lower bound, which is independent of the symmetry group K and depends only on p and the dimension of G. Theorem 1.3 Let G be a Lie group and K be a compact group of automorphisms of G. For all \(p \in [1,2]\), $$\begin{aligned} H_{p,K}^\mathrm{loc}(G) \ge (B_p)^{\dim (G)}. \end{aligned}$$ Recall that a function f on a group G is central if \(f(xy) = f(yx)\), that is, if f is invariant under the group \({{\,\mathrm{Inn}\,}}(G)\) of inner automorphisms of G. García-Cuerva, Marco and Parcet [28] and García-Cuerva and Parcet [29] studied the Hausdorff–Young inequality for compact semisimple Lie groups G restricted to central functions; in particular, they obtained the inequality \(H^\mathrm{loc}_{p,{{\,\mathrm{Inn}\,}}(G)}(G) > 0\), which they applied to answer questions about Fourier type and cotype of operator spaces (see also [52]). Theorem 1.3 gives a substantially more precise lower bound to \(H^\mathrm{loc}_{p,{{\,\mathrm{Inn}\,}}(G)}(G)\). As a matter of fact, in this case we can prove that equality holds. Theorem 1.4 Suppose that G is a compact connected Lie group. Then, for all \(p \in [1,2]\), $$\begin{aligned} H_{p,{{\,\mathrm{Inn}\,}}(G)}^\mathrm{loc}(G) = (B_p)^{\dim (G)}. \end{aligned}$$ Note on the one hand that, in the abelian case \(G = \mathbb {T}^n\), all functions are central, so Theorem 1.4 extends Theorem 1.2. On the other hand, it would be interesting to know whether the result holds also without the restriction to central functions. More generally, one may ask whether the inequality in Theorem 1.3 is actually an equality for an arbitrary Lie group G. 
As a matter of fact, the equality $$\begin{aligned} H_{p,K}^\mathrm{loc}(G) = (B_p)^{\dim (G)} \end{aligned}$$ holds for arbitrary G and K whenever \(p' \in 2\mathbb {Z}\), as a consequence of a recent result of Bennett, Bez, Buschenhenke, Cowling and Flock [7] and the relation between the best constants for the Young and the Hausdorff–Young inequalities (see Proposition 2.2 below). In particular, by interpolation, $$\begin{aligned} H_{p,K}^\mathrm{loc}(G) < 1 \end{aligned}$$ for all \(p \in (1,2)\) and arbitrary G and K with \(\dim (G) > 0\). Moreover, the equality $$\begin{aligned} H_{p}(G) = H_p^\mathrm{loc}(G) = (B_p)^{\dim (G)} \end{aligned}$$ (1.2) holds when \(p' \in 2\mathbb {Z}\) for all Lie groups G with a contractive automorphism (which are nilpotent—see [60]), and also for all solvable Lie groups G admitting a chain of closed subgroups $$\begin{aligned} \{e\} = G_0< G_1< \dots< G_{n-1} < G_n = G, \end{aligned}$$ where \(G_j\) is normal in \(G_{j+1}\) and \(G_{j+1} / G_j\) is isomorphic to \(\mathbb {R}\) (here \(n=\dim (G)\)). For many of those groups G, the upper bound \(H_{p}(G) \le (B_p)^{\dim (G)}\) for \(p'\in 2\mathbb {Z}\) was proved in [38], but the question of the lower bound was left open there, except for the Heisenberg groups. Hence Theorem 1.3 proves the sharpness of a number of results in [38]. The Heisenberg groups \(\mathbb {H}_n\) are among the simplest examples of groups in the above class. Nevertheless, determining the value of \(H_{p}(\mathbb {H}_n) = H_{p}^\mathrm{loc}(\mathbb {H}_n)\) appears to be a nontrivial problem when \(p' \notin 2\mathbb {Z}\), and is related to a similar problem for the so-called Weyl transform. 
Recall that the Weyl transform \(\rho \) on \(\mathbb {C}^n\) maps functions on \(\mathbb {C}^n\) to integral operators on \(L^2(\mathbb {R}^n)\) [22], and an inequality of Hausdorff–Young type can be proved for \(\rho \) [38, 57]: for all \(p \in [1,2]\), $$\begin{aligned} \Vert \rho (f) \Vert _{\mathcal {S}^{p'}(L^2(\mathbb {R}^n))} \le C \Vert f\Vert _{L^p(\mathbb {C}^{n})}, \end{aligned}$$ (1.3) where \(\mathcal {S}^{q}(\mathcal {H})\) denotes the qth Schatten class of operators on the Hilbert space \(\mathcal {H}\), and \(C \le 1\). As above, we can define \(W_p(\mathbb {C}^n)\) as the best constant in (1.3), as well as corresponding local and symmetric versions \(W_{p}^\mathrm{loc}(\mathbb {C}^n), W_{p,K}(\mathbb {C}^n), W_{p,K}^\mathrm{loc}(\mathbb {C}^n)\). A scaling argument (see Proposition 5.1 below) then shows that, for all compact subgroups K of the unitary group \({\text {U}}(n)\), $$\begin{aligned} H_{p,K}(\mathbb {H}_n) = B_p \, W_{p,K}(\mathbb {C}^n) \end{aligned}$$ (1.4) (here \({\text {U}}(n)\) acts naturally on \(\mathbb {C}^n\) and the first layer of \(\mathbb {H}_n\)). So the problem of determining the best Hausdorff–Young constants for the Heisenberg group \(\mathbb {H}_n\) is equivalent to the analogous problem for the Weyl transform. In particular, (1.4) and Theorem 1.3 yield that $$\begin{aligned} W_{p,K}(\mathbb {C}^n) \ge (B_p)^{2n} \end{aligned}$$ for all \(p \in [1,2]\). As an indication that equality may well hold, here we prove the following local result. Theorem 1.5 Let K be a compact subgroup of \(\mathrm{U}(n)\). Then, for all \(p \in [1,2]\), $$\begin{aligned} W_{p,K}^\mathrm{loc}(\mathbb {C}^n) \ge (B_p)^{2n}. \end{aligned}$$ Moreover, if \(K \supseteq \mathrm{U}(1) \times \dots \times \mathrm{U}(1)\), then, for all \(p \in [1,2]\), $$\begin{aligned} W_{p,K}^\mathrm{loc}(\mathbb {C}^n) = (B_p)^{2n}. 
\end{aligned}$$ Functions on \(\mathbb {C}^n\) or \(\mathbb {H}_n\) that are invariant under \({\text {U}}(1) \times \dots \times {\text {U}}(1)\) are called polyradial. Equality in Theorem 1.5 is obtained as a consequence of the following weighted Hausdorff–Young inequality for polyradial functions f: $$\begin{aligned} \Vert \rho (f) \Vert _{\mathcal {S}^{p'}(\mathbb {R}^n)} \le (B_p)^{2n} \Vert f e^{(\pi /2)|\cdot |^2} \Vert _{L^p(\mathbb {C}^n)}. \end{aligned}$$ (1.5) Unfortunately we have not found a way to remove the weight and obtain the equality \(W_{p,K}(\mathbb {C}^n) = W_{p,K}^\mathrm{loc}(\mathbb {C}^n)\) for arbitrary \(p \in [1,2]\); note however that \(W_{p,K}(\mathbb {C}^n) = W_{p,K}^\mathrm{loc}(\mathbb {C}^n) = (B_p)^{2n}\) when \(p' \in 2\mathbb {Z}\), as proved in [38]. Both cases where we can prove equalities in Theorems 1.4 and 1.5 for general \(p \in [1,2]\) correspond to Gelfand pairs (see, for example, [12]): indeed, central functions on a compact group G and polyradial functions on the Heisenberg group \(\mathbb {H}_n\) form commutative subalgebras of the respective convolution algebras \(L^1(G)\) and \(L^1(\mathbb {H}_n)\). It seems a reasonable intermediate question to ask for best constants in Hausdorff–Young inequalities in the context of Gelfand pairs, since here the group Fourier transform reduces to the Gelfand transform for the corresponding commutative algebra of invariant functions, which makes the \(L^q\) norm of the Fourier transform in these settings more accessible. Indeed, in both the proofs of Theorems 1.4 and 1.5, this additional commutativity allows one to relate the group Fourier transform and the Weyl transform with the Euclidean Fourier transform, for which the Babenko–Beckner result is available. Regrettably, even in the case of polyradial functions on the Heisenberg group we are not able yet to fully answer the question. Indeed, as we discuss in Sect. 
5, in this case it seems unlikely that the best Hausdorff–Young constant on the Heisenberg group can be obtained by a direct reduction to the corresponding sharp Euclidean estimate, and new ideas appear to be needed. As for the universal lower bound of Theorem 1.3, the intuitive idea behind its proof is that, at smaller and smaller scales, the group structure of a Lie group G looks more and more like the abelian group structure of its Lie algebra \(\mathfrak {g}\), whence \(H_p^\mathrm{loc}(G)\) is likely to be related to \(H_p(\mathfrak {g}) = (B_p)^{\dim (G)}\). Indeed, a scaling argument based on this idea readily yields the analogue of Theorem 1.3 for Young's convolution inequality (see the discussion in Sect. 2 below). This appears to have been overlooked in [38], where a number of upper bounds for Young constants on Lie groups are proved, which are actually equalities in view of this observation. The additional complication with the Hausdorff–Young inequality is that it involves the \(L^q\) norm of the Fourier transform. While it is reasonably clear that, at small scales, the noncommutative convolution on G approximates the commutative convolution on \(\mathfrak {g}\), the same is not so evident for the Fourier transform: indeed, if the group Fourier transform is defined, as it is common, in terms of irreducible unitary representations, then it is not immediately clear how to relate the representation theories of G and \(\mathfrak {g}\) for an arbitrary Lie group G, let alone the corresponding Fourier transforms and \(L^q\) norms thereof. Here we completely bypass the problem, by characterising the \(L^q\) norm of the Fourier transform in terms of an operator norm of a fractional power of an integral operator, acting on functions on G: $$\begin{aligned} \Vert {\hat{f}} \Vert _q^q = \Vert |L_f \Delta ^{1/q}|^q \Vert _{1 \rightarrow \infty }. 
\end{aligned}$$ (1.6) Here \(L_f\) is the operator of convolution on the left by f and \(\Delta \) is the operator of multiplication by the modular function of G. A transplantation argument, not dissimilar from those in [36, 49, 51], allows us to relate the operator \(L_f \Delta ^{1/q}\) on G to its counterpart on \(\mathfrak {g}\) and obtain the desired lower bound. Although it might be evident to some experts in noncommutative integration, we are not aware of the characterisation (1.6) being explicitly observed before. What is interesting about (1.6) is that it allows one to access the \(L^q\) norm of the Fourier transform through properties of a more "geometric" convolution-multiplication operator on G, which appears to be more tractable. As a matter of fact, when dealing with convolution, one can use induction-on-scales methods to completely determine the best local constants for the Young convolution inequality on any Lie group G; this result has been recently proved in [7], as a corollary of a more general result for nonlinear Brascamp–Lieb inequalities. It would be interesting to know whether similar methods could be applied to the Hausdorff–Young inequality on noncommutative Lie groups as well. 1.1 Plan of the paper In Sect. 2 we discuss the definition of the \(L^q\) norm of the Fourier transform for an arbitrary Lie group, by comparing a number of definitions available in the literature, and prove the characterisation (1.6); we also present a proof of the universal lower bound of Theorem 1.3, as well as its analogue for the Young convolution inequality, and discuss relations between best constants for Young and Hausdorff–Young inequalities. The sharp local central Hausdorff–Young inequality for arbitrary compact Lie groups (Theorem 1.4) is proved in Sect. 4; to better explain the underlying idea without delving into technicalities, the proof of the abelian case (Theorem 1.2) is briefly revisited in Sect. 3. Finally, in Sect. 
5 we discuss the relations between Hausdorff–Young constants for the Heisenberg group and the Weyl transform and prove Theorem 1.5, together with the weighted inequality (1.5) for polyradial functions. 2 \(L^q\) norm of the Fourier transform Let G be a Lie group (or, more generally, a separable locally compact group) with a fixed left Haar measure. In order to discuss best Hausdorff–Young constants in this generality, we first need to clarify what is meant by the "Fourier transform" in this setting and how Hausdorff–Young inequalities — even the endpoint ones, such as the Plancherel formula — can be stated in this context. A common way to generalise the Fourier transformation to this setting exploits irreducible unitary representations of G (see, for example, [47] or [23, Chapter 7] for a survey). Namely, let \({\widehat{G}}_\mathrm{u}\) be the "unitary dual" of G, that is, the set of (equivalence classes of) irreducible unitary representations of G, endowed with the Fell topology and the Mackey Borel structure. The (unitary) Fourier transform \(\mathcal {F}_\mathrm{u}f\) of a function \(f \in L^1(G)\) is then defined as the operator-valued function on \({\widehat{G}}_\mathrm{u}\) given by $$\begin{aligned} {\widehat{G}}_\mathrm{u}\ni \pi \mapsto \pi (f) = \int _G f(x) \pi (x) \,dx \in \mathcal {L}(\mathcal {H}_\pi ); \end{aligned}$$ here \(\mathcal {L}(\mathcal {H}_\pi )\) denotes the space of bounded linear operators on the Hilbert space \(\mathcal {H}_\pi \) on which the representation \(\pi \) acts, and integration is with respect to the Haar measure. In case G is unimodular and type I (this includes the cases where G is abelian or compact), the Plancherel formula can be stated in the form $$\begin{aligned} \Vert f\Vert ^2_{L^2(G)} = \int _{{\widehat{G}}_\mathrm{u}} \Vert \pi (f) \Vert _{\mathrm{HS}(\mathcal {H}_\pi )}^2 \,d\pi \end{aligned}$$ (2.1) for all \(f \in L^1 \cap L^2(G)\). 
Here \(\mathrm{HS}(\mathcal {H}_\pi )\) denotes the space of Hilbert–Schmidt operators on \(\mathcal {H}_\pi \), and integration on \({\widehat{G}}_\mathrm{u}\) is with respect to a suitable measure, called the Plancherel measure, which is uniquely determined by the above formula; in addition, the Fourier transformation \(f \mapsto \mathcal {F}_\mathrm{u}f\) extends to an isometric isomorphism between \(L^2(G)\) and the direct integral \(L^2_\mathrm{u}({\widehat{G}}) := \int ^\oplus _{{\widehat{G}}_\mathrm{u}} \mathrm{HS}(\mathcal {H}_\pi ) \,d\pi \). Interpolation then leads to the Hausdorff–Young inequality $$\begin{aligned} \Vert \mathcal {F}_\mathrm{u}f\Vert _{L^{p'}_\mathrm{u}({\widehat{G}})} := \left( \int _{{\widehat{G}}_\mathrm{u}} \Vert \pi (f) \Vert _{\mathcal {S}^{p'}(\mathcal {H}_\pi )}^{p'} \,d\pi \right) ^{1/p'} \le C \Vert f\Vert _{L^p(G)} \end{aligned}$$ (2.2) when \(1< p < 2\), where \(C = 1\); here, for all \(q \in [1,\infty ]\), \(\mathcal {S}^q(\mathcal {H}_\pi )\) denotes the qth Schatten class of operators on \(\mathcal {H}_\pi \), and the operator-valued \(L^q\)-spaces \(L^q_\mathrm{u}({\widehat{G}})\) are defined in terms of measurable fields of operators as in [47]. The fact that the spaces \(L^q_\mathrm{u}({\widehat{G}})\) constitute a complex interpolation family, that is, $$\begin{aligned}{}[L^{q_0}_\mathrm{u}({\widehat{G}}),L^{q_1}_\mathrm{u}({\widehat{G}})]_\theta = L^q_\mathrm{u}({\widehat{G}}) \end{aligned}$$ (2.3) with equal norms for \(q_0,q_1,q \in [1,\infty ]\), \(\theta \in (0,1)\), \(1/q = (1-\theta )/q_0 + \theta /q_1\), readily follows from standard interpolation results for vector-valued Lebesgue spaces and Schatten classes (see, for example, [31, 54, 66]) and the structure of the measurable field of separable Hilbert spaces \(\pi \mapsto \mathcal {H}_\pi \) [23, Proposition 7.19]. 
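The Plancherel formula (2.1) can be sanity-checked in the simplest nonabelian setting, a finite group with counting measure, where the Plancherel measure assigns mass \(d_\pi / |G|\) to each irreducible representation \(\pi\) of dimension \(d_\pi\). The sketch below (our own specialisation, not part of the paper) does this for \(G = S_3\), whose unitary dual consists of the trivial, sign and standard 2-dimensional representations.

```python
import itertools
import numpy as np

# Check of the Plancherel formula (2.1) for the finite group G = S_3 with
# counting measure: sum_g |f(g)|^2 = (1/|G|) sum_pi d_pi ||pi(f)||_HS^2,
# summing over the trivial, sign and standard (2-dim) unitary irreps.

perms = list(itertools.permutations(range(3)))   # the 6 elements of S_3

def perm_matrix(s):
    m = np.zeros((3, 3))
    for i, j in enumerate(s):
        m[j, i] = 1.0                    # P e_i = e_{s(i)}
    return m

# Orthonormal basis of the invariant plane x1 + x2 + x3 = 0, so that the
# standard representation below is genuinely unitary (here: orthogonal).
B = np.array([[1 / np.sqrt(2), 1 / np.sqrt(6)],
              [-1 / np.sqrt(2), 1 / np.sqrt(6)],
              [0.0, -2 / np.sqrt(6)]])

reps = [
    (1, lambda s: np.array([[1.0]])),                                   # trivial
    (1, lambda s: np.array([[round(np.linalg.det(perm_matrix(s)))]])),  # sign
    (2, lambda s: B.T @ perm_matrix(s) @ B),                            # standard
]

rng = np.random.default_rng(0)
f = rng.normal(size=6) + 1j * rng.normal(size=6)  # random f on G

lhs = np.sum(np.abs(f) ** 2)                      # ||f||_{L^2(G)}^2
rhs = 0.0
for d, rho in reps:
    pi_f = sum(c * rho(s) for c, s in zip(f, perms))  # pi(f) = sum_x f(x) pi(x)
    rhs += d * np.sum(np.abs(pi_f) ** 2) / len(perms)

print(lhs, rhs)   # both sides of (2.1)
```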
In the case where G is not unimodular, under suitable type I assumptions it is possible to prove a Plancherel formula similar to (2.1), where the right-hand side is adjusted by means of "formal dimension operators" [19, 26, 39, 40, 62]. Analogous modifications of (2.2) lead to a version of the Hausdorff–Young inequality that has been studied in a number of works [4, 21, 27, 32, 57]. When G is not type I, the above approach to the Plancherel formula based on irreducible unitary representation theory does not work as neatly. This however does not prevent one from studying the Hausdorff–Young inequality. Indeed, what is possibly the first appearance in the literature of the Hausdorff–Young inequality in a noncommutative setting, that is, the work of Kunze [43] for arbitrary unimodular locally compact groups (not necessarily of type I), does not express the Fourier transform in terms of irreducible unitary representations, but uses instead the theory of noncommutative integration (the same theory was used in earlier works of Mautner [50] and Segal [58] to express the Plancherel formula). This point of view was subsequently developed by Terp [63] to cover the case of non-unimodular groups and more recently has been further extended to the context of locally compact quantum groups [13, 15]. One way of thinking of noncommutative \(L^q\) spaces is as complex interpolation spaces between a von Neumann algebra M and its predual \(M_*\) (which play the role of \(L^\infty \) and \(L^1\) respectively) [33, 42, 54, 64]. In general this requires establishing a "compatibility" between M and \(M_*\), which may involve a number of choices, but in our case there appears to be a natural way to proceed (see also [17, 24]). 
Namely, the von Neumann algebra \(\mathrm{VN}(G)\) of G (that is, the weak\({}^*\)-closed \(*\)-subalgebra of \(\mathcal {L}(L^2(G))\) of the operators which commute with right translations) can be identified with the space \(\mathrm{Cv}^2(G)\) of left convolutors of \(L^2(G)\), that is, those distributions on G which are left convolution kernels of \(L^2(G)\)-bounded operators. Moreover, the predual \(\mathrm{VN}(G)_*\) can be identified with the Fourier algebra A(G), an algebra of continuous functions on G defined by Eymard [20] for arbitrary locally compact groups G. Now A(G) and \(\mathrm{Cv}^2(G)\) are naturally compatible as spaces of distributions on G (see [20, Propositions (3.26) and (3.27)]), so we can use complex interpolation to define Fourier–Lebesgue spaces of distributions on G: for \(q\in [1,\infty ]\), we set $$\begin{aligned} \mathcal {F}L^q(G) = \left\{ \begin{array}{ll} A(G) &{}\,\text {if } q=1,\\ \mathrm{Cv}^2(G) &{}\,\text {if } q=\infty ,\\ {[}A(G),\mathrm{Cv}^2(G)]_{1-1/q} &{}\,\text {if } 1< q < \infty . \end{array}\right. \end{aligned}$$ One can check that this definition corresponds to Izumi's left \(L^p\) spaces [33, 34] for the von Neumann algebra \(\mathrm{VN}(G)\) with respect to the Plancherel weight, and therefore it matches the construction given in [13, 15] for quantum groups. In particular \(\mathcal {F}L^2(G) = L^2(G)\) with equality of norms (see [34, Section 5] and [13, Proposition 2.21(iii)]; this corresponds to the Plancherel theorem), while clearly \(L^1(G) \subseteq \mathrm{Cv}^2(G)\) with norm-decreasing embedding. Interpolation then leads to the following formulation of the Hausdorff–Young inequality: \(L^p(G) \subseteq \mathcal {F}L^{p'}(G)\) and $$\begin{aligned} \Vert f \Vert _{\mathcal {F}L^{p'}(G)} \le C \Vert f\Vert _{L^p(G)} \end{aligned}$$ (2.4) where \(C= 1\) and \(p \in [1,2]\). 
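As a concrete illustration of (2.4) and the two endpoints it interpolates, take the finite cyclic group \(G = \mathbb{Z}_n\) with counting measure, whose dual carries Plancherel measure 1/n per character; this specialisation is our own illustration, not part of the paper.

```python
import numpy as np

# Endpoints and an intermediate case of (2.4) on G = Z_n (counting
# measure, Plancherel measure 1/n per character of the dual group):
#   q = infty : ||f||_{FL^infty} = max_k |fhat(k)| <= ||f||_1
#   q = 2     : ||f||_{FL^2} = ||f||_2                (Plancherel)
#   1 < p < 2 : ||f||_{FL^{p'}} <= ||f||_p with C = 1 (Hausdorff-Young)

n = 64
rng = np.random.default_rng(2)
f = rng.normal(size=n) + 1j * rng.normal(size=n)
fhat = np.fft.fft(f)                 # fhat(k) = sum_x f(x) e^{-2 pi i k x / n}

norm_1 = np.sum(np.abs(f))
norm_2 = np.sqrt(np.sum(np.abs(f) ** 2))
fl_inf = np.max(np.abs(fhat))
fl_2 = np.sqrt(np.sum(np.abs(fhat) ** 2) / n)

p, pp = 1.5, 3.0                     # p = 3/2, conjugate exponent p' = 3
fl_pp = (np.sum(np.abs(fhat) ** pp) / n) ** (1 / pp)
norm_p = np.sum(np.abs(f) ** p) ** (1 / p)

print(fl_inf <= norm_1, abs(fl_2 - norm_2) < 1e-9, fl_pp <= norm_p)
```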
We then define the \(L^p\) Hausdorff–Young constant \(H_p(G)\) on the group G as the minimal constant C for which (2.4) holds for all \(f \in L^p(G)\). Similarly, if U is a neighbourhood of the identity in G, we let \(H_p(G;U)\) be the minimal constant C in (2.4) when f is constrained to have support in U, and define the local \(L^p\) Hausdorff–Young constant \(H_p^\mathrm{loc}(G)\) as the infimum of the constants \(H_p(G;U)\) where U ranges over the neighbourhoods of the identity of G. The approach to Hausdorff–Young constants via \(\mathcal {F}L^q\) spaces is consistent with the unitary Fourier transformation approach described above, when the latter is applicable. Indeed, as discussed in [47, Theorems 2.1 and 3.1], in the case where G is unimodular and type I, the unitary Fourier transformation \(\mathcal {F}_\mathrm{u}\) induces isometric isomorphisms \(\mathrm{Cv}^2(G) \cong L^\infty _\mathrm{u}({\widehat{G}})\) and \(A(G) \cong L^1_\mathrm{u}({\widehat{G}})\), besides the Plancherel isomorphism \(L^2(G) \cong L^2_\mathrm{u}({\widehat{G}})\) (analogous results in the nonunimodular case can be found in [26, Theorems 3.48 and 4.12]); so by interpolation \(\mathcal {F}_u\) induces an isometric isomorphism between \(\mathcal {F}L^q(G)\) and \(L^q_\mathrm{u}({\widehat{G}})\) for all \(q \in [1,\infty ]\). Hence defining Hausdorff–Young constants in terms of the inequality (2.2) would lead to the same constants \(H_p(G)\) and \(H_p^\mathrm{loc}(G)\) as those we have defined in terms of \(\mathcal {F}L^q\) spaces. On the other hand, the approach via \(\mathcal {F}L^q\) spaces does not require type I assumptions, or even separability, and can be applied to every locally compact group G. There is an alternative characterisation of the noncommutative \(L^q\) spaces associated to \(\mathrm{VN}(G)\), namely as certain spaces \(L^q_{\mathrm{VN}}({\widehat{G}})\) of (closed, possibly unbounded) operators on \(L^2(G)\). 
This characterisation, which is that originally used in the works of Kunze and Terp on the Hausdorff–Young inequality, corresponds to Hilsum's approach to noncommutative \(L^q\) spaces [30] based on Connes's "spatial derivative" construction [14] (the work of Kunze is actually based on an earlier version of the theory [18, 59] that only applies to semifinite von Neumann algebras). We will not enter into the details of this construction and only recall two important properties. First, if the operator T belongs to \(L^q_{\mathrm{VN}}({\widehat{G}})\) for some \(q \in [1,\infty )\), then \(|T|^q = (T^* T)^{q/2}\) belongs to \(L^1_{\mathrm{VN}}({\widehat{G}})\) and $$\begin{aligned} \Vert T\Vert _{L^q_{\mathrm{VN}}({\widehat{G}})}^q = \Vert |T|^q\Vert _{L^1_{\mathrm{VN}}({\widehat{G}})}. \end{aligned}$$ (2.5) Moreover, for all \(q \in [1,\infty ]\), an isometric isomorphism from \(\mathcal {F}L^q(G)\) to \(L^q_{\mathrm{VN}}({\widehat{G}})\) is given by $$\begin{aligned} f \mapsto L_f \Delta ^{1/q}, \end{aligned}$$ (2.6) where \(L_f\) is the left-convolution operator by f, and we identify the modular function \(\Delta \) of G with the corresponding multiplication operator (see [13, Proposition 2.21(ii)]). Recall that convolution on G is given by $$\begin{aligned} L_f \phi (x) = f * \phi (x) = \int _G f(xy) \, \phi (y^{-1}) \,dy, \end{aligned}$$ at least when f and \(\phi \) are in \(C_c(G)\). Note that, when \(q=p'\), (2.6) matches the definitions by Kunze and by Terp of the \(L^p\) Fourier transformation \(\mathcal {F}_p : L^p(G) \rightarrow L^{p'}_{\mathrm{VN}}({\widehat{G}})\) for \(p \in [1,2]\) [43, 63]. 
In other words, the \(L^p\) Fourier transformation \(\mathcal {F}_p : L^p(G) \rightarrow L^{p'}_{\mathrm{VN}}({\widehat{G}})\) factorises as the inclusion map \(L^p(G) \rightarrow \mathcal {F}L^{p'}(G)\) and the isometric isomorphism \(\mathcal {F}L^{p'}(G) \rightarrow L^{p'}_{\mathrm{VN}}({\widehat{G}})\), whence the compatibility with the Kunze–Terp approach of the above definition of the best Hausdorff–Young constants based on (2.4). Another consequence of the above discussion is the following characterisation of the \(\mathcal {F}L^q(G)\) norm in terms of a more "concrete" operator norm. Proposition 2.1 For all \(q \in [1,\infty )\) and \(f \in \mathcal {F}L^q(G)\), $$\begin{aligned} \Vert f \Vert _{\mathcal {F}L^q(G)} = \Vert |L_f \Delta ^{1/q}|^q \Vert _{L^1(G) \rightarrow L^\infty (G)}^{1/q}. \end{aligned}$$ (2.7) Proof By (2.5) and (2.6), $$\begin{aligned} \Vert f \Vert _{\mathcal {F}L^q(G)} = \Vert L_f \Delta ^{1/q} \Vert _{L^q_{\mathrm{VN}}({\widehat{G}})} = \Vert |L_f \Delta ^{1/q}|^q \Vert _{L^1_{\mathrm{VN}}({\widehat{G}})}^{1/q} = \Vert g \Vert _{A(G)}^{1/q}, \end{aligned}$$ where \(g \in A(G)\) satisfies \(L_g \Delta = |L_f \Delta ^{1/q}|^q\). On the other hand, the operator \(L_g \Delta \) is given by $$\begin{aligned} L_g \Delta \phi (x) = \int _G g(xy) \, \Delta (y^{-1}) \, \phi (y^{-1}) \,dy = \int _G g(xy^{-1}) \, \phi (y) \,dy; \end{aligned}$$ since \(L_g \Delta = |L_f \Delta ^{1/q}|^q\) is a positive operator, the kernel g must be a function of positive type (see, for example, [23, Section 3.3]), whence $$\begin{aligned} \Vert g\Vert _{A(G)} = g(e) = \Vert g\Vert _\infty = \Vert L_g \Delta \Vert _{L^1(G) \rightarrow L^\infty (G)} \end{aligned}$$ and we are done. \(\square \) A classical way of accessing Hausdorff–Young constants is through their relations with best constants in the Young convolution inequalities. 
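Proposition 2.1 admits a direct numerical check on \(G = \mathbb{Z}_n\) (unimodular, so \(\Delta = 1\)): there \(L_f\) is a circulant matrix, \(\Vert f \Vert_{\mathcal{F}L^q(G)}^q = \frac{1}{n} \sum_k |\hat f(k)|^q\), and the \(L^1 \to L^\infty\) norm of a matrix kernel is the largest absolute value of its entries. The identifications below are our own specialisation, not taken from the paper.

```python
import numpy as np

# Check of (2.7) on G = Z_n with counting measure (Delta = 1):
# ||f||_{FL^q}^q = (1/n) sum_k |fhat(k)|^q should equal the largest
# entry (in absolute value) of |L_f|^q, where L_f is the circulant
# convolution matrix of f.

n, q = 17, 2.7
rng = np.random.default_rng(1)
f = rng.normal(size=n) + 1j * rng.normal(size=n)

fhat = np.fft.fft(f)                     # fhat(k) = sum_x f(x) e^{-2 pi i k x / n}
lhs = np.sum(np.abs(fhat) ** q) / n      # ||f||_{FL^q(G)}^q

# Circulant matrix of convolution: (L_f phi)(x) = sum_y f(x - y) phi(y)
L = np.array([[f[(x - y) % n] for y in range(n)] for x in range(n)])

# |L_f|^q = (L_f^* L_f)^{q/2}, computed via the singular value decomposition
U, s, Vh = np.linalg.svd(L)
absLq = Vh.conj().T @ np.diag(s ** q) @ Vh

rhs = np.max(np.abs(absLq))              # || |L_f|^q ||_{L^1 -> L^infty}
print(lhs, rhs)                          # both sides of (2.7)
```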
Recall that, for a possibly nonunimodular group G, the k-linear version of Young's inequality takes the following form: for all \(p_1,\dots ,p_k,r \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j' = 1/r'\), $$\begin{aligned} \Vert f_1 * (\Delta ^{1/p_1'} f_2) * (\Delta ^{1/p_1' + 1/p_2'} f_3) * \cdots * (\Delta ^{1/p_1' + \dots + 1/p_{k-1}'} f_k) \Vert _{L^r(G)} \le C \, \Vert f_1\Vert _{L^{p_1}(G)} \cdots \Vert f_k\Vert _{L^{p_k}(G)} \end{aligned}$$ (2.8) where \(C \le 1\) (see [63, Lemma 1.1], or [38, Corollary 2.3] where the inequality is written for the right Haar measure). As in the case of the Hausdorff–Young inequality, we can define the Young constant \(Y_{p_1,\dots ,p_k}(G)\) for G as the smallest constant C for which (2.8) holds for all \(f_1 \in L^{p_1}(G), \dots , f_k \in L^{p_k}(G)\), as well as the localised versions \(Y_{p_1,\dots ,p_k}(G;U)\) for neighbourhoods U of the identity of G (corresponding to the constraint \({{\,\mathrm{supp}\,}}f_1, \dots , {{\,\mathrm{supp}\,}}f_k \subseteq U\)) and \(Y^\mathrm{loc}_{p_1,\dots ,p_k}(G)\). Note that the above Young inequality (2.8) is "dual" to the following Hölder-type inequality for \(\mathcal {F}L^p\)-spaces: for all \(p_1,\dots ,p_k,r \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j = 1/r\), $$\begin{aligned} \Vert f_1 * (\Delta ^{1/p_1} f_2) * (\Delta ^{1/p_1 + 1/p_2} f_3) * \cdots * (\Delta ^{1/p_1 + \dots + 1/p_{k-1}} f_k) \Vert _{\mathcal {F}L^{r}(G)} \le \Vert f_1 \Vert _{\mathcal {F}L^{p_1}(G)} \cdots \Vert f_k \Vert _{\mathcal {F}L^{p_k}(G)}; \end{aligned}$$ (2.9) this is a rephrasing of Hölder's inequality for Hilsum's noncommutative \(L^p\) spaces, $$\begin{aligned} \Vert T_1 \cdots T_k \Vert _{L^r_{\mathrm{VN}}({\widehat{G}})} \le \prod _{j=1}^k \Vert T_j \Vert _{L^{p_j}_{\mathrm{VN}}({\widehat{G}})} \end{aligned}$$ [30, Proposition 8], via the isomorphism (2.6) from \(\mathcal {F}L^q(G)\) to \(L^q_{\mathrm{VN}}({\widehat{G}})\) and the identities $$\begin{aligned} \Delta ^{\alpha } (f*g) = (\Delta ^\alpha f) * (\Delta ^\alpha g) \qquad \text {and}\qquad L_{\Delta ^{\alpha } f} = \Delta ^{\alpha } L_f \Delta ^{-\alpha }, \end{aligned}$$ (2.10) valid for all \(\alpha \in \mathbb {C}\). Let us also recall that $$\begin{aligned} L_{f^*} = L_{f}^*, \end{aligned}$$ (2.11) where \(f \mapsto f^*\) is the isometric conjugate-linear involution of \(L^1(G)\) given by $$\begin{aligned} f^*(x) = \Delta ^{-1}(x) \, \overline{f(x^{-1})}.
\end{aligned}$$ The proposition below summarises a number of relations between Young and Hausdorff–Young constants that can be found in the literature, at least in particular cases (see, for example, [6] and [38]), as well as corresponding local versions. Proposition 2.2 Let G be a locally compact group. (i) For all \(p_1,\dots ,p_k,q \in [1,2]\) such that \(\sum _j 1/p_j' = 1/q\), $$\begin{aligned} Y_{p_1,\dots ,p_k}(G)&\le H_{q}(G) \, H_{p_1}(G) \cdots H_{p_k}(G), \\ Y_{p_1,\dots ,p_k}^\mathrm{loc}(G)&\le H_{q}^\mathrm{loc}(G) \, H_{p_1}^\mathrm{loc}(G) \cdots H_{p_k}^\mathrm{loc}(G). \end{aligned}$$ (ii) For all \(p \in [1,2)\) such that \(p'=2k\), \(k \in \mathbb {Z}\), if \(p_1=\dots =p_k=p\), then $$\begin{aligned} H_{p}(G)&= Y_{p_1,\dots ,p_k}(G)^{1/k}, \\ H_{p}^\mathrm{loc}(G)&= Y_{p_1,\dots ,p_k}^\mathrm{loc}(G)^{1/k}. \end{aligned}$$ (iii) If N is a closed normal subgroup of G, then, for all \(p_1,\dots ,p_k \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j' \in [0,1]\), $$\begin{aligned} Y_{p_1,\dots ,p_k}(G)&\le Y_{p_1,\dots ,p_k}(N) \, Y_{p_1,\dots ,p_k}(G/N),\\ Y_{p_1,\dots ,p_k}^\mathrm{loc}(G)&\le Y_{p_1,\dots ,p_k}^\mathrm{loc}(N) \, Y_{p_1,\dots ,p_k}^\mathrm{loc}(G/N), \end{aligned}$$ with equality when \(G \cong N \times (G/N)\). Proof (i). For all \(f_1,\dots ,f_k,g \in C_c(G)\), by (2.4) and (2.9), $$\begin{aligned} |\langle f_1 * (\Delta ^{1/p_1'} f_2) * \cdots * (\Delta ^{1/p_1'+\dots +1/p_{k-1}'} f_k) , g \rangle _{L^2(G)}|&\le \bigl \Vert f_1 * (\Delta ^{1/p_1'} f_2) * \cdots * (\Delta ^{1/p_1'+\dots +1/p_{k-1}'} f_k) \bigr \Vert _{\mathcal {F}L^{q}(G)} \, \Vert g \Vert _{\mathcal {F}L^{q'}(G)} \\&\le H_{q}(G) \, H_{p_1}(G) \cdots H_{p_k}(G) \, \Vert f_1 \Vert _{L^{p_1}(G)} \cdots \Vert f_k \Vert _{L^{p_k}(G)} \, \Vert g \Vert _{L^{q}(G)}, \end{aligned}$$ which proves that $$\begin{aligned} \bigl \Vert f_1 * (\Delta ^{1/p_1'} f_2) * \cdots * (\Delta ^{1/p_1'+\dots +1/p_{k-1}'} f_k) \bigr \Vert _{L^{q'}(G)} \le H_{q}(G) \, H_{p_1}(G) \cdots H_{p_k}(G) \, \Vert f_1 \Vert _{L^{p_1}(G)} \cdots \Vert f_k \Vert _{L^{p_k}(G)}, \end{aligned}$$ that is, \(Y_{p_1,\dots ,p_k}(G) \le H_{q}(G) \, H_{p_1}(G) \cdots H_{p_k}(G)\).
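For orientation (an illustration only, not needed in the sequel): on \(G = \mathbb {R}^n\) the Babenko–Beckner theorem gives \(H_p(\mathbb {R}^n) = (B_p)^n\) with $$\begin{aligned} B_p = \bigl ( p^{1/p} / (p')^{1/p'} \bigr )^{1/2}, \end{aligned}$$ so part (i) yields \(Y_{p_1,\dots ,p_k}(\mathbb {R}^n) \le (B_{q} B_{p_1} \cdots B_{p_k})^n\), which is in fact the sharp constant in Young's inequality on \(\mathbb {R}^n\) [6, 9]; thus no constant is lost in part (i) in the Euclidean case.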
Note now that, if \(f_1,\dots ,f_k\) are supported in a neighbourhood U of the identity, then \(f_1 * (\Delta ^{1/p_1'} f_2) * \cdots * (\Delta ^{1/p_1'+\dots +1/p_{k-1}'} f_k)\) is supported in \(U^k\) and, to estimate its \(L^{q'}\) norm, it is enough to test it against functions g that are also supported in \(U^k\); the same argument as above then also gives $$\begin{aligned} Y_{p_1,\dots ,p_k}(G;U) \le H_{q}(G;U^k) \, H_{p_1}(G;U) \cdots H_{p_k}(G;U) \end{aligned}$$ and \(Y_{p_1,\dots ,p_k}^\mathrm{loc}(G) \le H_{q}^\mathrm{loc}(G) \, H_{p_1}^\mathrm{loc}(G) \cdots H_{p_k}^\mathrm{loc}(G)\). (ii). Part (i) gives us the inequality \(H_{p}(G) \ge Y_{p_1,\dots ,p_k}(G)^{1/k}\) and its local version. On the other hand, for all \(f \in C_c(G)\), if we define \({\tilde{f}} = \Delta ^{1/p'} f^*\), then $$\begin{aligned} \Vert {\tilde{f}}\Vert _p = \Vert f\Vert _p \end{aligned}$$ and, by (2.10) and (2.11), $$\begin{aligned} L_{{\tilde{f}}} \Delta ^{1/p'} = (L_f \Delta ^{1/p'})^*. \end{aligned}$$ For all \(j=1,\dots ,k\), let \(f_j\) be either \({\tilde{f}}\) or f, according to whether \(k-j\) is odd or even, and define \(g = f_1 * (\Delta ^{1/p'} f_2) * (\Delta ^{2/p'} f_3) * \cdots * (\Delta ^{(k-1)/p'} f_k)\). Then, since \(p'=2k\), $$\begin{aligned} \begin{aligned} |L_f \Delta ^{1/p'}|^{p'}&= [(L_f \Delta ^{1/p'})^* (L_f \Delta ^{1/p'}) ]^k \\&= (L_{{\tilde{f}}} \Delta ^{1/p'}) (L_{f} \Delta ^{1/p'}) \cdots (L_{{\tilde{f}}} \Delta ^{1/p'}) (L_{f} \Delta ^{1/p'}) \\&= |(L_{f_1} \Delta ^{1/p'})\cdots (L_{f_k} \Delta ^{1/p'})|^2 \end{aligned} \end{aligned}$$ and, by (2.10), $$\begin{aligned} (L_{f_1} \Delta ^{1/p'}) \cdots (L_{f_k} \Delta ^{1/p'}) = L_g \Delta ^{1/2}.
\end{aligned}$$ So \(|L_f \Delta ^{1/p'}|^{p'} = |L_g \Delta ^{1/2}|^2\) and, by (2.7) and (2.8), $$\begin{aligned}&\Vert f \Vert _{\mathcal {F}L^{p'}}^{p'} = \Vert g \Vert _{\mathcal {F}L^2}^2 = \Vert g \Vert _{L^2}^2\\&\quad \le Y_{p_1,\dots ,p_k}(G)^2 \Vert f_1\Vert _{L^p}^2 \cdots \Vert f_k \Vert _{L^p}^2 = Y_{p_1,\dots ,p_k}(G)^2 \Vert f\Vert _{L^p}^{p'}, \end{aligned}$$ which gives the inequality \(H_{p}(G) \le Y_{p_1,\dots ,p_k}(G)^{1/k}\). The same argument also gives \(H_{p}(G;U) \le Y_{p_1,\dots ,p_k}(G;U)^{1/k}\) and \(H_{p}^\mathrm{loc}(G) \le Y_{p_1,\dots ,p_k}^\mathrm{loc}(G)^{1/k}\). (iii). The inequalities are proved by a simple extension of Klein and Russo's argument for the case of semidirect products [38, proof of Lemma 2.4], using the "measure disintegration" in [23, Theorem (2.49)]. In the case of direct products, equalities follow by testing on tensor product functions (see [6, Lemma 5]). \(\square \) The next lemma contains the fundamental approximation results that allow us to relate Hausdorff–Young constants on a Lie group G and on its Lie algebra \(\mathfrak {g}\) by means of a "transplantation" or "blow-up" technique. The Lie algebra \(\mathfrak {g}\) will be considered as an abelian group with addition, and the Lebesgue measure on \(\mathfrak {g}\) is normalised so that the Jacobian determinant of the exponential map \(\exp : \mathfrak {g} \rightarrow G\) is equal to 1 at the origin. The context will make clear whether the notation for convolution, involution and convolution operators (\(f*g\), \(f^*\), \(L_f\)) refers to the group structure of G or the abelian group structure of \(\mathfrak {g}\). Denote by \(C_{pg}([0,\infty ))\) the space of continuous functions \(\Phi : [0,\infty ) \rightarrow \mathbb {C}\) with at most polynomial growth, that is, \(|\Phi (u)| \le C(1+u)^N\) for some \(C,N \in (0,\infty )\) and all \(u \in [0,\infty )\). 
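Returning briefly to Proposition 2.2(ii): its mechanism is easiest to see in the classical unimodular case \(k=2\), \(p=4/3\), \(p'=4\) on \(\mathbb {R}^n\) (an illustration only). Since \(\widehat{f * f^*} = {\hat{f}} \, \overline{{\hat{f}}} = |{\hat{f}}|^2\) and \(\Vert f^* \Vert _{L^{4/3}} = \Vert f \Vert _{L^{4/3}}\), Plancherel's theorem gives $$\begin{aligned} \Vert {\hat{f}} \Vert _{L^4}^4 = \bigl \Vert |{\hat{f}}|^2 \bigr \Vert _{L^2}^2 = \Vert f * f^* \Vert _{L^2}^2 \le Y_{4/3,4/3}(\mathbb {R}^n)^2 \, \Vert f \Vert _{L^{4/3}}^4, \end{aligned}$$ that is, \(H_{4/3}(\mathbb {R}^n) \le Y_{4/3,4/3}(\mathbb {R}^n)^{1/2}\); the twists by powers of \(\Delta \) in the proof above make the same computation work in the nonunimodular setting.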
Lemma 2.3 Let G be a Lie group with Lie algebra \(\mathfrak {g}\) of dimension n, and let \(\exp : \mathfrak {g} \rightarrow G\) be the exponential map. Let \(\Omega \) be an open neighbourhood of the origin in \(\mathfrak {g}\) such that \(\Omega = -\Omega \) and \(\exp |_\Omega : \Omega \rightarrow \exp (\Omega )\) is a diffeomorphism. For all \(f \in C_c(\mathfrak {g})\), \(\lambda \in (0,\infty )\), \(\alpha \in \mathbb {R}\) and \(p \in [1,\infty ]\), define \(f^{\lambda ,p,\alpha } : G \rightarrow \mathbb {C}\) by $$\begin{aligned} f^{\lambda ,p,\alpha }(x) = {\left\{ \begin{array}{ll} \lambda ^{-n/p} \Delta (x)^{-\alpha } f(\lambda ^{-1} \exp |_\Omega ^{-1}(x)) &{}\text {if } x \in \exp (\Omega ),\\ 0 &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$ (2.12) Set also \(f^{\lambda ,p} = f^{\lambda ,p,0}\). Then the following hold. (i) For all \(f \in C_c(\mathfrak {g})\), \(\alpha \in \mathbb {R}\) and \(p \in [1,\infty ]\), $$\begin{aligned} \Vert f^{\lambda ,p,\alpha }\Vert _{L^p(G)} \le C_{\alpha ,p,\Omega } \, \Vert f\Vert _{L^p(\mathfrak {g})} \end{aligned}$$ (2.13) for all \(\lambda \in (0,\infty )\), and $$\begin{aligned} \Vert f^{\lambda ,p,\alpha }\Vert _{L^p(G)} \rightarrow \Vert f\Vert _{L^p(\mathfrak {g})} \end{aligned}$$ (2.14) as \(\lambda \rightarrow 0\). (ii) For all \(k \in \mathbb {N}\), \(\alpha _1,\dots ,\alpha _k,\beta \in \mathbb {R}\), \(f_1,\dots ,f_k,g \in C_c(\mathfrak {g})\), $$\begin{aligned} \langle f_1^{\lambda ,1,\alpha _1} * \cdots * f_k^{\lambda ,1,\alpha _k} , g^{\lambda ,\infty ,\beta } \rangle _{L^2(G)} \rightarrow \langle f_1 * \cdots * f_k , g \rangle _{L^2(\mathfrak {g})} \end{aligned}$$ (2.15) as \(\lambda \rightarrow 0\). 
(iii) For all \(\alpha \in \mathbb {R}\), \(f,g,h \in C_c(\mathfrak {g})\), \(\Phi \in C_{pg}([0,\infty ))\), $$\begin{aligned} \langle \Phi (\Delta ^\alpha L_{(f^{\lambda ,1})^* * f^{\lambda ,1}} \Delta ^\alpha ) g^{\lambda ,2}, h^{\lambda ,2}\rangle _{L^2(G)} \rightarrow \langle \Phi (L_{f^* * f}) g, h\rangle _{L^2(\mathfrak {g})} \end{aligned}$$ (2.16) as \(\lambda \rightarrow 0\). (iv) For all \(\alpha \in \mathbb {R}\), \(f,g,h \in C_c(\mathfrak {g})\) and \(q \in [0,\infty )\), $$\begin{aligned} \lambda ^{-n(q-1)} \langle |L_{f^{\lambda ,\infty }} \Delta ^\alpha |^{q} g^{\lambda ,1}, h^{\lambda ,1}\rangle _{L^2(G)} \rightarrow \langle |L_{f}|^{q} g, h\rangle _{L^2(\mathfrak {g})} \end{aligned}$$ as \(\lambda \rightarrow 0\). Proof Let \(J : \mathfrak {g} \rightarrow \mathbb {R}\) denote the modulus of the Jacobian determinant of \(\exp \), and define \(\Delta _e : \mathfrak {g} \rightarrow (0,\infty )\) to be \(\Delta \circ \exp \). (i). Note that $$\begin{aligned}&\Vert f^{\lambda ,p,\alpha }\Vert _{p}^{p} = \lambda ^{-n} \int _\Omega |f(\lambda ^{-1} X)|^{p} (J\Delta _e^{-\alpha p})(X) \,dX \\ {}&= \int _{\lambda ^{-1}\Omega } |f(X)|^{p} (J \Delta _e^{-\alpha p})(\lambda X) \,dX. \end{aligned}$$ From this, (2.13) follows (with \(C_{\alpha ,p,\Omega }^p = \sup _{\Omega } J \Delta _e^{-\alpha p}\)), and (2.14) follows as well because f is compactly supported and \(\lim _{X \rightarrow 0} (J \Delta _e^{-\alpha p})(X) = J(0) \Delta (e)^{-\alpha p} = 1\). (ii). 
By the Baker–Campbell–Hausdorff formula, $$\begin{aligned} \exp (X_1) \cdots \exp (X_k) = \exp (X_1 + \dots + X_k + B(X_1,\dots ,X_k)), \end{aligned}$$ where \(B(X_1,\dots ,X_k) = \sum _{m \ge 2} B_m(X_1,\dots ,X_k)\) and, for all \(m \ge 2\), \(B_m(X_1,\dots ,X_k)\) is a homogeneous polynomial function of \(X_1,\dots ,X_k\) of degree m; indeed we can find a sufficiently small neighbourhood \(\tilde{\Omega }\subseteq \Omega \) of the origin in \(\mathfrak {g}\) so that, if \(X_1,\dots ,X_k \in \tilde{\Omega }\), then \(X_1 + \dots + X_k + B(X_1,\dots ,X_k) \in \Omega \). Note that $$\begin{aligned}&\langle f_1^{\lambda ,1,\alpha _1} * \cdots * f_k^{\lambda ,1,\alpha _k} , g^{\lambda ,\infty ,\beta } \rangle _{L^2(G)} \\&\quad = \int _{G^k} f_1^{\lambda ,1,\alpha _1}(x_1) \, \cdots \, f_k^{\lambda ,1,\alpha _k}(x_k) \, \overline{g^{\lambda ,\infty ,\beta }(x_1\cdots x_k)} \,dx_1 \dots \,dx_k \end{aligned}$$ If \(\lambda \) is sufficiently small that \(\bigcup _{j=1}^k \lambda {{\,\mathrm{supp}\,}}f_j \subseteq \exp (\tilde{\Omega })\), then the last integral may be rewritten as $$\begin{aligned} \int _{\mathfrak {g}^k} {\bar{g}}\Biggl (\sum _{j=1}^k X_j + \lambda ^{-1}B(\lambda X_1,\dots ,\lambda X_k)\Biggr ) \prod _{j=1}^k ( f_j(X_j) (J \Delta _e^{-\alpha _j-\beta })(\lambda X_j) ) \,dX_1\cdots \,dX_k. \end{aligned}$$ Since \(\lambda ^{-1} B(\lambda X_1,\dots ,\lambda X_k) = \lambda \sum _{m \ge 2} \lambda ^{m-2} B_m(X_1,\dots ,X_k)\) tends to 0 as \(\lambda \rightarrow 0\), the last integral tends to \(\langle f_1 * \cdots * f_k , g \rangle _{L^2(\mathfrak {g})}\). (iii). 
Note first that \(\Delta ^\alpha L_{(f^{\lambda ,1})^* * f^{\lambda ,1}} \Delta ^\alpha \) is a nonnegative self-adjoint operator on \(L^2(G)\) (which may be unbounded when G is nonunimodular) and that, for all \(N \in \mathbb {N}\), the \(L^2\)-domain of \((\Delta ^\alpha L_{(f^{\lambda ,1})^* * f^{\lambda ,1}} \Delta ^\alpha )^N\) contains all compactly supported functions in \(L^2(G)\), so the left-hand side of (2.16) is well-defined. Note moreover that $$\begin{aligned} (f^{\lambda ,p,\alpha })^* = (f^*)^{\lambda ,p,1-\alpha } \end{aligned}$$ (2.17) whence, by (2.10), $$\begin{aligned} \Delta ^\alpha L_{(f^{\lambda ,1})^* * f^{\lambda ,1}} \Delta ^\alpha = L_{(f^*)^{\lambda ,1,1-\alpha } * f^{\lambda ,1,-\alpha }} \, \Delta ^{2\alpha }. \end{aligned}$$ So, in the case where \(\Phi (u) = u^N\) for some \(N \in \mathbb {N}\), (2.16) follows from (2.15). Note that, by shrinking \(\Omega \) if necessary, we may assume that \(\Omega \) and \(\exp (\Omega )\) have compact closures in \(\mathfrak {g}\) and G, and moreover the topological boundary of \(\exp (\Omega )\) has null Haar measure (indeed shrinking \(\Omega \) does not change the left-hand side of (2.16) for \(\lambda \) sufficiently small). As in [49, proof of Theorem 5.2], we can now extend the diffeomorphism \(\phi := \exp |_\Omega ^{-1} : \exp (\Omega ) \rightarrow \Omega \) to a diffeomorphism \(\phi _* : U \rightarrow V\), where U and V are open sets in G and \(\mathfrak {g}\) containing \(\exp (\Omega )\) and \(\Omega \), and moreover \(G \setminus U\) has null Haar measure. Finally, let \(J_* : V \rightarrow (0,\infty )\) be the density of the push-forward via \(\phi _*\) of the Haar measure with respect to the Lebesgue measure (so \(J_* = J\) on \(\Omega \)), and define an isometric isomorphism \(\Psi : L^2(G) \rightarrow L^2(V)\) by $$\begin{aligned} \Psi (F) = (F \circ \phi _*^{-1}) \, J_*^{1/2} .
\end{aligned}$$ Since \(A_\lambda := \Delta ^\alpha L_{(f^{\lambda ,1})^* * f^{\lambda ,1}} \Delta ^\alpha \) is a self-adjoint operator on \(L^2(G)\), we can define a self-adjoint operator \({\tilde{A}}_\lambda \) on \(L^2(\mathfrak {g}) = L^2(V) \oplus L^2(\mathfrak {g} \setminus V)\) by $$\begin{aligned} {\tilde{A}}_\lambda = \begin{pmatrix} \Psi A_\lambda \Psi ^{-1} &{} 0 \\ 0 &{} 0 \end{pmatrix} \end{aligned}$$ and another self-adjoint operator \({\hat{A}}_\lambda \) on \(L^2(\mathfrak {g})\) by \({\hat{A}}_\lambda = T_\lambda ^{-1} {\tilde{A}}_\lambda T_\lambda \), where \(T_\lambda \) is the isometry on \(L^2(\mathfrak {g})\) defined by $$\begin{aligned} T_\lambda f(X) = \lambda ^{-n/2} f(X/\lambda ). \end{aligned}$$ It is now not difficult to check that, for all \(\Phi \in C_{pg}([0,\infty ))\) and \(g,h \in C_c(\mathfrak {g})\), $$\begin{aligned} \langle \Phi ({\hat{A}}_\lambda ) g, h \rangle _{L^2(\mathfrak {g})} = \langle \Phi (A_\lambda ) g^{\lambda ,2}, h^{\lambda ,2} \rangle _{L^2(G)} \end{aligned}$$ (2.18) for all \(\lambda \) sufficiently small that \({{\,\mathrm{supp}\,}}T_\lambda g \cup {{\,\mathrm{supp}\,}}T_\lambda h \subseteq \Omega \). For all \(N \in \mathbb {N}\), from the cases \(\Phi (u) = u^N\) and \(\Phi (u) = u^{2N}\) of (2.16) and (2.18) it follows that, for all \(g,h \in C_c(\mathfrak {g})\), $$\begin{aligned} \langle {\hat{A}}_\lambda ^N g, h \rangle _{L^2(\mathfrak {g})} \rightarrow \langle A^N g,h \rangle _{L^2(\mathfrak {g})}, \qquad \Vert {\hat{A}}_\lambda ^N g \Vert _{L^2(\mathfrak {g})} \rightarrow \Vert A^N g \Vert _{L^2(\mathfrak {g})} \end{aligned}$$ (2.19) as \(\lambda \rightarrow 0\), where \(A := L_{f^* * f}\). 
In particular, from this and the density of \(C_c(\mathfrak {g})\) in \(L^2(\mathfrak {g})\) it is not difficult to conclude that, for all \(g \in C_c(\mathfrak {g})\), $$\begin{aligned} {\hat{A}}_\lambda ^N g \rightarrow A^N g \end{aligned}$$ (2.20) in \(L^2\)-norm as \(\lambda \rightarrow 0\) [10, Proposition 3.32]. Since A is a bounded self-adjoint operator on \(L^2(\mathfrak {g})\), \(C_c(\mathfrak {g})\) is a core for A and [67, Theorem 9.16] implies that $$\begin{aligned} {\hat{A}}_\lambda \rightarrow A \end{aligned}$$ in the sense of strong resolvent convergence as \(\lambda \rightarrow 0\). In turn this implies that, for all bounded continuous functions \(\Phi : [0,\infty ) \rightarrow \mathbb {C}\), $$\begin{aligned} \Phi ({\hat{A}}_\lambda ) \rightarrow \Phi (A) \end{aligned}$$ (2.21) in the sense of strong operator convergence as \(\lambda \rightarrow 0\) [67, Theorem 9.17]. Suppose now that \(\Phi \in C_{pg}([0,\infty ))\). Then we can write \(\Phi (u) = \tilde{\Phi }(u) \, (1+u^N)\) for some bounded continuous function \(\tilde{\Phi }: [0,\infty ) \rightarrow \mathbb {C}\) and \(N \in \mathbb {N}\). For all \(g,h \in C_c(\mathfrak {g})\), by (2.18), $$\begin{aligned} \langle \Phi (A_\lambda ) g^{\lambda ,2}, h^{\lambda ,2} \rangle _{L^2(G)}= & {} \langle \Phi ({\hat{A}}_\lambda ) g, h \rangle _{L^2(\mathfrak {g})}\\= & {} \langle \tilde{\Phi }({\hat{A}}_\lambda ) g, h \rangle _{L^2(\mathfrak {g})} + \langle \tilde{\Phi }({\hat{A}}_\lambda ) g, {\hat{A}}_\lambda ^N h \rangle _{L^2(\mathfrak {g})} \end{aligned}$$ for all \(\lambda \) sufficiently small, and the last quantity tends to $$\begin{aligned} \langle \tilde{\Phi }(A) g, h \rangle _{L^2(\mathfrak {g})} + \langle \tilde{\Phi }(A) g, A^N h \rangle _{L^2(\mathfrak {g})} = \langle \Phi (A) g, h \rangle _{L^2(\mathfrak {g})} \end{aligned}$$ as \(\lambda \rightarrow 0\), by (2.20) and (2.21). (iv). This is just a restatement of part (iii) in the case where \(\Phi (u) = u^{q/2}\). 
\(\square \) We can finally prove the enunciated relation between Hausdorff–Young constants of a Lie group and its Lie algebra. We find it convenient to state the result together with its analogue for Young constants, since both follow from the approximation results of Lemma 2.3. Part (ii) of Proposition 2.4, together with the following Remark 2.5 and the Babenko–Beckner theorem for \(\mathbb {R}^n\), proves Theorem 1.3. As in [60], we define a contractive automorphism of a locally compact group G as an automorphism \(\tau \) such that \(\lim _{k\rightarrow \infty } \tau ^k(x)=e\) for all \(x \in G\). Proposition 2.4 Let G be a locally compact group. (i) For all \(p_1,\dots ,p_k \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j' \in [0,1]\), $$\begin{aligned} Y_{p_1,\dots ,p_k}(G) \ge Y_{p_1,\dots ,p_k}^\mathrm{loc}(G), \end{aligned}$$ (2.22) with equality when G has a contractive automorphism; moreover, if G is a Lie group with Lie algebra \(\mathfrak {g}\), $$\begin{aligned} Y_{p_1,\dots ,p_k}^\mathrm{loc}(G) \ge Y_{p_1,\dots ,p_k}(\mathfrak {g}). \end{aligned}$$ (2.23) (ii) For all \(p \in [1,2]\), $$\begin{aligned} H_p(G) \ge H_p^\mathrm{loc}(G), \end{aligned}$$ (2.24) with equality if G has a contractive automorphism. Moreover, when G is an n-dimensional Lie group with Lie algebra \(\mathfrak {g}\), $$\begin{aligned} H_p^\mathrm{loc}(G) \ge H_p(\mathfrak {g}). \end{aligned}$$ (2.25) Proof (i). The first inequality is obvious. Moreover, in case G has a contractive automorphism, the reverse inequality follows from a scaling argument. Indeed, for all automorphisms \(\gamma \) of G, there exists \(\kappa _\gamma \in (0,\infty )\) such that the push-forward via \(\gamma \) of the Haar measure on G is \(\kappa _\gamma \) times the Haar measure. So, if \(R_\gamma f = f \circ \gamma ^{-1}\), then $$\begin{aligned} \Vert R_\gamma f \Vert _{L^p(G)} = \kappa _\gamma ^{-1/p} \Vert f \Vert _{L^p(G)} \qquad \text {and}\qquad (R_\gamma f) * (R_\gamma g) = \kappa _\gamma ^{-1} \, R_\gamma (f * g), \end{aligned}$$ while \(\Delta \circ \gamma = \Delta \), whence it is immediate that both sides of Young's inequality (2.8) are scaled by the same factor when each \(f_j\) is replaced with \(R_\gamma f_j\).
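As an aside, we recall a standard supply of examples (for illustration only): on \(G = \mathbb {R}^n\), the dilation \(\tau (x) = x/2\) is a contractive automorphism, with \(\kappa _\tau = 2^n\); more generally, if G is a simply connected nilpotent Lie group whose Lie algebra is graded, \(\mathfrak {g} = \mathfrak {g}_1 \oplus \dots \oplus \mathfrak {g}_m\) with \([\mathfrak {g}_i, \mathfrak {g}_j] \subseteq \mathfrak {g}_{i+j}\), then the anisotropic dilations $$\begin{aligned} \delta _t \bigl ( \exp (X_1 + \dots + X_m) \bigr ) = \exp (t X_1 + t^2 X_2 + \dots + t^m X_m), \qquad X_j \in \mathfrak {g}_j, \end{aligned}$$ are automorphisms of G, contractive for \(t \in (0,1)\). For all such groups, then, the inequalities (2.22) and (2.24) are in fact equalities.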
Now, by density, the value of the best constant \(Y_{p_1,\dots ,p_k}(G)\) may be determined by testing (2.8) on arbitrary \(f_1,\dots ,f_k \in C_c(G)\). Moreover, if \(\tau \) is a contractive automorphism of G and U is any neighbourhood of the identity, then, for all compact subsets \(K \subseteq G\), there exists \(N \in \mathbb {N}\) such that \(\tau ^N(K) \subseteq U\) [60, Lemma 1.4(iv)]; in particular, for all \(f_1,\dots ,f_k \in C_c(G)\), by taking \(\gamma = \tau ^N\) for sufficiently large \(N \in \mathbb {N}\), we see that \({{\,\mathrm{supp}\,}}R_\gamma f_j \subseteq U\). This shows that \(Y_{p_1,\dots ,p_k}(G) \le Y_{p_1,\dots ,p_k}(G;U)\) for all neighbourhoods U of \(e \in G\), and consequently \(Y_{p_1,\dots ,p_k}(G) \le Y_{p_1,\dots ,p_k}^\mathrm{loc}(G)\). As for the second inequality, let U be an arbitrary neighbourhood of \(e \in G\). To conclude, it is sufficient to show that \(Y_{p_1,\dots ,p_k}(\mathfrak {g}) \le Y_{p_1, \dots , p_k}(G;U)\). Let \(r \in [1,\infty ]\) be defined by \(\sum _j 1/p_j' = 1/r'\). Consider \(g,f_1,\dots ,f_k \in C_c(\mathfrak {g})\). For all \(\lambda \in (0,\infty )\), \(\alpha \in \mathbb {R}\) and \(p\in [1,\infty ]\), define \(g^{\lambda ,p},f_j^{\lambda ,p},f_j^{\lambda ,p,\alpha }\) as in Lemma 2.3. Then \(\bigcup _{j=1}^k {{\,\mathrm{supp}\,}}f_j^{\lambda ,1} \subseteq U\) for all sufficiently small \(\lambda \), and therefore, by (2.8), $$\begin{aligned} \bigl \Vert f_1^{\lambda ,1} * (\Delta ^{1/p_1'} f_2^{\lambda ,1}) * \cdots * (\Delta ^{1/p_1'+\dots +1/p_{k-1}'} f_k^{\lambda ,1}) \bigr \Vert _{L^r(G)} \le Y_{p_1, \dots , p_k}(G;U) \, \Vert f_1^{\lambda ,1} \Vert _{L^{p_1}(G)} \cdots \Vert f_k^{\lambda ,1} \Vert _{L^{p_k}(G)}. \end{aligned}$$ Note that \(\sum _{j=1}^k 1/p_j + 1/r' = k\). So the last inequality can be rewritten as $$\begin{aligned}&\left\langle f_1^{\lambda ,1,\alpha _1} * \dots * f_k^{\lambda ,1,\alpha _k} , g^{\lambda ,\infty } \right\rangle _{L^2(G)} \\&\quad \le Y_{p_1, \dots , p_k}(G;U) \, \Vert f_1^{\lambda ,p_1} \Vert _{L^{p_1}(G)} \cdots \Vert f_k^{\lambda ,p_k} \Vert _{L^{p_k}(G)} \Vert g^{\lambda ,r'} \Vert _{L^{r'}(G)}, \end{aligned}$$ where \(\alpha _j = -\sum _{l=1}^{j-1} 1/p_l'\).
Hence, by Lemma 2.3, by taking the limit as \(\lambda \rightarrow 0\), we obtain $$\begin{aligned} \langle f_1 * \dots * f_k , g \rangle _{L^2(\mathfrak {g})} \le Y_{p_1, \dots , p_k}(G;U) \, \Vert f_1\Vert _{L^{p_1}(\mathfrak {g})} \cdots \Vert f_k \Vert _{L^{p_k}(\mathfrak {g})} \Vert g \Vert _{L^{r'}(\mathfrak {g})}. \end{aligned}$$ The arbitrariness of \(f_1,\dots ,f_k,g \in C_c(\mathfrak {g})\) implies that \(Y_{p_1,\dots ,p_k}(\mathfrak {g}) \le Y_{p_1, \dots , p_k}(G;U)\). (ii). Much as in part (i), the first inequality is obvious, and equality follows from a rescaling argument when G has a contractive automorphism, since $$\begin{aligned} \Vert R_\gamma f \Vert _{\mathcal {F}L^q(G)} = \kappa _\gamma ^{-1/q'} \Vert f \Vert _{\mathcal {F}L^q(G)} \end{aligned}$$ for all automorphisms \(\gamma \) of G. As for the second inequality, we need to show that \(H_p(\mathfrak {g}) \le H_p(G;U)\) for all neighbourhoods U of \(e \in G\). Set \(q = p'\) and note that, by (2.7), $$\begin{aligned} \Vert f\Vert _{\mathcal {F}L^{q}(G)}^{q} = \sup _{\Vert g\Vert _{L^1(G)}, \Vert h\Vert _{L^1(G)} \le 1} \langle |L_f \Delta ^{1/q}|^{q} g, h \rangle _{L^2(G)}. \end{aligned}$$ For \(\lambda \in (0,\infty )\), \(r \in [1,\infty ]\) and \(f,g,h \in C_c(\mathfrak {g})\), we define \(f^{\lambda ,r},g^{\lambda ,r},h^{\lambda ,r} : G \rightarrow \mathbb {C}\) as in Lemma 2.3.
For all sufficiently small \(\lambda \), \({{\,\mathrm{supp}\,}}f^{\lambda ,r} \subseteq U\) and therefore $$\begin{aligned} \langle |L_{f^{\lambda ,\infty }} \Delta ^{1/q}|^{q} g^{\lambda ,1}, h^{\lambda ,1} \rangle _{L^2(G)} \le H_p(G;U)^{q} \Vert f^{\lambda ,\infty } \Vert _{L^p(G)}^{q} \Vert g^{\lambda ,1}\Vert _{L^1(G)} \Vert h^{\lambda ,1}\Vert _{L^1(G)}, \end{aligned}$$ that is, $$\begin{aligned}&\lambda ^{-n(q-1)} \langle |L_{f^{\lambda ,\infty }} \Delta ^{1/q}|^{q} g^{\lambda ,1}, h^{\lambda ,1} \rangle _{L^2(G)}\\&\quad \le H_p(G;U)^{q} \Vert f^{\lambda ,p} \Vert _{L^p(G)}^{q} \Vert g^{\lambda ,1}\Vert _{L^1(G)} \Vert h^{\lambda ,1}\Vert _{L^1(G)}. \end{aligned}$$ As \(\lambda \rightarrow 0\), by Lemma 2.3 we then deduce that $$\begin{aligned} \langle |L_{f}|^{q} g, h \rangle _{L^2(\mathfrak {g})} \le H_p(G;U)^{q} \Vert f \Vert _{L^p(\mathfrak {g})}^{q} \Vert g\Vert _{L^1(\mathfrak {g})} \Vert h\Vert _{L^1(\mathfrak {g})}. \end{aligned}$$ By the arbitrariness of \(g,h \in C_c(\mathfrak {g})\), $$\begin{aligned} \Vert f\Vert _{\mathcal {F}L^q(\mathfrak {g})} \le H_p(G;U) \Vert f\Vert _{L^p(\mathfrak {g})} \end{aligned}$$ and finally, by the arbitrariness of \(f \in C_c(\mathfrak {g})\), \(H_p(\mathfrak {g}) \le H_p(G;U)\). \(\square \) Remark 2.5 The argument in Proposition 2.4 can be extended to the case of inequalities restricted to particular classes of functions on G. In particular, suppose that the class of functions is determined by invariance with respect to the action of a compact group K of automorphisms of G. Then it is possible to choose a positive inner product on \(\mathfrak {g}\) so that K acts on \(\mathfrak {g}\) by isometries (take any inner product on \(\mathfrak {g}\) and average it with respect to the action of K), and the correspondence (2.12) preserves K-invariance whenever \(\Omega \) is a ball centred at the origin. Moreover the class of functions on \(\mathfrak {g}\) under consideration contains all radial functions. 
Since the extremisers for Young and Hausdorff–Young constants on \(\mathfrak {g}\) are centred gaussians [6, 9], which may be assumed to be radial, the resulting lower bounds do not change. This observation completes the proof of Theorem 1.3. Remark 2.6 While the inequalities (2.22) and (2.24) may be strict for certain Lie groups G (note that, when G is compact, the global Young and Hausdorff–Young constants are equal to 1), it appears natural to ask whether the inequalities (2.23) and (2.25) are actually equalities. We are not aware of any counterexample. As a matter of fact, a particular case of a recent result of Bennett, Bez, Buschenhenke, Cowling and Flock about nonlinear Brascamp–Lieb inequalities [7] entails that equality always holds in (2.23) for all Lie groups G: $$\begin{aligned} Y_{p_1,\dots ,p_k}^\mathrm{loc}(G) = Y_{p_1,\dots ,p_k}(\mathfrak {g}) \end{aligned}$$ for all\(p_1,\dots ,p_k \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j' \in [0,1]\). By Proposition 2.2(ii), this in turn implies that $$\begin{aligned} H_p^\mathrm{loc}(G) = H_p(\mathfrak {g}) = (B_p)^{\dim G} \end{aligned}$$ for all \(p \in [1,2]\) such that \(p'\) is an even integer, and a fortiori the same equality holds for the K-invariant version of the constants for any compact group of automorphisms K. As a consequence of the above results, we strengthen some results of Klein and Russo [38, Corollaries 2.5' and 2.8], where upper bounds for Young and Hausdorff–Young constants are obtained for particular solvable Lie groups. Klein and Russo explicitly remark that they are able to obtain equalities instead of upper bounds in the particular case of the Heisenberg groups and only for special exponents (through a different argument, involving the analysis of the Weyl transform) and seem to leave the general case open. 
Here instead we obtain equality for all the Young constants, as well as a lower bound for the Hausdorff–Young constants (which becomes an equality in the case of Babenko's exponents). Corollary 2.7 Let G be an n-dimensional solvable Lie group admitting a chain of closed subgroups $$\begin{aligned} \{e\} = G_0< \dots < G_n = G, \end{aligned}$$ where \(G_j\) is normal in \(G_{j+1}\) and \(G_{j+1}/G_j\) is isomorphic to \(\mathbb {R}\). Denote by \(B_p\) the Babenko–Beckner constant. Then the following hold. (i) For all \(p_1,\dots ,p_k,r \in [1,\infty ]\) such that \(\sum _{j=1}^k 1/p_j' = 1/r'\), $$\begin{aligned} Y_{p_1,\dots ,p_k}(G) = Y^\mathrm{loc}_{p_1,\dots ,p_k}(G) = (B_{r'} B_{p_1} \cdots B_{p_k})^n. \end{aligned}$$ (ii) For all \(p \in [1,2]\), $$\begin{aligned} H_p(G) \ge H_p^\mathrm{loc}(G) \ge (B_p)^n, \end{aligned}$$ with equalities if \(p' \in 2\mathbb {Z}\). Proof (i). The inequality \(Y_{p_1,\dots ,p_k}(G) \le (B_{r'} B_{p_1} \cdots B_{p_k})^n\) can be obtained, as in [38], by iteratively applying Proposition 2.2(iii) and the fact that \(Y_{p_1,\dots ,p_k}(\mathbb {R}) = B_{r'} B_{p_1} \cdots B_{p_k}\) [6, 9]. On the other hand, by Propositions 2.4(i) and 2.2(iii), $$\begin{aligned} Y_{p_1,\dots ,p_k}(G) \ge Y^\mathrm{loc}_{p_1,\dots ,p_k}(G) \ge Y_{p_1,\dots ,p_k}(\mathfrak {g}) = Y_{p_1,\dots ,p_k}(\mathbb {R})^n = (B_{r'} B_{p_1} \cdots B_{p_k})^n, \end{aligned}$$ and we are done. (ii). From part (i) and Proposition 2.2(ii), we deduce immediately that \(H_p(G) = (B_p)^n\) whenever \(p'\) is an even integer. On the other hand, by Proposition 2.4(ii), $$\begin{aligned} H_{p}(G) \ge H^\mathrm{loc}_{p}(G) \ge H_{p}(\mathfrak {g}) = (B_p)^n, \end{aligned}$$ by [6], and we are done.
\(\square \) 3 The n-torus \(\mathbb {T}^n\) revisited The proof of the central local Hausdorff–Young theorem on a compact Lie group mimics that of the local Hausdorff–Young theorem on \(\mathbb {T}^n\), and we present this case first to make the proof of the general case more evident. Proof of Theorem 1.2 There is no loss of generality in supposing functions smooth; this ensures that all the sums and integrals that occur in the proof below converge. Let us identify \(\mathbb {T}^n\) with the subset \((-1/2, 1/2]^n\) of \(\mathbb {R}^n\). For \(f \in L^1(\mathbb {T}^n)\), the Fourier transform \({\hat{f}} : \mathbb {Z}^n \rightarrow \mathbb {C}\) of f is given by $$\begin{aligned} {\hat{f}}(\mu ) = \int _{\mathbb {T}^n} f(x) \, e^{2\pi i \mu \cdot x} \, dx. \end{aligned}$$ for all \(\mu \in \mathbb {Z}^n\). We denote by V the open subset \((-1/2,1/2)^n\) of \(\mathbb {R}^n\). For any function \(f \in L^1(\mathbb {T}^n)\) such that \({{\,\mathrm{supp}\,}}f \subseteq V\), we define F on \(\mathbb {R}^n\) by $$\begin{aligned} F(x) = {\left\{ \begin{array}{ll} f(x) &{} \text {when }x \in V, \\ 0 &{} \text {otherwise;} \end{array}\right. } \end{aligned}$$ we say that F corresponds to f. Clearly \(F \in L^1(\mathbb {R}^n)\) and \({\hat{F}}|_{\mathbb {Z}^n} = {\hat{f}}\); further, if f is smooth, so is F. We are going to transfer the sharp Hausdorff–Young theorem for F to f. The Plancherel formulae for Fourier series and Fourier integrals imply that $$\begin{aligned} \Vert {\hat{f}} \Vert _{\ell ^2(\mathbb {Z}^n)} = \Vert f \Vert _{L^2(\mathbb {T}^n)} = \Vert F \Vert _{L^2(\mathbb {R}^n)} = \Vert {\hat{F}} \Vert _{L^2(\mathbb {R}^n)} . \end{aligned}$$ In particular, since \({\hat{F}}|_{\mathbb {Z}^n} = {\hat{f}}\), $$\begin{aligned} \Vert {\hat{F}}|_{\mathbb {Z}^n} \Vert _{\ell ^2(\mathbb {Z}^n)} \le \Vert {\hat{F}} \Vert _{L^2(\mathbb {R}^n)} . 
\end{aligned}$$ (3.1) Further, trivially, $$\begin{aligned} \Vert {\hat{F}}|_{\mathbb {Z}^n} \Vert _{\ell ^\infty (\mathbb {Z}^n)} \le \Vert {\hat{F}} \Vert _{L^\infty (\mathbb {R}^n)}. \end{aligned}$$ If we could interpolate between these inequalities, then it would follow that $$\begin{aligned} \Vert {\hat{F}}|_{\mathbb {Z}^n} \Vert _{\ell ^q(\mathbb {Z}^n)} \le \Vert {\hat{F}} \Vert _{L^q(\mathbb {R}^n)} \end{aligned}$$ (3.2) for all \(q \in [2,\infty ]\) and \({\hat{F}}\) in \(L^{q}(\mathbb {R}^n)\), whence $$\begin{aligned} \Vert {\hat{f}} \Vert _{\ell ^{p'}(\mathbb {Z}^n)} = \Vert {\hat{F}}|_{\mathbb {Z}^n} \Vert _{\ell ^{p'}(\mathbb {Z}^n)} \le \Vert {\hat{F}} \Vert _{L^{p'}(\mathbb {R}^n)} \le (B_p)^n \Vert F \Vert _{L^p(\mathbb {R}^n)} = (B_p)^n \Vert f \Vert _{L^p(\mathbb {T}^n)} , \end{aligned}$$ and we would be done. But we cannot interpolate, because (3.1) does not hold for all \({\hat{F}}\) in \(L^{2}(\mathbb {R}^n)\), or even for all \({\hat{F}}\) in a dense subspace of \(L^{2}(\mathbb {R}^n)\), but only for those \({\hat{F}}\) where \({{\,\mathrm{supp}\,}}F \subseteq V\); inter alia, this ensures that \({\hat{F}}\) is smooth so that \({\hat{F}}|_{\mathbb {Z}^n}\) is well-defined. So we prove a variant of (3.2). Let U be a small neighbourhood of 0 in \(\mathbb {T}^n\) such that \({\overline{U}} \subseteq V\), and take \(\phi \in A(\mathbb {R}^n)\) such that \({{\,\mathrm{supp}\,}}\phi \subseteq V \) and \(\phi (x) = 1\) for all \(x \in U\). We now define $$\begin{aligned} T G = ({\hat{\phi }} * G)|_{\mathbb {Z}^n} \qquad \forall G \in L^1(\mathbb {R}^n) + L^\infty (\mathbb {R}^n). \end{aligned}$$ We claim that when \(q \in [2, \infty ]\), $$\begin{aligned} \Vert TG \Vert _{\ell ^q(\mathbb {Z}^n)} \le \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} \Vert G \Vert _{L^q(\mathbb {R}^n)} \qquad \forall G \in L^{q}(\mathbb {R}^n).
\end{aligned}$$ (3.3) To prove the claim, observe that the inverse Fourier transform of \({\hat{\phi }} * G\) is supported in V, whence $$\begin{aligned} \Vert TG \Vert _{\ell ^2(\mathbb {Z}^n)} = \Vert ({\hat{\phi }} * G)|_{\mathbb {Z}^n} \Vert _{\ell ^2(\mathbb {Z}^n)} \le \Vert {\hat{\phi }} * G \Vert _{L^2(\mathbb {R}^n)} \le \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} \Vert G \Vert _{L^2(\mathbb {R}^n)}, \end{aligned}$$ for all \(G \in L^{2}(\mathbb {R}^n)\), by (3.1) and a standard convolution inequality. Similarly, since \({\hat{\phi }} * G\) is continuous, the same inequalities hold when 2 is replaced by \(\infty \). Thus (3.3) holds when q is 2 or \(\infty \). The Riesz–Thorin interpolation theorem establishes (3.3) for all \(q \in [2, \infty ]\). To conclude the proof, take \(f \in C^\infty (\mathbb {T}^n)\) such that \({{\,\mathrm{supp}\,}}f \subseteq U\), and let F correspond to f. Then \({\hat{F}} \in L^1(\mathbb {R}^n) \cap L^{\infty }(\mathbb {R}^n)\) and \({\hat{\phi }} * {\hat{F}} = {\hat{F}}\). Thus $$\begin{aligned} \Vert {\hat{f}} \Vert _{\ell ^q(\mathbb {Z}^n)} = \Vert T{\hat{F}} \Vert _{\ell ^q(\mathbb {Z}^n)} \le \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} \Vert {\hat{F}} \Vert _{L^q(\mathbb {R}^n)} \end{aligned}$$ by (3.3). This now gives $$\begin{aligned}\begin{aligned} \Vert {\hat{f}} \Vert _{\ell ^{p'}(\mathbb {Z}^n)}&\le \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} \Vert {\hat{F}} \Vert _{L^{p'}(\mathbb {R}^n)}\\&\le \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} (B_p)^n \Vert F \Vert _{L^p(\mathbb {R}^n)} = \Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)} (B_p)^n \Vert f \Vert _{L^p(\mathbb {T}^n)}. \end{aligned}\end{aligned}$$ This proves that \(H_p(\mathbb {T}^n;U) \le \Vert {\hat{\phi }}\Vert _{L^1(\mathbb {R}^n)} (B_p)^n\).
By choosing U small enough, we may make \(\Vert {\hat{\phi }} \Vert _{L^1(\mathbb {R}^n)}\) as close to 1 as we like (see [45]): indeed, we can take \(\phi = |K|^{-1} \mathbf{1 }_{U+K} * \mathbf{1 }_K\), where \(K=-K\) is a fixed small neighbourhood of the origin (here \(\mathbf{1 }_\Omega \) denotes the characteristic function of a measurable set \(\Omega \subseteq \mathbb {R}^n\) and \(|\Omega |\) its Lebesgue measure), so that \({{\,\mathrm{supp}\,}}\phi \subseteq U +2K\) and $$\begin{aligned} 1 = \phi (0) \le \Vert {\hat{\phi }}\Vert _{L^1(\mathbb {R}^n)} \le |K|^{-1} \Vert \mathbf{1 }_K\Vert _{L^2(\mathbb {R}^n)} \Vert \mathbf{1 }_{U+K}\Vert _{L^2(\mathbb {R}^n)} = (|U+K|/|K|)^{1/2}. \end{aligned}$$ So \(H_p^\mathrm{loc}(\mathbb {T}^n) \le (B_p)^n\), and the converse inequality is given by Theorem 1.3. \(\square \) 4 Compact Lie groups Before entering into the proof of Theorem 1.4, we present a summary of the theory of representations and characters of compact connected Lie groups G. For more details, the reader may consult, for example, [11, 41]. We assume throughout that G is not abelian, since the abelian case was treated in Theorem 1.2. A compact connected Lie group G comes with a set \(\Lambda ^+\) of dominant weights, which parametrise the collection of irreducible unitary representations \(\pi _\lambda \) of G modulo equivalence. Each such representation \(\pi _\lambda \) is of finite dimension \(d_\lambda \) and has a character \(\chi _\lambda \) given by \({{\,\mathrm{trace}\,}}\pi _\lambda (\cdot )\). Assume that the Haar measure on G is normalised so as to have total mass 1. The Peter–Weyl theory gives us the Plancherel formula: if \(f \in L^2(G)\), then $$\begin{aligned} \Vert f\Vert _2^2 = \sum _{\lambda \in \Lambda ^+} d_\lambda \Vert \pi _\lambda (f) \Vert _\mathrm{HS}^2 . 
\end{aligned}$$ In other words, the group Plancherel measure on the unitary dual of G can be identified with the discrete measure on \(\Lambda ^+\) that assigns mass \(d_\lambda \) to the point \(\lambda \). From the discussion in Sect. 2, we deduce that $$\begin{aligned} \Vert f \Vert _{\mathcal {F}L^q} = \left( \sum _{\lambda \in \Lambda ^+} d_\lambda \Vert \pi _\lambda (f) \Vert _{\mathcal {S}^q}^q \right) ^{1/q}. \end{aligned}$$ for all \(q \in [1,\infty )\). If f is a central function, then \(\pi _\lambda (f)\) is a multiple of the identity and $$\begin{aligned} {\tilde{f}}(\lambda ) := \int _G f(x) \, \chi _\lambda (x) \,dx = {{\,\mathrm{trace}\,}}\pi _\lambda (f), \end{aligned}$$ whence $$\begin{aligned} \Vert f \Vert _{\mathcal {F}L^q} = \left( \sum _{\lambda \in \Lambda ^+} d_\lambda ^{2-q} |{\tilde{f}}(\lambda )|^q \right) ^{1/q}. \end{aligned}$$ For \(q=2\), this corresponds to the fact that the characters \(\chi _\lambda \) form an orthonormal basis for the space of square-integrable central functions. A more precise description of the set \(\Lambda ^+\) of dominant weights and the characters \(\chi _\lambda \) can be given as follows. Recall that the conjugation action of the group G on itself determines the adjoint representation of G on \(\mathfrak {g}\): $$\begin{aligned} \exp ( {\text {Ad}}(x) Y) = x \exp (Y) x^{-1} \qquad \forall x \in G \quad \forall Y \in \mathfrak {g}. \end{aligned}$$ Since G is compact, there exists an \({\text {Ad}}(G)\)-invariant inner product on \(\mathfrak {g}\), which in turn determines a Lebesgue measure on \(\mathfrak {g}\); we scale the inner product so that the Jacobian determinant \(J : \mathfrak {g} \rightarrow \mathbb {R}\) of the exponential mapping is 1 at the origin. Clearly J is an \({\text {Ad}}(G)\)-invariant function. 
The group G contains a maximal torus T, that is, a maximal closed connected abelian subgroup, which is unique up to conjugacy; its Lie algebra \(\mathfrak {t}\) is a maximal abelian Lie subalgebra of \(\mathfrak {g}\). The set \(\Gamma \) of X in \(\mathfrak {t}\) such that \(\exp X = e\) is a lattice in \(\mathfrak {t}\), and T may be identified with \(\mathfrak {t} / \Gamma \). The weight lattice \(\Lambda \) is the dual lattice to \(\Gamma \), that is, the set of elements \(\lambda \) of the dual space \(\mathfrak {t}^*\) taking integer values on \(\Gamma \): equivalently, \(\Lambda \) is the set of the \(\lambda \in \mathfrak {t}^*\) such that \(X \mapsto e^{2\pi i \lambda (X)}\) descends to a character \(\kappa _\lambda \) of T. We say that a weight \(\lambda \in \Lambda \) occurs in a unitary representation \(\pi \) of G if the character \(\kappa _\lambda \) of T is contained in the restriction of \(\pi \) to T. Weights occurring in the (complexified) adjoint representation are called roots. A choice of ordering splits the roots into positive and negative roots. We denote by \(\rho \) half the sum of the positive roots. The set \(\Lambda ^+\) of dominant weights is the set of the \(\lambda \in \Lambda \) having nonnegative inner product with all positive roots. The irreducible representation \(\pi _\lambda \) of G corresponding to \(\lambda \in \Lambda ^+\) is determined, up to equivalence, by the fact that \(\lambda \) is the highest weight occurring in \(\pi _\lambda \) (that is, \(\lambda \) occurs in \(\pi _\lambda \), while \(\lambda + \alpha \) does not occur in \(\pi _\lambda \) for any positive root \(\alpha \)). Via the orthogonal projection of \(\mathfrak {g}\) onto \(\mathfrak {t}\), we can identify \(\mathfrak {t}^*\) with a subspace of \(\mathfrak {g}^*\). Given \(\lambda \) in \(\mathfrak {g}^*\), we write \(O_{\lambda }\) for the compact set \({\text {Ad}}(G)^* \lambda \), usually called the orbit of \(\lambda \). 
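To make these notions concrete, consider \(G = \mathrm{SU}(2)\) (with one standard choice of identifications, given here only for illustration). Take the maximal torus \(T = \{\mathrm{diag}(e^{2\pi i t}, e^{-2\pi i t}) : t \in \mathbb {R}\}\) and identify \(\mathfrak {t} \cong \mathbb {R}\) accordingly, so that \(\Gamma = \mathbb {Z}\) and \(\Lambda = \mathbb {Z}\). The roots are \(\pm 2\); choosing 2 as the positive root gives \(\rho = 1\) and \(\Lambda ^+ = \mathbb {N}\). The representation \(\pi _\lambda \) with highest weight \(\lambda \in \mathbb {N}\) has dimension \(d_\lambda = \lambda + 1\), its weights are \(\lambda , \lambda -2, \dots , -\lambda \), and its character on T is the geometric sum $$\begin{aligned} \chi _\lambda \big (\mathrm{diag}(e^{2\pi i t}, e^{-2\pi i t})\big ) = \sum _{j=0}^{\lambda } e^{2\pi i (\lambda - 2j) t} = \frac{\sin \big (2\pi (\lambda +1) t\big )}{\sin (2\pi t)}. \end{aligned}$$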
Kirillov's character formula [37, p. 459] states that, for all \(X \in \mathfrak {g}\) and all \(\lambda \in \Lambda ^+\), $$\begin{aligned} J(X)^{1/2} \, \chi _\lambda ( \exp ( X) ) = \int _{O_{\lambda +\rho }} \exp ( 2 \pi i \xi \cdot X) \,d\sigma (\xi ), \end{aligned}$$ (4.1) where \(\sigma \) is a canonical \({\text {Ad}}(G)^*\)-invariant measure on \(O_{\lambda +\rho }\), and \(\xi \cdot X\) denotes the duality pairing between \(\xi \in \mathfrak {g}^*\) and \(X \in \mathfrak {g}\). When \(X = 0\), this formula becomes the normalisation $$\begin{aligned} \int _{O_{\lambda +\rho }} \,d\sigma (\xi ) = d_\lambda . \end{aligned}$$ Proof of Theorem 1.4 Take a small connected conjugation-invariant neighbourhood U of the identity in G that is also symmetric, that is, \(U^{-1} = U\). Then \(U = \bigcup _{x \in G} x (U \cap T) x^{-1}\). Let V be the small connected neighbourhood of 0 in \(\mathfrak {g}\) such that \(U = \exp V\) and \(\exp \) is a diffeomorphism from a neighbourhood of \({\overline{V}}\) onto a neighbourhood of \({\overline{U}}\) in G. To a function f on G supported in U, we associate the function F on \(\mathfrak {g}\) supported in V by the formula $$\begin{aligned} F(X) = {\left\{ \begin{array}{ll} J(X)^{1/2} \, f(\exp (X)) &{}\text {when }X \in V, \\ 0 &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$ Then \(\Vert J^{1/p -1/2} F \Vert _p = \Vert f \Vert _p\). We define the Fourier transform \({\hat{F}}\) of F as follows: $$\begin{aligned} {\hat{F}} (\xi ) = \int _{\mathfrak {g}} F(X) \, \exp ( 2\pi i \xi \cdot X) \, dX \qquad \forall \xi \in \mathfrak {g}^*. \end{aligned}$$ The following conditions are equivalent: f is central on G; F is \({\text {Ad}}(G)\)-invariant on \(\mathfrak {g}\); and \({\hat{F}}\) is \({\text {Ad}}(G)^*\)-invariant on \(\mathfrak {g}^*\). Assume that f is central and supported in U, and let F be the associated function on \(\mathfrak {g}\). 
From the character formula (4.1), a change of variables, and a change of order of integration, $$\begin{aligned} {\tilde{f}}(\lambda )&= \int _G f(x) \, \chi _\lambda (x) \,dx = \int _{\mathfrak {g}} F(X) \int _{O_{\lambda +\rho }} \exp ( 2 \pi i \xi \cdot X) \,d\sigma (\xi )\,dX \\&= \int _{O_{\lambda +\rho }} \int _{\mathfrak {g}} F(X) \exp (2 \pi i \xi \cdot X) \,dX\, d\sigma (\xi ) = \int _{O_{\lambda +\rho }} {\hat{F}}(\xi ) \, d\sigma (\xi ) \\&= d_\lambda {\hat{F}}(\lambda +\rho ). \end{aligned}$$ This, combined with the Plancherel theorems for central functions on G and for functions on \(\mathfrak {g}\), implies that $$\begin{aligned} \sum _{\lambda \in \Lambda ^+} d_\lambda ^2 |{\hat{F}}(\lambda +\rho )|^2 = \Vert f\Vert _2^2 = \Vert F \Vert _2^2 = \Vert {\hat{F}} \Vert _2^2. \end{aligned}$$ For such functions, moreover, \({\hat{F}}\) is continuous and so $$\begin{aligned} \sup _{\lambda \in \Lambda ^+} | {\hat{F}} (\lambda + \rho ) | \le \Vert {\hat{F}} \Vert _\infty . \end{aligned}$$ For a function H on \(\mathfrak {g}^*\), we define $$\begin{aligned} H^G(\lambda ) = \int _G H({\text {Ad}}(g)^*\lambda ) \, dg. \end{aligned}$$ Much as in the case of \(\mathbb {T}^n\), we choose an \({\text {Ad}}(G)\)-invariant function \(\phi \in A(\mathfrak {g})\) which vanishes off V and takes the value 1 on the open \({\text {Ad}}(G)\)-invariant subset W of V. For H in \(L^1(\mathfrak {g}^*) + L^\infty (\mathfrak {g}^*)\), we define the function TH by $$\begin{aligned} TH(\lambda ) = {\hat{\phi }}*H^G(\lambda + \rho ) \qquad \forall \lambda \in \Lambda ^+. \end{aligned}$$ For such functions H, the inverse Fourier transform F of \({\hat{\phi }}*H^G\) is supported in V and is \({\text {Ad}}(G)\)-invariant, so the corresponding function f on G is central and supported in U. 
From our previous discussion, $$\begin{aligned} \left( \sum _{\lambda \in \Lambda ^+} d_\lambda ^2 | TH (\lambda )|^2 \right) ^{1/2} = \Vert {\hat{\phi }}* H^G \Vert _{2} \le \Vert {\hat{\phi }}\Vert _1 \Vert H^G \Vert _2 \le \Vert {\hat{\phi }}\Vert _1 \Vert H \Vert _2 \end{aligned}$$ and $$\begin{aligned} \sup _{\lambda \in \Lambda ^+} | TH (\lambda )| \le \Vert TH \Vert _\infty \le \Vert {\hat{\phi }}\Vert _1 \Vert H^G \Vert _\infty \le \Vert {\hat{\phi }}\Vert _1 \Vert H \Vert _\infty . \end{aligned}$$ By Riesz–Thorin interpolation, when \(2 \le q < \infty \), $$\begin{aligned} \left( \sum _{\lambda \in \Lambda ^+} d_\lambda ^2 | TH (\lambda )|^q \right) ^{1/q} \le \Vert {\hat{\phi }}\Vert _1 \Vert H \Vert _q . \end{aligned}$$ Much as in the proof of Theorem 1.2, if f is a central function on G supported in \(\exp (W) \subseteq U\), and F is the \({\text {Ad}}(G)\)-invariant function on \(\mathfrak {g}\) corresponding to f, then \(T{\hat{F}}(\lambda ) = {\hat{\phi }} * {\hat{F}}(\lambda +\rho ) = {\hat{F}}(\lambda +\rho )\) for all \(\lambda \in \Lambda ^+\). Hence, if \(n=\dim G\), from the Hausdorff–Young inequality on \(\mathbb {R}^n\) we deduce that $$\begin{aligned} \Vert f\Vert _{\mathcal {F}L^{p'}}= \left( \sum _{\lambda \in \Lambda ^+} d_\lambda ^{2-p'} | {\tilde{f}}(\lambda ) |^{p'} \right) ^{1/{p'}} = \left( \sum _{\lambda \in \Lambda ^+} d_\lambda ^2 | {\hat{F}} (\lambda +\rho ) |^{p'} \right) ^{1/{p'}} \\ \le \Vert {\hat{\phi }} \Vert _1 \Vert {\hat{F}} \Vert _{p'} \le \Vert {\hat{\phi }} \Vert _1 (B_p)^n \Vert F \Vert _p \le \Vert {\hat{\phi }} \Vert _1 (B_p)^n \sup _{X \in W} J(X)^{1/2-1/p} \Vert f \Vert _p, \end{aligned}$$ which shows that \(H_{p,{{\,\mathrm{Inn}\,}}(G)}(G;\exp (W)) \le \Vert {\hat{\phi }} \Vert _1 (B_p)^n \sup _{X \in W} J(X)^{1/2-1/p}\). By taking W small, we may make both \(\sup _{X \in W} J(X)^{1/2-1/p}\) and \(\Vert {\hat{\phi }} \Vert _1\) close to 1. 
So \(H_{p,{{\,\mathrm{Inn}\,}}(G)}^\mathrm{loc}(G) \le (B_p)^n\), and the converse inequality is given by Theorem 1.3. \(\square \) 5 The Weyl transform In this section, we shall mostly adopt the notation from Folland's book [22]. The Weyl transform \(\rho (f)\) of a function \(f \in L^1(\mathbb {C}^n)\) can be written as the operator $$\begin{aligned} \rho (f) =\int _{\mathbb {R}^n} \int _{\mathbb {R}^n} f(u+iv) \, e^{2\pi i(uD+vX)}\, du \,dv \end{aligned}$$ on \(L^2(\mathbb {R}^n)\), where \(uD=\sum _{j=1}^n u_j D_j\) and \(vX=\sum _{j=1}^n v_j X_j\), and where \(D_j\) and \(X_j\) denote the operators $$\begin{aligned} D_j\phi (x)=\frac{1}{2\pi i}\frac{\partial \phi }{\partial x_j}(x) \qquad \text {and}\qquad X_j\phi (x)=x_j\phi (x). \end{aligned}$$ Explicitly, \(\rho (f)\) is the integral operator given by $$\begin{aligned} \rho (f) \phi (x)=\int _{\mathbb {R}^n} K_f(x,y) \, \phi (y) \, dy, \end{aligned}$$ with integral kernel given by $$\begin{aligned} K_f(x,y)=\int _{\mathbb {R}^n} f(y-x+iv) \, e^{\pi i v(x+y)} \, dv. \end{aligned}$$ As Folland points out on page 24 of his monograph, this notion of "Weyl transform" is historically incorrect—the Weyl transform of f should rather be \(\rho ({\hat{f}})\), the pseudodifferential operator associated to the symbol f in the Weyl calculus [22, Chapter 2]. Nevertheless, we shall use the definition of Weyl transform above. In [38], the authors consider the operator \(\nu (f)\) given by $$\begin{aligned} \nu (f) =\int _{\mathbb {R}^n}\int _{\mathbb {R}^n} f(u +iv) \, e^{2\pi iuD} \, e^{2\pi ivX}\, du \, dv , \end{aligned}$$ and call this the Weyl operator associated to f—this appears to be even more inappropriate, as \(\nu (f)\) is actually more closely related to the Kohn–Nirenberg calculus (see, for example, [22, (2.32)]). 
In any case, it is easily seen that the operators \(\nu (f)\) and \(\rho (f)\) are related by the identity $$\begin{aligned} \nu (f)=\rho (e^{i\pi u \cdot v} f) \end{aligned}$$ (5.1) (compare also [22, Proposition 2.33]). We are interested in best constants in Hausdorff–Young inequalities of the form $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^{p'}}\le C \Vert f\Vert _{L^p(\mathbb {C}^n)}, \end{aligned}$$ (5.2) for suitable functions f, for instance Schwartz functions. In light of (5.1), we may work with \(\nu (f)\) in place of \(\rho (f)\) equally well. As discussed in the introduction, we denote by \(W_p(\mathbb {C}^n)\) the best constant C in (5.2), and use the symbols \(W_p^\mathrm{loc}(\mathbb {C}^n)\), \(W_{p,K}(\mathbb {C}^n)\) and \(W_{p,K}^\mathrm{loc}(\mathbb {C}^n)\) for the corresponding local and K-invariant variants. If \(p=2\), then \(\rho \) is indeed isometric from \(L^2(\mathbb {C}^{n})\) onto the space of Hilbert–Schmidt operators [22, Theorem (1.30)], and thus the following "Plancherel identity" for the Weyl transform holds true: $$\begin{aligned} \Vert \rho (f)\Vert _{\mathrm{HS}}=\Vert f\Vert _2. \end{aligned}$$ (5.3) This tells us that \(W_2(\mathbb {C}^n) = 1\) and, by interpolation, \(W_p(\mathbb {C}^n) \le 1\) for all \(p \in [1,2]\). However, as Klein and Russo have shown, \(W_p(\mathbb {C}^n) < 1\) when \(1<p<2\). Indeed, [38, Theorem 1] may be restated by saying that $$\begin{aligned} W_p(\mathbb {C}^n) = (B_p)^{2n} \end{aligned}$$ (5.4) when \(p'\in 2\mathbb {Z}\). Moreover, in contrast with the Euclidean case, there are no extremal functions for the optimal estimate—the best constant can only be found as a limit, for instance along a suitable family of Gaussian functions f. This raises the question whether (5.4) holds for more general \(p\in [1,2]\). 
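As a concrete sanity check of the kernel formula for \(K_f\) and the Plancherel identity (5.3), one can discretise both sides for a specific Gaussian on \(\mathbb {C}\) (so \(n=1\)), for which \(\Vert f\Vert _2^2 = \int _{\mathbb {C}} e^{-2\pi |z|^2}\,dz = 1/2\) in closed form. The following Python sketch (the Gaussian and all grid parameters are ad hoc choices, not taken from the text) approximates \(K_f\) by a Riemann sum in v and compares \(\Vert K_f\Vert _{L^2}^2 = \Vert \rho (f)\Vert _{\mathrm{HS}}^2\) with \(\Vert f\Vert _2^2\):

```python
import numpy as np

# f(u + iv) = exp(-pi (u^2 + v^2)) on C, n = 1; grid parameters are ad hoc.
h = 0.02
x = np.arange(-4.0, 4.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# K_f(x, y) = int f(y - x + iv) e^{pi i v (x + y)} dv, by a Riemann sum in v.
K = np.zeros_like(X, dtype=complex)
for vj in x:  # reuse the same grid for the integration variable v
    K += np.exp(-np.pi * ((Y - X) ** 2 + vj ** 2) + 1j * np.pi * vj * (X + Y)) * h

hs_sq = np.sum(np.abs(K) ** 2) * h * h  # ||rho(f)||_HS^2 = ||K_f||_{L^2(dx dy)}^2
f_sq = 0.5                              # ||f||_{L^2(C)}^2, computed in closed form
print(hs_sq, f_sq)
```

Both values agree up to discretisation error, in line with (5.3).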
Besides being of interest in its own right, the determination of the best constants in the Hausdorff–Young inequality (5.2) for the Weyl transform on \(\mathbb {C}^n\) is relevant to the analysis of the analogous inequality on the Heisenberg group \(\mathbb {H}_n\). Indeed, the proof of Klein and Russo [38, Theorem 3] that $$\begin{aligned} H_p(\mathbb {H}_n) = (B_p)^{2n+1} \end{aligned}$$ (5.5) when \(p' \in 2\mathbb {Z}\) is based on a reduction, via a scaling argument, to the corresponding result (5.4) for the Weyl transform. A somewhat refined version of the scaling argument, presented below, shows that the problem of determining the best Hausdorff–Young constants for the Heisenberg group is completely equivalent to the analogous problem for the Weyl transform, irrespective of the exponent \(p \in [1,2]\), and also in case of restriction to functions with symmetries. Proposition 5.1 For all compact subgroups K of \(\mathrm{U}(n)\) and all \(p \in [1,2]\), $$\begin{aligned} H_{p,K}(\mathbb {H}_n) = B_p W_{p,K}(\mathbb {C}^n). \end{aligned}$$ Proof Let us identify \(\mathbb {H}_n\) with \(\mathbb {C}^n \times \mathbb {R}\) with group law $$\begin{aligned} (z,t) \cdot (z',t') = (z+z',t+t'+\mathfrak {I}({\bar{z}} \cdot z')/2). \end{aligned}$$ The Lebesgue measure on \(\mathbb {C}^n \times \mathbb {R}\) is a Haar measure on \(\mathbb {H}_n\), which we fix throughout. The Schrödinger representation \(\pi \) of \(\mathbb {H}_n\) on \(L^2(\mathbb {R}^n)\) is given by $$\begin{aligned} \pi (u+iv,t) \phi (x) = e^{2\pi i t + 2\pi i v \cdot x + \pi i u \cdot v} \phi (u+x) \end{aligned}$$ [22, (1.25)]. For all \(\lambda \in \mathbb {R}\setminus \{0\}\), the map \(A_\lambda : \mathbb {H}_n \rightarrow \mathbb {H}_n\), given by $$\begin{aligned} A_\lambda (z,t) = {\left\{ \begin{array}{ll} (\sqrt{|\lambda |} \, z, \lambda t) &{}\text {if }\lambda > 0,\\ (\sqrt{|\lambda |} \, {\bar{z}}, \lambda t) &{}\text {if }\lambda < 0, \end{array}\right. 
} \end{aligned}$$ is an automorphism of \(\mathbb {H}_n\). The representations \(\pi _\lambda = \pi \circ A_\lambda \) form a family of pairwise inequivalent irreducible unitary representations of \(\mathbb {H}_n\), in terms of which we can express the Plancherel formula for \(\mathbb {H}_n\): $$\begin{aligned} \Vert F \Vert ^2_{L^2(\mathbb {H}_n)} = \int _{\mathbb {R}\setminus \{0\}} \Vert \pi _\lambda (F) \Vert _{\mathrm{HS}}^2 \, |\lambda |^n \,d\lambda \end{aligned}$$ [22, p. 39]. Hence, by the discussion in Sect. 2, for all \(q \in [1,\infty )\), $$\begin{aligned} \Vert F \Vert ^q_{\mathcal {F}L^q} = \int _{\mathbb {R}\setminus \{0\}} \Vert \pi _\lambda (F) \Vert _{\mathcal {S}^q}^q \, |\lambda |^n \,d\lambda . \end{aligned}$$ (5.6) For all \(F \in L^1(\mathbb {H}_n)\) and \(\lambda \in \mathbb {R}\), let us set $$\begin{aligned} F^\lambda (z) = \int _\mathbb {R}F(z,t) \, e^{2\pi i t \lambda } \,dt. \end{aligned}$$ Then $$\begin{aligned} \pi _\lambda (F) = \rho (Z_\lambda F^\lambda ), \end{aligned}$$ (5.7) where, for all functions f on \(\mathbb {C}^n\), $$\begin{aligned} Z_\lambda f(z) = {\left\{ \begin{array}{ll} |\lambda |^{-n} f(|\lambda |^{-1/2} z) &{} \text {if }\lambda >0,\\ |\lambda |^{-n} f(|\lambda |^{-1/2} {\bar{z}}) &{} \text {if }\lambda <0. \end{array}\right. } \end{aligned}$$ From the definition of \(\rho \), it is not difficult to show that $$\begin{aligned} \rho (Z_{-1} f) = S \rho (f)^* S, \end{aligned}$$ where \(S f(z) = f^*(z) = \overline{f(-z)}\). From this it readily follows that $$\begin{aligned} \Vert \rho (Z_{-\lambda } f)\Vert _{\mathcal {S}^q} = \Vert \rho (Z_\lambda f)\Vert _{\mathcal {S}^q} \end{aligned}$$ (5.8) for all \(\lambda \in \mathbb {R}\setminus \{0\}\) and \(q \in [1,\infty ]\). Let \(F \in C^\infty _c(\mathbb {H}_n)\) be K-invariant. Then \(Z_\lambda F^\lambda \) is also K-invariant for all \(\lambda >0\). 
Hence, by (5.6) to (5.8), $$\begin{aligned} \Vert F\Vert _{\mathcal {F}L^{p'}}&= \left( \int _{\mathbb {R}\setminus \{0\}} \Vert \rho (Z_{|\lambda |} F^\lambda ) \Vert _{\mathcal {S}^{p'}}^{p'} \, |\lambda |^n \,d\lambda \right) ^{1/p'} \\&\le W_{p,K}(\mathbb {C}^n) \left( \int _{\mathbb {R}\setminus \{0\}} \Vert Z_{|\lambda |} F^\lambda \Vert _{p}^{p'} \, |\lambda |^n \,d\lambda \right) ^{1/p'} \\&= W_{p,K}(\mathbb {C}^n) \left( \int _{\mathbb {R}\setminus \{0\}} \Vert F^\lambda \Vert _{p}^{p'} \,d\lambda \right) ^{1/p'} \\&\le W_{p,K}(\mathbb {C}^n) \left( \int _{\mathbb {C}^n} \left( \int _\mathbb {R}| F^\lambda (z) |^{p'} \,d\lambda \right) ^{p/p'} \,dz \right) ^{1/p} \\&\le W_{p,K}(\mathbb {C}^n) B_p \Vert F\Vert _p, \end{aligned}$$ where we applied, in order, the sharp Hausdorff–Young inequality for the Weyl transform and K-invariant functions, a scaling, the Minkowski integral inequality (note that \(p'/p \ge 1\)) and the sharp Hausdorff–Young inequality on \(\mathbb {R}\). This shows that \(H_{p,K}(\mathbb {H}_n) \le B_p W_{p,K}(\mathbb {C}^n)\). Conversely, let \(f \in C^\infty _c(\mathbb {C}^n)\) be K-invariant and \(\phi : \mathbb {R}\rightarrow \mathbb {C}\) be in the Schwartz class, and let \(F = f \otimes \phi \). Then F is also K-invariant, and moreover \(F^\lambda = {\hat{\phi }}(\lambda ) f\). So, by applying the sharp Hausdorff–Young inequality on \(\mathbb {H}_n\) to F we obtain that $$\begin{aligned} \left( \int _{\mathbb {R}\setminus \{0\}} \Vert \rho (Z_\lambda f) \Vert _{\mathcal {S}^{p'}}^{p'} \, |{\hat{\phi }}(\lambda )|^{p'} |\lambda |^n \,d\lambda \right) ^{1/p'} \le H_{p,K}(\mathbb {H}_n) \Vert f\Vert _p \Vert \phi \Vert _p. 
\end{aligned}$$ (5.9) For \(\mu \in (0,\infty )\) and \(\lambda _0 \in \mathbb {R}\setminus \{0\}\), take $$\begin{aligned} \phi (t) = e^{-\pi \mu t^2 - 2\pi i t \lambda _0}, \end{aligned}$$ so that $$\begin{aligned} {\hat{\phi }}(\lambda ) = \mu ^{-1/2} e^{-(\pi /\mu ) (\lambda -\lambda _0)^2} \qquad \text {and}\qquad \Vert {\hat{\phi }}\Vert _{p'} = B_p \Vert \phi \Vert _p, \end{aligned}$$ since gaussians are extremal functions for the Hausdorff–Young inequality on \(\mathbb {R}\). With this choice of \(\phi \), the inequality (5.9) can be rewritten as $$\begin{aligned} B_p ( R_f * \Phi _\mu (\lambda _0) )^{1/p'} \le H_{p,K}(\mathbb {H}_n) \Vert f\Vert _p, \end{aligned}$$ where \(*\) denotes convolution on \(\mathbb {R}\) and $$\begin{aligned} R_f(\lambda ) = \Vert \rho (Z_\lambda f) \Vert _{\mathcal {S}^{p'}}^{p'} |\lambda |^n, \qquad \Phi _\mu (\lambda ) = \frac{e^{-(\pi p'/\mu ) \lambda ^2}}{\int _\mathbb {R}e^{-(\pi p'/\mu ) s^2} \,ds}. \end{aligned}$$ Note that \(\lambda \mapsto Z_\lambda f\) is continuous \(\mathbb {R}\setminus \{0\} \rightarrow L^p(\mathbb {C}^n)\) and \(\rho : L^p(\mathbb {C}^n) \rightarrow \mathcal {S}^{p'}(L^2(\mathbb {R}^n))\) is continuous too, so \(R_f\) is a continuous function on \(\mathbb {R}\setminus \{0\}\). Moreover \(\Phi _\mu \) is an approximate identity as \(\mu \rightarrow 0\). Hence, by taking the limit as \(\mu \rightarrow 0\), we obtain $$\begin{aligned} B_p \sup _{\lambda \in \mathbb {R}\setminus \{0\}} \Vert \rho (Z_\lambda f) \Vert _{\mathcal {S}^{p'}} |\lambda |^{n/p'} \le H_{p,K}(\mathbb {H}_n) \Vert f\Vert _p, \end{aligned}$$ which for \(\lambda = 1\) gives $$\begin{aligned} B_p \Vert \rho (f) \Vert _{\mathcal {S}^{p'}} \le H_{p,K}(\mathbb {H}_n) \Vert f\Vert _p, \end{aligned}$$ that is, \(B_p W_{p,K}(\mathbb {C}^n) \le H_{p,K}(\mathbb {H}_n)\). \(\square \) Let us come back to the question whether the identity (5.4) holds for arbitrary \(p \in [1,2]\). 
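The Gaussian computation used in the proof above is easy to verify numerically. The sketch below (the sample values of p, \(\mu \), \(\lambda _0\) are arbitrary, and \(B_p\) is taken in the standard Babenko–Beckner normalisation \((p^{1/p}/p'^{1/p'})^{1/2}\), assumed to match the text's convention) checks both the closed form for \({\hat{\phi }}\) and the extremality relation \(\Vert {\hat{\phi }}\Vert _{p'} = B_p \Vert \phi \Vert _p\):

```python
import numpy as np

p, mu, lam0 = 4 / 3, 2.0, 1.3                  # arbitrary sample parameters
pp = p / (p - 1)                               # conjugate exponent p'
Bp = (p ** (1 / p) / pp ** (1 / pp)) ** 0.5    # Babenko-Beckner constant

h = 1e-4
t = np.arange(-10.0, 10.0, h)
phi = np.exp(-np.pi * mu * t ** 2 - 2j * np.pi * t * lam0)

# Fourier transform at a sample frequency, against the closed form for phi_hat.
lam = 0.7
ft = np.sum(phi * np.exp(2j * np.pi * t * lam)) * h
exact = mu ** -0.5 * np.exp(-(np.pi / mu) * (lam - lam0) ** 2)

# Extremality: ||phi_hat||_{p'} = B_p ||phi||_p (reusing t as the lambda grid).
phihat = mu ** -0.5 * np.exp(-(np.pi / mu) * (t - lam0) ** 2)
lhs = (np.sum(np.abs(phihat) ** pp) * h) ** (1 / pp)
rhs = Bp * (np.sum(np.abs(phi) ** p) * h) ** (1 / p)
print(abs(ft - exact), lhs, rhs)
```

The equality of lhs and rhs reflects the fact that Gaussians are extremal for the Hausdorff–Young inequality on \(\mathbb {R}\).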
The following result, which allows for arbitrary p but restricts the class of functions f and, regrettably, also requires a weight in the p-norm, gives another indication that this might be true. Recall that a function f on \(\mathbb {C}^{n}\) is polyradial if $$\begin{aligned} f(z) = f_0(|z_1|,\dots ,|z_n|), \end{aligned}$$ or, equivalently, if f is invariant under the n-fold product group \({\text {U}}(1) \times \dots \times {\text {U}}(1)\). Proposition 5.2 If \(f\in C_c^\infty (\mathbb {C}^{n})\) is polyradial, then, for all \(p \in [1,2]\), $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^{p'} }\le (B_p)^{2n}\Vert f e^{(\pi /2) |\cdot |^2}\Vert _{L^p(\mathbb {C}^{n})}. \end{aligned}$$ (5.10) As observed in the introduction, this inequality implies that \(W^\mathrm{loc}_{p,K}(\mathbb {C}^n) \le (B_p)^{2n}\) for \(K = {\text {U}}(1) \times \dots \times {\text {U}}(1)\), and a fortiori also for any larger group K. Proof We present a proof of Proposition 5.2 which follows the philosophy of the proof of Theorem 1.4. The key is the following identity relating Laguerre polynomials to Bessel functions: $$\begin{aligned} L^\alpha _k(x)=\frac{e^x x^{-\alpha /2}}{k!} \int _0^\infty t^{k+\alpha /2} \, J_\alpha (2\sqrt{xt}) \, e^{-t} \, dt \qquad \forall x>0, \end{aligned}$$ (5.11) where \(\alpha \in (-1,\infty )\) [44, (4.19.3)]. In order to avoid technicalities, let us concentrate on the case where \(n=1\); we shall later indicate the straightforward changes in the argument which are needed to deal with general \(n\ge 1\). 
If \(f(z)=f_0(|z|)\) is a radial \(L^1\)-function on \(\mathbb {C}\), then one may use the orthonormal basis of Hermite functions \(h_k\) (\(k\in \mathbb {N}\)) of \(L^2(\mathbb {R})\) to represent the operator \(\rho (f)\) as an infinite diagonal matrix, with diagonal elements given by $$\begin{aligned} {\tilde{f}}(k):=\langle \rho (f) h_k,h_k\rangle = \int _{\mathbb {C}} f(z) \, \chi _k(z) \, dz \qquad \forall k\in \mathbb {N}, \end{aligned}$$ (5.12) where \(\chi _k\) is the Laguerre function $$\begin{aligned} \chi _k(z)=e^{-(\pi /2)|z|^2}L^0_k(\pi |z|^2). \end{aligned}$$ (see [22, (1.45) and (1.104)]; see also [65, (1.4.32)]). In particular, $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^q}=\Vert {\tilde{f}}\Vert _{\ell ^q} \end{aligned}$$ (5.13) for all \(q \in [1,\infty ]\). Recall also that the Euclidean Fourier transform of any radial \(L^1\)-function g on \(\mathbb {C}\cong \mathbb {R}^2\) can be written in polar coordinates as $$\begin{aligned} {\hat{g}}(\zeta )=2\pi \int _0^\infty g_0(r) \, J_0(2\pi |\zeta | r) r\, dr, \end{aligned}$$ (5.14) where \(g(z) = g_0(|z|)\). We assume that f has compact support, and put \(F(z)=e^{(\pi /2) |z|^2} f(z)\). Since also \({\hat{F}}\) is radial, we may write \({\hat{F}}(\zeta )={\hat{F}}_0(|\zeta |).\) Combining (5.11) and (5.14), we obtain $$\begin{aligned} {\tilde{f}}(k)=\int _0^\infty {\hat{F}}_0\big (\sqrt{t/\pi }\big )\, \frac{t^k}{k!} e^{-t} \,dt, \end{aligned}$$ (5.15) which can be rewritten as $$\begin{aligned} {\tilde{f}}(k)=\int _{\mathbb {C}} {\hat{F}}(\zeta ) \frac{\pi ^k|\zeta |^{2k}}{k!} e^{-\pi |\zeta |^2} \,d\zeta =\int _{\mathbb {C}}{\hat{F}}(\zeta ) \, d\mu _k(\zeta ), \end{aligned}$$ (5.16) where the measures \(d\mu _k\), \(k\in \mathbb {N}\), are probability measures on \(\mathbb {C}\). 
Combining the aforementioned Plancherel identity for the Weyl transform, which leads to $$\begin{aligned} \sum _{k\in \mathbb {N}}\left| \int _{\mathbb {C}}{\hat{F}}(\zeta ) \, d\mu _k(\zeta ) \right| ^2 = \Vert f\Vert _2^2 \le \Vert F\Vert _2^2 =\Vert {\hat{F}}\Vert _2^2, \end{aligned}$$ with the trivial estimate $$\begin{aligned} \sup _{k\in \mathbb {N}} \left| \int _{\mathbb {C}}{\hat{F}}(\zeta ) \, d\mu _k(\zeta ) \right| \le \Vert {\hat{F}}\Vert _\infty , \end{aligned}$$ we see that from here on we can easily modify the argument in the proof of Theorem 1.4 in order to arrive at (5.10). Indeed, an even simpler interpolation argument is possible here, which avoids any smallness assumption on the support of f. For suitable functions \(\phi \) on the positive real line, let us write $$\begin{aligned} \breve{\phi }(k)=\int _0^\infty \phi (t) \, \frac{t^k}{k!} \, e^{-t} \,dt \end{aligned}$$ for all \(k \in \mathbb {Z}\). We claim that $$\begin{aligned} \Vert \breve{\phi }\Vert _{\ell ^q}\le \Vert \phi \Vert _{L^q(\mathbb {R}^+, dt)} \end{aligned}$$ (5.17) for all \(q \in [1,\infty ]\). Indeed, this estimate is trivial for \(q=\infty \), since the \(\frac{t^k}{k!} \, e^{-t} \,dt\) are probability measures, and for \(q=1\) we may estimate as follows: $$\begin{aligned} \sum _{k=0}^\infty |\breve{\phi }(k)|\le \int _0^\infty |\phi (t)| \sum _{k=0}^\infty \frac{t^k}{k!} \, e^{-t} \,dt =\Vert \phi \Vert _1. \end{aligned}$$ Thus, (5.17) follows by Riesz–Thorin interpolation. From (5.17) and (5.15), $$\begin{aligned} \Vert {\tilde{f}} \Vert _{\ell ^q}\le \left( \int _0^\infty \left| {\hat{F}}_0\big (\sqrt{t/\pi }\big ) \right| ^q \, dt\right) ^{1/q}=\Vert {\hat{F}}\Vert _q, \end{aligned}$$ and thus, by (5.13) and the sharp Hausdorff–Young inequality on \(\mathbb {R}^2\), we obtain $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^{p'}}=\Vert {\tilde{f}} \Vert _{\ell ^{p'}}\le (B_p)^2 \Vert F\Vert _p, \end{aligned}$$ whence (5.10) follows. 
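The Laguerre–Bessel identity (5.11), on which the proof rests, can likewise be checked numerically in the case \(\alpha = 0\); the sample values of k and x below are arbitrary:

```python
import math
import numpy as np
from scipy.special import eval_laguerre, j0

# (5.11) with alpha = 0:  L_k(x) = (e^x / k!) * int_0^inf t^k J_0(2 sqrt(x t)) e^{-t} dt
k, x = 3, 0.7                    # arbitrary sample values
h = 1e-4
t = np.arange(h / 2, 80.0, h)    # midpoint rule; the e^{-t} tail beyond 80 is negligible
integral = np.sum(t ** k * j0(2 * np.sqrt(x * t)) * np.exp(-t)) * h
lhs = np.exp(x) / math.factorial(k) * integral
rhs = eval_laguerre(k, x)        # the Laguerre polynomial L_3(0.7)
print(lhs, rhs)
```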
Let us finally indicate the changes needed to deal with the case of arbitrary n. The Laguerre functions must be replaced by the n-fold tensor products $$\begin{aligned} \chi _k(z_1,\dots , z_n)=\chi _{k_1}(z_1) \dots \chi _{k_n}(z_n), \end{aligned}$$ where \(k=(k_1,\dots ,k_n)\in \mathbb {N}^n\), and thus, in place of (5.12), $$\begin{aligned} {\tilde{f}}(k)=\int _{\mathbb {C}^{n}} f(z_1,\dots , z_n) \,\chi _k(z_1,\dots , z_n)\, dz_1\dots dz_n \end{aligned}$$ where \(k\in \mathbb {N}^n\). Accordingly, the measures \(d\mu _k\) must be replaced by the n-fold tensor products \(d\mu _k=d\mu _{k_1}\otimes \dots \otimes d\mu _{k_n}\), which are again probability measures, and so on. It then becomes evident that the proof carries over without any difficulty to this general case. \(\square \) Remark 5.3 There are indications that it may not be possible to establish (5.10) without the presence of the weight \(e^{(\pi /2) |\cdot |^2}\) by means of a reduction to the Euclidean Fourier transform and the Babenko–Beckner estimate, and that new techniques are required. Let us again restrict our discussion for simplicity to the case \(n=1\). There is another interesting identity relating Laguerre functions and Bessel functions, namely $$\begin{aligned} e^{-x/2} x^{\alpha /2}L^\alpha _k(x)=\frac{(-1)^k}{2} \int _0^\infty J_\alpha (\sqrt{xy}) \, e^{-y/2} \, y^{\alpha /2} \, L_k^\alpha (y) \,dy \qquad \forall x>0, \end{aligned}$$ where \(\alpha \in (-1,\infty )\) [44, (4.20.3)]. For \(\alpha =0\), this in combination with (5.14) implies the well-known identity $$\begin{aligned} \chi _k(z)=e^{-(\pi /2)|z|^2} L^0_k(\pi |z|^2)= \frac{(-1)^k}{2} \widehat{\chi _k}(z/2) \end{aligned}$$ (5.18) (see [22, Remark after Theorem (1.105)], which follows a more conceptual approach based on the Wigner transform). 
This easily leads to the identity $$\begin{aligned} {\tilde{f}}(k)=\int _{\mathbb {C}} {\hat{f}}(\zeta ) \, (-1)^k \, 2 \, \chi _k(2\zeta )\, d\zeta =\int _{\mathbb {C}} {\hat{f}}(\zeta ) \,d\nu _k(\zeta ). \end{aligned}$$ (5.19) In contrast with (5.16), the signed measure \(d\nu _k\) oscillates when \(k \ge 1\) and is no longer a probability measure. Indeed, by [48, Lemma 1], we have $$\begin{aligned} \Vert \chi _k\Vert _1\sim k^{1/2}\quad \text{ as }\quad k\rightarrow \infty . \end{aligned}$$ Thus we cannot use (5.19) in place of (5.16) as before in order to get a sharp Hausdorff–Young estimate for \(\rho (f)\) without a weight. Even the case where \(p'=2m\) for some \(m \in \mathbb {Z}\) does not seem to allow one to reduce to the Euclidean estimate. Indeed, note that, for all \(f \in L^1(\mathbb {C}^n)\), $$\begin{aligned} \rho (f^*) = \rho (f)^* \qquad \text {and}\qquad \rho (f) \, \rho (g)=\rho (f\times g), \end{aligned}$$ (5.20) where \(f^*(z) = \overline{f(-z)}\) and \(f\times g\) denotes the twisted convolution of f and g, that is, $$\begin{aligned} f \times g(z) = \int _{\mathbb {C}^n} f(z-w) \, g(w) \, e^{\pi i \mathfrak {I}({\bar{z}} \cdot w)} \,dw \end{aligned}$$ (5.21) [22, (1.32)]. In particular, if f is radial and real-valued, then \(f = f^*\) and therefore $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^{2m}}^{2m}=\Vert \rho (f)^m\Vert _{\mathrm{HS}}^2 =\Vert \rho (f\times \cdots \times f)\Vert _\mathrm{HS}^2 =\Vert f\times \cdots \times f\Vert _2^2, \end{aligned}$$ with m factors f. A reduction to the sharp estimate for the Euclidean Fourier transform \({\hat{f}}\) of f would therefore require the validity of an estimate of the form $$\begin{aligned} \Vert f\times \cdots \times f\Vert _2\le \Vert {\hat{f}}\Vert _{2m}^m=\Vert f* \cdots * f\Vert _2, \end{aligned}$$ (5.22) where \(*\) denotes the Euclidean convolution. However this estimate is false, even when \(m=2\). Indeed, it is sufficient to test the estimate (5.22) when \(f = \chi _k\). 
Note that, from (5.12) and the orthogonality of Laguerre polynomials, $$\begin{aligned} \tilde{\chi }_k(l) = \langle \rho (\chi _k) h_l, h_l \rangle = \langle \chi _k, \chi _l \rangle = \delta _{kl}. \end{aligned}$$ In particular \(\rho (\chi _k \times \chi _k) = \rho (\chi _k)\), that is, $$\begin{aligned} \chi _k \times \chi _k = \chi _k. \end{aligned}$$ Therefore $$\begin{aligned} \Vert \chi _k \times \chi _k\Vert _2 = \Vert \chi _k\Vert _2 = 1, \end{aligned}$$ while $$\begin{aligned} \Vert {\hat{\chi _k}}\Vert _4 = 2^{-1/2} \Vert \chi _k\Vert _4 \sim k^{-1/4} (\log k)^{1/4} \quad \text {as } k \rightarrow \infty , \end{aligned}$$ by (5.18) and [48, Lemma 1]. This shows that (5.22) cannot hold when \(m=2\) and for all radial real-valued functions f (not even with some constant larger than one multiplying the right-hand side).\(\square \) In order to conclude the proof of Theorem 1.5, we need to prove the lower bound $$\begin{aligned} W^\mathrm{loc}_{p,K}(\mathbb {C}^n) \ge (B_{p})^{2n} \end{aligned}$$ (5.23) for any compact subgroup K of \({\text {U}}(n)\). As we will see, this can be done much as in Sect. 2. For a function \(f \in L^1(\mathbb {C}^n) + L^2(\mathbb {C}^n)\), let \(T_f\) denote the operator of twisted convolution on the left by f, that is, $$\begin{aligned} T_f \phi = f \times \phi . \end{aligned}$$ In analogy with Proposition 2.1, we can characterise the Schatten norms of Weyl transforms \(\rho (f)\) as follows. Proposition 5.4 For all \(q \in [2,\infty ]\) and \(f \in C_c(\mathbb {C}^n)\), $$\begin{aligned} \Vert \rho (f)\Vert _{\mathcal {S}^q}^q = \Vert |T_f|^q\Vert _{L^1(\mathbb {C}^n) \rightarrow L^\infty (\mathbb {C}^n)}. \end{aligned}$$ Proof From the Plancherel formula (5.3) for the Weyl transform, together with (5.20), it is easily seen that, for all \(f \in L^1(\mathbb {C}^n)\), $$\begin{aligned} \Vert \rho (f)\Vert _{L^2(\mathbb {R}^n)\rightarrow L^2(\mathbb {R}^n)} = \Vert T_f\Vert _{L^2(\mathbb {C}^n) \rightarrow L^2(\mathbb {C}^n)}. 
\end{aligned}$$ This corresponds to the well-known fact that the norm of a linear operator on \(L^2(\mathbb {R}^n)\) is the same as the norm of the corresponding left-multiplication operator on \(\mathrm{HS}(L^2(\mathbb {R}^n))\). Note, moreover, that the analogue of (5.20) holds: $$\begin{aligned} T_{f^*} = T_f^* \qquad \text {and}\qquad T_{f \times g} = T_f T_g. \end{aligned}$$ Hence the correspondence \(\rho (f) \mapsto T_f\) induces an isometric \(*\)-isomorphism between \(\mathcal {L}(L^2(\mathbb {R}^n))\) and the von Neumann algebra of operators on \(L^2(\mathbb {C}^n)\) generated by \(\{ T_f \,:\,f \in L^1(\mathbb {C}^n) \}\). Take now \(f \in C_c(\mathbb {C}^n)\). Then \(\rho (f) \in \mathcal {S}^q(\mathbb {C}^n)\) and $$\begin{aligned} \Vert \rho (f) \Vert _{\mathcal {S}^q(\mathbb {C}^n)}^q = \Vert |\rho (f)|^{q/2} \Vert _{\mathrm{HS}}^2. \end{aligned}$$ Since \(|\rho (f)|^{q/2} \in \mathrm{HS}(L^2(\mathbb {R}^n))\), by the Plancherel theorem for the Weyl transform there exists \(g \in L^2(\mathbb {C}^n)\) such that $$\begin{aligned} \rho (g) = |\rho (f)|^{q/2}. \end{aligned}$$ Since isomorphisms between von Neumann algebras preserve the polar decomposition and the functional calculus, $$\begin{aligned} T_g = |T_f|^{q/2}. \end{aligned}$$ In order to conclude, then, it is enough to show that $$\begin{aligned} \Vert \rho (g)\Vert _{\mathrm{HS}}^2 = \Vert T_g^2\Vert _{L^1(\mathbb {C}^n) \rightarrow L^\infty (\mathbb {C}^n)}. 
\end{aligned}$$ On the other hand, \(T_g = |T_f|^{q/2}\) is a nonnegative self-adjoint operator, so $$\begin{aligned} \Vert T_g^2\Vert _{L^1(\mathbb {C}^n) \rightarrow L^\infty (\mathbb {C}^n)} = \Vert T_g\Vert ^2_{L^1(\mathbb {C}^n) \rightarrow L^2(\mathbb {C}^n)} \end{aligned}$$ and, according to (5.21), \(T_g\) is an integral operator with kernel \({\tilde{K}}_g\) given by $$\begin{aligned} {\tilde{K}}_g(z,w) = g(z-w) \, e^{\pi i \mathfrak {I}({\bar{z}} \cdot w)}, \end{aligned}$$ whence $$\begin{aligned} \Vert T_g\Vert _{L^1(\mathbb {C}^n) \rightarrow L^2(\mathbb {C}^n)} = {{\,\mathrm{ess\,sup}\,}}_{w \in \mathbb {C}^n} \Vert {\tilde{K}}_g(\cdot ,w)\Vert _2 = \Vert g\Vert _2 = \Vert \rho (g)\Vert _{\mathrm{HS}}, \end{aligned}$$ and we are done. \(\square \) Given the above characterisation, the proof of the inequality (5.23) proceeds, much as in Sect. 2, via a "blow-up" argument. The main observation here is that, if \(S_\lambda \) denotes the \(L^1\)-isometric scaling on \(\mathbb {C}^n\), $$\begin{aligned} S_\lambda f(z) = \lambda ^{-2n} f(z/\lambda ), \end{aligned}$$ then $$\begin{aligned} (S_\lambda f) \times (S_\lambda g) = S_\lambda (f \times _\lambda g), \end{aligned}$$ where $$\begin{aligned} f \times _\lambda g(z) = \int _{\mathbb {C}^n} f(z-w) \, g(w) \, e^{\pi i \lambda ^2 \mathfrak {I}({\bar{z}} \cdot w)} \,dw; \end{aligned}$$ moreover, from the above formula it is clear that, as \(\lambda \rightarrow 0\), the scaled twisted convolution \(\times _\lambda \) tends to the standard convolution on \(\mathbb {C}^n \cong \mathbb {R}^{2n}\) (see also [16]). Following this idea, it is not difficult to prove the analogues of Lemma 2.3 and Proposition 2.4, where the twisted convolution \(\times \) and the standard convolution on \(\mathbb {C}^n\) take the place of the convolutions on the Lie group and the Lie algebra respectively. 
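The scaling identity stated above can be checked directly by a change of variables; the following short verification is our addition, not part of the original text. Substituting \(w = \lambda u\) (so \(dw = \lambda^{2n}\,du\) in real dimension \(2n\)) and using \(\mathfrak{I}({\bar z} \cdot \lambda u) = \lambda\, \mathfrak{I}({\bar z} \cdot u) = \lambda^2\, \mathfrak{I}(\overline{z/\lambda} \cdot u)\) for \(\lambda > 0\),

```latex
\begin{aligned}
(S_\lambda f) \times (S_\lambda g)(z)
&= \lambda^{-4n} \int_{\mathbb{C}^n} f\bigl((z-w)/\lambda\bigr)\, g(w/\lambda)\,
   e^{\pi i \mathfrak{I}({\bar z} \cdot w)}\,dw \\
&= \lambda^{-2n} \int_{\mathbb{C}^n} f(z/\lambda - u)\, g(u)\,
   e^{\pi i \lambda \mathfrak{I}({\bar z} \cdot u)}\,du \\
&= \lambda^{-2n} \int_{\mathbb{C}^n} f(z/\lambda - u)\, g(u)\,
   e^{\pi i \lambda^2 \mathfrak{I}(\overline{z/\lambda} \cdot u)}\,du
 = S_\lambda (f \times_\lambda g)(z).
\end{aligned}
```

In particular, as \(\lambda \to 0\) the oscillatory factor tends to \(1\), which is the precise sense in which \(\times_\lambda\) degenerates to the standard convolution.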
In addition, the action of \({\text {U}}(n)\) on functions on \(\mathbb {C}^n\) commutes with the scaling operators \(S_\lambda \) and the twisted convolution, so the analogue of Remark 2.5 applies here. We leave the details to the interested reader. Remark 5.5 Given the noncommutative subject of this paper, it is natural to ask whether the best constants \(H_p(G), H^\mathrm{loc}_p(G),\dots \) are the same in the category of operator spaces (that is, quantized or noncommutative Banach spaces). To be more precise, let us equip the (commutative and noncommutative) \(L^q\)-spaces involved in the corresponding Hausdorff–Young inequality with their natural operator space structures [53]. Does the complete \(L^p \rightarrow L^{p'}\) norm of the Fourier transform coincide with the corresponding norm \(H_p(G)\) in the category of Banach spaces? In the Euclidean case of \(H_p(\mathbb {R}^n)\), Pisier posed this problem to the fourth-named author in 2002, but it is still open. Éric Ricard recently observed that such a result for the Euclidean Fourier transform (that is, that its completely bounded norm is still given by the Babenko–Beckner constant raised to the dimension of the underlying space) would give the expected constants for the Weyl transform in CCR algebras and, therefore, also for the Fourier transform on the Heisenberg group. Unfortunately, Beckner's original strategy crucially uses hypercontractivity, which has recently been proved to fail in the completely bounded setting [5]. In conclusion, the above discussion indicates once more (see Remark 5.3) that some new ideas seem to be necessary to solve these questions.
References
1. Andersson, M.E.: The Hausdorff–Young inequality and Fourier type. Ph.D. thesis, Uppsala (1993)
2. Andersson, M.E.: Local variants of the Hausdorff–Young inequality.
In: Gyllenberg, M., Persson, L.-E. (eds.) Analysis, Algebra, and Computers in Mathematical Research (Luleå, 1992). Lecture Notes in Pure and Applied Mathematics, vol. 156, pp. 25–34. Dekker, New York (1994)
3. Babenko, K.I.: An inequality in the theory of Fourier integrals. Izv. Akad. Nauk SSSR Ser. Mat. 25, 531–542 (1961) (Russian); translated as Am. Math. Soc. Transl. Ser. 2 44, 115–128 (1962)
4. Baklouti, A., Ludwig, J., Scuto, L., Smaoui, K.: Estimate of the \(L^p\)-Fourier transform norm on strong \(\ast \)-regular exponential solvable Lie groups. Acta Math. Sin. (Engl. Ser.) 23, 1173–1188 (2007)
5. Bardet, I., Rouzé, C.: Hypercontractivity and logarithmic Sobolev inequality for nonprimitive quantum Markov semigroups and estimation of decoherence rates. Preprint (2018). arXiv:1803.05379
6. Beckner, W.: Inequalities in Fourier analysis. Ann. Math. (2) 102, 159–182 (1975)
7. Bennett, J., Bez, N., Buschenhenke, S., Cowling, M.G., Flock, T.C.: On the nonlinear Brascamp–Lieb inequality. Preprint (2018). arXiv:1811.11052v1
8. Bergh, J., Löfström, J.: Interpolation Spaces. An Introduction. Springer, Berlin (1976)
9. Brascamp, H.J., Lieb, E.H.: Best constants in Young's inequality, its converse, and its generalization to more than three functions. Adv. Math. 20, 151–173 (1976)
10. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext. Springer, New York (2011)
11. Bröcker, T., tom Dieck, T.: Representations of Compact Lie Groups. Graduate Texts in Mathematics, vol. 98. Springer, New York (1985)
12. Carcano, G.: A commutativity condition for algebras of invariant functions. Boll. Un. Mat. Ital. B 7(1), 1091–1105 (1987)
13.
Caspers, M.: The \(L^p\)-Fourier transform on locally compact quantum groups. J. Oper. Theory 69, 161–193 (2013)
14. Connes, A.: On the spatial theory of von Neumann algebras. J. Funct. Anal. 35, 153–164 (1980)
15. Cooney, T.: A Hausdorff–Young inequality for locally compact quantum groups. Int. J. Math. 21, 1619–1632 (2010)
16. Cowling, M.G.: A remark on twisted convolution. In: Proceedings of the Seminar on Harmonic Analysis (Pisa, 1980), Rend. Circ. Mat. Palermo (2), pp. 203–209 (1981)
17. Daws, M.: Representing multipliers of the Fourier algebra on non-commutative \(L^p\) spaces. Can. J. Math. 63, 798–825 (2011)
18. Dixmier, J.: Formes linéaires sur un anneau d'opérateurs. Bull. Soc. Math. Fr. 81, 9–39 (1953)
19. Duflo, M., Moore, C.C.: On the regular representation of a nonunimodular locally compact group. J. Funct. Anal. 21, 209–243 (1976)
20. Eymard, P.: L'algèbre de Fourier d'un groupe localement compact. Bull. Soc. Math. Fr. 92, 181–236 (1964)
21. Eymard, P., Terp, M.: La transformation de Fourier et son inverse sur le groupe des \(ax+b\) d'un corps local. In: Eymard, P., Faraut, J., Schiffmann, G., Takahashi, R. (eds.) Analyse harmonique sur les groupes de Lie (Sém. Nancy-Strasbourg 1976–1978), II. Lecture Notes in Mathematics, vol. 739, pp. 207–248. Springer, Berlin (1979)
22. Folland, G.B.: Harmonic Analysis in Phase Space. Annals of Mathematics Studies, vol. 122. Princeton University Press, Princeton (1989)
23. Folland, G.B.: A Course in Abstract Harmonic Analysis. Studies in Advanced Mathematics. CRC Press, Boca Raton (1995)
24. Forrest, B.E., Lee, H.H., Samei, E.: Projectivity of modules over Fourier algebras. Proc. Lond. Math. Soc.
(3) 102, 697–730 (2011)
25. Fournier, J.J.F.: Sharpness in Young's inequality for convolution. Pac. J. Math. 72, 383–397 (1977)
26. Führ, H.: Abstract Harmonic Analysis of Continuous Wavelet Transforms. Lecture Notes in Mathematics, vol. 1863. Springer, Berlin (2005)
27. Führ, H.: Hausdorff–Young inequalities for group extensions. Can. Math. Bull. 49, 549–559 (2006)
28. García-Cuerva, J., Marco, J.M., Parcet, J.: Sharp Fourier type and cotype with respect to compact semisimple Lie groups. Trans. Am. Math. Soc. 355, 3591–3609 (2003)
29. García-Cuerva, J., Parcet, J.: Vector-valued Hausdorff–Young inequality on compact groups. Proc. Lond. Math. Soc. (3) 88, 796–816 (2004)
30. Hilsum, M.: Les espaces \(L^p\) d'une algèbre de von Neumann définies par la derivée spatiale. J. Funct. Anal. 40, 151–169 (1981)
31. Hytönen, T., van Neerven, J., Veraar, M., Weis, L.: Analysis in Banach Spaces. Vol. I. Martingales and Littlewood–Paley Theory. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 63. Springer, Cham (2016)
32. Inoue, J.: \(L^p\)-Fourier transforms on nilpotent Lie groups and solvable Lie groups acting on Siegel domains. Pac. J. Math. 155, 295–318 (1992)
33. Izumi, H.: Constructions of non-commutative \(L^p\)-spaces with a complex parameter arising from modular actions. Int. J. Math. 8, 1029–1066 (1997)
34. Izumi, H.: Natural bilinear forms, natural sesquilinear forms and the associated duality on non-commutative \(L^p\)-spaces. Int. J. Math. 9, 975–1039 (1998)
35. Kamaly, A.: A new local variant of the Hausdorff–Young inequality. In: Ramírez de Arellano, E., Shapiro, M.V., Tover, L.M., Vasilevski, N.L. (eds.)
Complex Analysis and Related Topics. Operator Theory: Advances and Applications, vol. 114, pp. 107–130. Birkhäuser, Basel (2000)
36. Kenig, C.E., Stanton, R.J., Tomas, P.A.: Divergence of eigenfunction expansions. J. Funct. Anal. 46, 28–44 (1982)
37. Kirillov, A.A.: Merits and demerits of the orbit method. Bull. (N.S.) Am. Math. Soc. 36, 433–488 (1999)
38. Klein, A., Russo, B.: Sharp inequalities for Weyl operators and Heisenberg groups. Math. Ann. 235, 175–194 (1978)
39. Kleppner, A., Lipsman, R.L.: The Plancherel formula for group extensions. I. Ann. Sci. École Norm. Sup. (4) 5, 459–516 (1972)
40. Kleppner, A., Lipsman, R.L.: The Plancherel formula for group extensions. II. Ann. Sci. École Norm. Sup. (4) 6, 103–132 (1973)
41. Knapp, A.W.: Lie Groups Beyond an Introduction. Progress in Mathematics, vol. 140. Birkhäuser, Boston (2002)
42. Kosaki, H.: Applications of the complex interpolation method to a von Neumann algebra: noncommutative \(L^p\)-spaces. J. Funct. Anal. 56, 29–78 (1984)
43. Kunze, R.A.: \(L^p\) Fourier transforms on locally compact unimodular groups. Trans. Am. Math. Soc. 89, 519–540 (1958)
44. Lebedev, N.N.: Special Functions and Their Applications. Revised edition. Translated from the Russian and edited by R.A. Silverman. Dover Publications, New York (1972)
45. Leptin, H.: Sur l'algèbre de Fourier d'un groupe localement compact. C. R. Acad. Sci. Paris Sér. A 266, 1180–1182 (1968)
46. Lieb, E.H.: Gaussian kernels have only Gaussian maximizers. Invent. Math. 102, 179–208 (1990)
47. Lipsman, R.L.: Non-abelian Fourier analysis. Bull. Sci. Math. (2) 98, 209–233 (1974)
48.
Markett, C.: Mean Cesàro summability of Laguerre expansions and norm estimates with shifted parameter. Anal. Math. 8, 19–37 (1982)
49. Martini, A.: Joint functional calculi and a sharp multiplier theorem for the Kohn Laplacian on spheres. Math. Z. 286, 1539–1574 (2017)
50. Mautner, F.: Unitary representations of locally compact groups. II. Ann. Math. (2) 52, 528–556 (1950)
51. Mitjagin, B.S.: Divergenz von Spektralentwicklungen in \(L^p\)-Räumen. In: Butzer, P.L., Sz.-Nagy, B. (eds.) Linear Operators and Approximation, II (Proc. Conf., Oberwolfach Math. Res. Inst., Oberwolfach, 1974), pp. 521–530. Birkhäuser, Basel (1974)
52. Parcet, J.: A local Hausdorff–Young inequality on the classical compact Lie groups and related topics. In: Ying, L.M. (ed.) Focus on Group Theory Research, pp. 1–25. Nova Sci. Publ., New York (2006)
53. Pisier, G.: Introduction to Operator Space Theory. London Mathematical Society Lecture Note Series, vol. 294. Cambridge University Press, Cambridge (2003)
54. Pisier, G., Xu, Q.: Non-commutative \(L^p\)-spaces. In: Johnson, W.B., Lindenstrauss, J. (eds.) Handbook of the Geometry of Banach Spaces, vol. 2, pp. 1459–1517. North-Holland, Amsterdam (2003)
55. Ricard, É., Xu, Q.: Complex interpolation of weighted noncommutative \(L_p\)-spaces. Houst. J. Math. 37, 1165–1179 (2011)
56. Russo, B.: The norm of the \(L^p\)-Fourier transform on unimodular groups. Trans. Am. Math. Soc. 192, 293–305 (1974)
57. Russo, B.: On the Hausdorff–Young theorem for integral operators. Pac. J. Math. 68, 241–253 (1977)
58. Segal, I.E.: An extension of Plancherel's formula to separable unimodular groups. Ann. Math. (2) 52, 272–292 (1950)
59.
Segal, I.E.: A non-commutative extension of abstract integration. Ann. Math. (2) 57, 401–457 (1953)
60. Siebert, E.: Contractive automorphisms on locally compact groups. Math. Z. 191, 73–90 (1986)
61. Sjölin, P.: A remark on the Hausdorff–Young inequality. Proc. Am. Math. Soc. 123, 3085–3088 (1995)
62. Tatsuuma, N.: Plancherel formula for non-unimodular locally compact groups. J. Math. Kyoto Univ. 12, 179–261 (1972)
63. Terp, M.: \(L^p\) Fourier transformation on non-unimodular locally compact groups. Adv. Oper. Theory 2, 547–583 (2017). Originally appeared in Matematisk Institut København, Preprint Series 11 (1980)
64. Terp, M.: Interpolation spaces between a von Neumann algebra and its predual. J. Oper. Theory 8, 327–360 (1982)
65. Thangavelu, S.: Harmonic Analysis on the Heisenberg Group. Progress in Mathematics, vol. 159. Birkhäuser, Boston (1998)
66. Triebel, H.: Interpolation Theory, Function Spaces, Differential Operators. North-Holland Mathematical Library, vol. 18. North-Holland Publishing Co., Amsterdam (1978)
67. Weidmann, J.: Linear Operators in Hilbert Spaces. Graduate Texts in Mathematics, vol. 68. Springer, New York (1980)
Copyright information: © The Author(s) 2019. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors and Affiliations: Michael G.
Cowling (School of Mathematics and Statistics, University of New South Wales, Sydney, Australia), Alessio Martini (School of Mathematics, University of Birmingham, Birmingham, UK; corresponding author), Detlef Müller (Mathematisches Seminar, Christian-Albrechts-Universität zu Kiel, Kiel, Germany), Javier Parcet (Instituto de Ciencias Matemáticas, Consejo Superior de Investigaciones Científicas, Madrid, Spain).
Michael G. Cowling, Alessio Martini, Detlef Müller, Javier Parcet. The Hausdorff–Young inequality on Lie groups, Mathematische Annalen, 2019, 1–39, DOI: 10.1007/s00208-018-01799-9
Equilibrium states for non-uniformly hyperbolic systems: Statistical properties and analyticity
Suzete Maria Afonso 1, Vanessa Ramos 2,3, and Jaqueline Siqueira 2,3
1. Universidade Estadual Paulista (UNESP), Instituto de Geociências e Ciências Exatas, Câmpus de Rio Claro, Avenida 24-A, 1515, Bela Vista, Rio Claro, São Paulo, 13506-900, Brazil
2. Centro de Ciências Exatas e Tecnologia - UFMA, Av. dos Portugueses, 1966, Bacanga, 65080-805, São Luís, Brazil
3. Instituto de Matemática - Universidade Federal do Rio de Janeiro, Av. Athos da Silveira Ramos 149, Cidade Universitária, Ilha do Fundão, Rio de Janeiro 21945-909, Brazil
* Corresponding author: [email protected]
Received July 2020; Revised December 2020; Published September 2021; Early access March 2021
Fund Project: The authors were supported by grant 2017/08732-1, São Paulo Research Foundation (FAPESP). VR was supported by grant Universal-01309/17, FAPEMA-Brazil. JS was supported by grant Universal-430154/2018-6, CNPq-Brazil.
Abstract: We consider a wide family of non-uniformly expanding maps and hyperbolic Hölder continuous potentials. We prove that the unique equilibrium state associated to each element of this family is given by the eigenfunction of the transfer operator and the eigenmeasure of the dual operator (both having the spectral radius as eigenvalue). We show that the transfer operator has the spectral gap property in some space of Hölder continuous observables, and from this we obtain an exponential decay of correlations and a central limit theorem for the equilibrium state. Moreover, we establish the analyticity with respect to the potential of the equilibrium state as well as that of other thermodynamic quantities. Furthermore, we derive similar results for the equilibrium state associated to a family of non-uniformly hyperbolic skew products and hyperbolic Hölder continuous potentials.
Keywords: Equilibrium states, non-uniform hyperbolicity, analyticity, limit theorems, partial hyperbolicity.
Mathematics Subject Classification: Primary: 37A05, 37A30; Secondary: 37A35, 37D35.
Citation: Suzete Maria Afonso, Vanessa Ramos, Jaqueline Siqueira. Equilibrium states for non-uniformly hyperbolic systems: Statistical properties and analyticity. Discrete & Continuous Dynamical Systems, 2021, 41 (9): 4485-4513. doi: 10.3934/dcds.2021045
Figure 1. Hyperbolic pre-balls
Does a reaction have to have a rate determining step?

I am a bit confused about the concept of the rate determining step. From what I understand, a step in a reaction is the RDS if it meets the following requirements:
- It is the slowest step in the reaction
- It causes a bottleneck in the progress of the reaction
Therefore, if there is a catalyst, the slowest step in the reaction will no longer be the slowest step, because the reaction can take place faster through a different mechanism involving the catalyst. Would it then be of any significance to say that a particular step in this reaction, say the step with the highest activation energy, is the RDS, or is there no point in saying there is a rate determining step, i.e., that it does not exist? Is it possible to say that the concept of the RDS has no significance in some reactions? — amiliya

Comment (Nicolau Saker Neto, Dec 21 '15 at 21:16): Here's a related question.
Comment (MaxW, Dec 21 '15 at 21:40): There are so many chemical reactions that any scenario is possible. Consider mixing two liquids, or dissolving a solid in a liquid. If the reaction is complete within that sort of time scale then the RDS is pretty insignificant. In aqueous solutions, acid–base neutralizations are such reactions.

Answer:
Any kind of chemical reaction can be modelled as a (complex) network of elementary reaction steps. For instance, if we model the conversion of R to P via the intermediates $\ce{I^1}$, $\ce{I^2}$, and $\ce{I^3}$, we have the following kinetic network: $$\ce{R <=> I^1 <=> I^2 <=> I^3 <=> P}$$ Note that, on the basis of microscopic reversibility, all these reactions can go both ways, and the boundary conditions of the system (pressure, temperature and chemical composition) determine in which direction and at what rate each reaction occurs. To answer your question: there exists a formal way of identifying the rate-determining step.
If we look at the overall reaction rate $r$, then we can define a concept called the degree of rate control: $$ \chi_{i} = \left( \frac{\partial \ln r}{\partial \ln k_{i}} \right)_{k_{j \ne i}, K_{i}} $$ Here, $\chi_{i}$ is the degree of rate control (DRC) coefficient, which indicates the extent to which an elementary reaction step controls the overall rate, and $k_{i}$ is the rate constant of the particular elementary step you want to calculate the DRC coefficient for. Importantly, in this calculation you hold constant the rate constants of all the other steps as well as the equilibrium constant of the step you are looking at (in other words, you only change the height of the reaction barrier). When this coefficient is 1 and all other elementary steps have a coefficient of 0, that elementary step is the rate-determining step. Interestingly, when investigating a lot of chemical systems, it turns out that only roughly half of them have a single rate-determining step. Many systems (especially in catalysis) have multiple rate-controlling steps and even rate-inhibiting steps (where the DRC coefficient is less than zero). I will not go too much in depth, but there exists a complete mathematical treatment and derivation of this theory showing that the sum of all coefficients has to be unity. This means that a chemical system should always have either a single rate-controlling step (the RDS) or multiple rate-controlling steps and possibly one or more rate-inhibiting steps. To conclude: not every overall chemical reaction has a single RDS, but every reaction has one or more rate-controlling steps. Notable reference: the above theory was proposed by Campbell and co-authors in J. Am. Chem. Soc. 2009, 131 (23), 8077–8082. DOI:10.1021/ja9000097.
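The definition above can be probed numerically. Below is a minimal sketch (all rate constants invented for illustration) for a two-step mechanism A ⇌ B → C treated at steady state: each DRC coefficient is obtained by scaling a step's forward and backward rate constants together (keeping its equilibrium constant fixed) and taking a finite-difference log-derivative of the rate.

```python
import math

# Hypothetical two-step mechanism: A <=> B (k1 forward, km1 backward),
# then B -> C (k2, irreversible). With B at steady state, the overall rate is
# r = k1 * k2 * [A] / (km1 + k2). All numbers are invented for illustration.
def rate(k1, km1, k2, A=1.0):
    return k1 * k2 * A / (km1 + k2)

def drc(step, k1, km1, k2, eps=1e-6):
    """Finite-difference degree-of-rate-control coefficient.
    A step's forward and backward constants are scaled together, which keeps
    its equilibrium constant fixed, as the definition requires."""
    hi, lo = 1.0 + eps, 1.0 - eps
    if step == 1:                       # scale k1 and km1 together (K1 fixed)
        r_hi, r_lo = rate(k1 * hi, km1 * hi, k2), rate(k1 * lo, km1 * lo, k2)
    else:                               # step 2 is irreversible: scale k2 alone
        r_hi, r_lo = rate(k1, km1, k2 * hi), rate(k1, km1, k2 * lo)
    return (math.log(r_hi) - math.log(r_lo)) / (math.log(hi) - math.log(lo))

chi1 = drc(1, k1=1.0, km1=10.0, k2=1.0)   # fast pre-equilibrium step
chi2 = drc(2, k1=1.0, km1=10.0, k2=1.0)   # slow consuming step
```

For these constants, χ1 ≈ 0.09 and χ2 ≈ 0.91: the second step carries nearly all of the rate control, and the two coefficients sum to one, as the theory demands.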
Ivo Filot

Simple answer: No, there is no requirement for a reaction to have a rate determining step, though kinetic experiments show that the vast majority of reactions do have one.

Longer answer: Most reactions have a reaction pathway that involves one or more intermediates (one or more 'steps'). In these cases, one of the steps is likely (though not necessarily) to be slower than the others, leading to the concept of a rate determining step (RDS). The RDS is, as you say, a bottleneck to the reaction. If you imagine the reaction... $$\ce{SM -> I -> P}$$ ...where $\ce{I -> P}$ is the rate determining step, then no matter how fast you convert $\ce{SM -> I}$, the $\ce{I}$ will simply accumulate until it reacts to form $\ce{P}$, and hence the second 'step' is said to be rate limiting. This can be derived mathematically, but as you are asking the question I assume you have yet to reach that topic. In essence, the rate constant ($k$) for $\ce{I -> P}$ dominates the kinetics, making the rate constant for $\ce{SM -> I}$ irrelevant. This simplification is incredibly useful in physical organic chemistry: if we know the rate limiting step, we can think of ways to increase the overall reaction rate by speeding up the slowest step on the reaction pathway. Catalysts lower the activation energy of a step, allowing it to proceed faster, and in some cases this means the original rate determining step is no longer rate determining and some other step along the pathway is. Catalysts may act in a variety of ways, one of which is a change in mechanism (i.e. providing an alternative reaction pathway). When this happens, it is not really valid to keep using the kinetic argument developed for the non-catalytic variant of the reaction, and a new rate equation (and hence a new rate limiting step) should be derived, if this is important.
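The accumulation argument can be made concrete with a quick numerical integration (illustrative rate constants only, chosen so that the second step is much slower than the first):

```python
# Euler integration of SM -> I -> P with an (invented) fast first step and a
# slow, rate-determining second step: the intermediate I visibly piles up.
k1, k2 = 10.0, 0.1          # hypothetical first-order rate constants, s^-1
sm, i, p = 1.0, 0.0, 0.0    # start with pure starting material
dt, t_end = 1e-3, 5.0
i_peak = 0.0
for _ in range(int(t_end / dt)):
    r1 = k1 * sm            # SM -> I (fast)
    r2 = k2 * i             # I -> P  (slow: the bottleneck)
    sm -= r1 * dt
    i += (r1 - r2) * dt
    p += r2 * dt
    i_peak = max(i_peak, i)
```

Almost all of the starting material is consumed within the first second, yet most of it is still parked as I at t = 5 s: the overall appearance of P is governed by k2 alone, which is exactly what 'rate determining' means.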
Of course, it is extremely time-consuming to measure these kinetic parameters, and even when measured it can be difficult to interpret the data for complex reactions. When we discuss the 'rate determining step' as practising organic chemists, we are often generalising from similar systems whose kinetics have been studied. There are, however, reactions that occur in a single, concerted step, such as pericyclic reactions. All of the bond making and breaking occurs via a single transition state, with no intermediates formed, and as such there is no RDS (everything occurs in one step). A famous example of this is the Diels–Alder cycloaddition.

NotEvans.

$\begingroup$ Nice answer; I'll upvote tomorrow (used up all my votes already ^^') But technically, if a reaction is single-step then that step is rate-determining by definition, isn't it? Yes, I'm being nitpicky ;) $\endgroup$ – Jan Dec 21 '15 at 22:08

$\begingroup$ I did originally write 'yes or no, depending on how you look at it', but I thought I was being more philosophical at that point. But yes, I do agree with you: if there's one step, it is, by definition, rate limiting. ^_^ $\endgroup$ – NotEvans. Dec 21 '15 at 22:10

In addition to the contributions of other people: usually there is at least one bottleneck on the reaction path, even in the case of catalysis. However, it is possible for reactions to form a network with cross-influence. A well-known, widely recognized case is the Belousov–Zhabotinsky reaction, where intermediates accumulate and, once a critical concentration is reached, proceed down the path together in a big burst, followed by a new accumulation period. In the case of heterogeneous catalysts, a reaction can still have a simple path with no unusual behaviour, yet the reaction rate may be determined not by any chemical interaction but by diffusion of reagents towards the surface. This is the case for steam reforming of methane.

permeakra
An algebraic approach for decoding spread codes

Elisa Gorla (Institut de Mathématiques, Université de Neuchâtel, Rue Emile-Argand 11, 2000 Neuchâtel, Switzerland), Felice Manganiello (Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada) and Joachim Rosenthal (Institut für Mathematik, Universität Zürich, 8057 Zürich, Switzerland)

Advances in Mathematics of Communications, November 2012, 6 (4): 443–466. doi: 10.3934/amc.2012.6.443. Received September 2011; revised June 2012; published November 2012.

In this paper we study spread codes: a family of constant-dimension codes for random linear network coding. In other words, the codewords are full-rank matrices of size $k\times n$ with entries in a finite field $\mathbb F_q$. Spread codes are a family of optimal codes with maximal minimum distance. We give a minimum-distance decoding algorithm which requires $\mathcal{O}((n-k)k^3)$ operations over an extension field $\mathbb F_{q^k}$. Our algorithm is more efficient than the previous ones in the literature when the dimension $k$ of the codewords is small with respect to $n$. The decoding algorithm takes advantage of the algebraic structure of the code, and it uses original results on minors of a matrix and on the factorization of polynomials over finite fields.

Keywords: spread codes, decoding algorithm, random linear network coding. Mathematics Subject Classification: 11T7.
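As a side note to the abstract above: the metric underlying constant-dimension codes is the subspace distance, d_S(U, V) = dim(U + V) − dim(U ∩ V) = 2·rank([U; V]) − rank(U) − rank(V). The toy sketch below (not the paper's decoder) computes it over a prime field with plain Gaussian elimination; a spread partitions the ambient space into pairwise disjoint k-dimensional subspaces, so distinct codewords attain the maximal distance 2k.

```python
# Toy subspace-distance computation over GF(p), p prime (NOT the paper's
# O((n-k)k^3) decoder; just the metric its codes are designed for).
def rank_gfp(rows, p):
    """Rank over GF(p) by Gaussian elimination with modular inverses."""
    m = [[x % p for x in r] for r in rows]
    rank, col, nrows, ncols = 0, 0, len(m), len(m[0])
    while rank < nrows and col < ncols:
        piv = next((r for r in range(rank, nrows) if m[r][col]), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)            # Fermat inverse, p prime
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(nrows):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank

def subspace_distance(U, V, p):
    """d_S(U,V) = dim(U+V) - dim(U∩V) = 2*rank([U;V]) - rank(U) - rank(V)."""
    return 2 * rank_gfp(U + V, p) - rank_gfp(U, p) - rank_gfp(V, p)

# Two disjoint 2-dimensional subspaces of GF(2)^4, as in a spread:
U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[0, 0, 1, 0], [0, 0, 0, 1]]
```

Distinct subspaces in a spread intersect trivially, so every codeword pair attains the maximum distance 2k (here 4), which is the optimality the abstract refers to.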
Paranoia and belief updating during the COVID-19 crisis

Praveen Suthaharan, Erin J. Reed, Pantelis Leptourgos, Joshua G. Kenney, Stefan Uddenberg, Christoph D. Mathys, Leib Litman, Jonathan Robinson, Aaron J. Moss, Jane R. Taylor, Stephanie M. Groman & Philip R. Corlett

Nature Human Behaviour, volume 5, pages 1190–1202 (2021)

The COVID-19 pandemic has made the world seem less predictable. Such crises can lead people to feel that others are a threat. Here, we show that the initial phase of the pandemic in 2020 increased individuals' paranoia and made their belief updating more erratic. A proactive lockdown made people's belief updating less capricious. However, state-mandated mask-wearing increased paranoia and induced more erratic behaviour.
This was most evident in states where adherence to mask-wearing rules was poor but where rule following is typically more common. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable. People who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines and the QAnon conspiracy theories. These beliefs were associated with erratic task behaviour and changed priors. Taken together, we found that real-world uncertainty increases paranoia and influences laboratory task behaviour. Crises, from terrorist attacks1 to viral pandemics, are fertile grounds for paranoia2, the belief that others bear malicious intent towards us. Paranoia may be driven by altered social inferences3 or by domain-general mechanisms for processing uncertainty4,5. The COVID-19 pandemic increased real-world uncertainty and provided an unprecedented opportunity to track the impact of an unfolding crisis on human beliefs. We examined self-rated paranoia6 alongside social and non-social belief updating in computer-based tasks (Fig. 1a) spanning three time periods: before the pandemic lockdown; during lockdown; and into reopening. We further explored the impact of state-level pandemic responses on beliefs and behaviour. We hypothesized that paranoia would increase during the pandemic, perhaps driven by the need to explain and understand real-world volatility1. Furthermore, we expected that real-world volatility would change individuals' sensitivity to task-based volatility, causing them to update their beliefs in a computerized task accordingly5. Finally, since different states responded more or less vigorously to the pandemic and the residents of those states complied with those policies differently, we expected that efforts to quell the pandemic would change perceived real-world volatility and thus paranoid ideation and task-based belief updating. We did not preregister our experiments. 
Our interests evolved as the pandemic did. We chose to continue gathering data on participants' belief updating and leverage publicly available data in an effort to explore and explain the differences we observed.

Fig. 1: Probabilistic reversal learning and hierarchical Gaussian filter. Depictions of our behavioural tasks and computational model used to ascertain belief-updating behaviour. a, Non-social and social task stimuli and reward contingency schedule. b, Hierarchical model for capturing changes in beliefs under task environment volatility.

Relating paranoia to task-derived belief updating

We administered a probabilistic reversal learning task. Participants chose between options with different reward probabilities to learn the best option (Fig. 1b)7. The best option changed and, part way through the task, the underlying probabilities became more difficult to distinguish, increasing unexpected uncertainty and blurring the distinction between probabilistic errors and errors that signified a shift in the underlying contingencies. Participants were forewarned that the best option may change but not when or how often7. Hence, the task assayed belief formation and updating under uncertainty7. The challenge was to harbour beliefs that are robust to noise but sensitive to real contingency changes7. Before the pandemic, people who were more paranoid (scoring in the clinical range on standard scales6,8) were more likely to switch their choices between options, even after positive feedback5. We compared those data (gathered via the Amazon Mechanical Turk Marketplace in the USA between December 2017 and August 2018; Supplementary Table 1) to a new task version with identical contingencies but framed socially (Fig. 1a). Instead of selecting between decks of cards ('non-social task'), participants (Supplementary Table 1) chose between three potential collaborators who might increase or decrease their score.
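The win-switch and lose-stay summary statistics used throughout can be computed directly from a choice/feedback sequence; a small sketch with made-up trials:

```python
# Summary statistics from a reversal-learning session (made-up trials).
def ws_ls_rates(choices, rewards):
    """Win-switch: rewarded trial followed by a different choice.
    Lose-stay: unrewarded trial followed by the same choice."""
    wins = losses = win_switch = lose_stay = 0
    for t in range(len(choices) - 1):
        if rewards[t]:
            wins += 1
            win_switch += choices[t + 1] != choices[t]
        else:
            losses += 1
            lose_stay += choices[t + 1] == choices[t]
    return (win_switch / wins if wins else 0.0,
            lose_stay / losses if losses else 0.0)

choices = [0, 0, 1, 1, 1, 2, 2, 0]   # which deck/collaborator was picked
rewards = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = positive feedback
ws, ls = ws_ls_rates(choices, rewards)
```

A high win-switch rate (abandoning an option even after it just paid off) is the erratic signature the paper associates with paranoia.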
These data were gathered during January 2020, before the World Health Organization declared a global pandemic. Participants with higher paranoia switched more frequently than participants with low paranoia after receiving positive feedback in both; however, there were no substantial behavioural differences between tasks (Supplementary Fig. 2a; win-switch rate: F(1, 198) = 0.918, P = 0.339, ηp2 = 0.0009, BF10 = 1.07, anecdotal evidence for the null hypothesis of no difference between tasks; lose-stay rate: F(1, 198) = 3.121, P = 0.08, ηp2 = 0.002, BF10 = 3.24, moderate evidence for the alternative hypothesis of a difference between tasks; Supplementary Fig. 2b). There were also no differences in points (BF10 = 0.163, strong evidence for the null hypothesis) or reversals achieved (BF10 = 0.210, strong evidence for the null hypothesis) between social and non-social tasks.

Probabilistic reversal learning involves decision-making under uncertainty. The reasons for decisions may not be manifest in simple counts of choices or errors. By modelling participants' choices, we could estimate latent processes9. We supposed that they continually updated a probabilistic representation of the task (a generative model), which guided their behaviour10,11. To estimate their generative models, we identified: (1) a set of prior assumptions about how events are caused by the environment (the perceptual model); and (2) the behavioural consequences of their posterior beliefs about options and outcomes (the response model10,11). Inverting the response model also entailed inverting the perceptual model and yielded a mapping from task cues to the beliefs that caused participants' responses10,11 (Fig. 1b). The perceptual model (Fig.
1b) consists of three hierarchical layers of belief about the task, represented as probability distributions that encode belief content and uncertainty: (1) reward belief (what was the outcome?); (2) contingency beliefs (what are the current values of the options (decks/collaborators)?); and (3) volatility beliefs (how do option values change over time?). Each layer updates the layer above it in light of evolving experiences, which engender prediction errors and drive learning proportionally to current variance. Each belief layer has an initial mean 𝝁0, which for simplicity we refer to as the prior belief, although strictly speaking the prior belief is the Gaussian distribution with mean 𝝁0 and variance σ0. 𝝎2 and 𝝎3 encode the evolution rate of the environment at the corresponding level (contingencies and volatility). Higher values imply a more rapid tonic level of change. The higher the expected uncertainty (that is, 'I expect variable outcomes'), the less surprising an atypical outcome may be and the less it drives belief updates ('this variation is normal'). 𝜿 captures sensitivity to perceived phasic or unexpected changes in the task and underwrites perceived change in the underlying statistics of the environment (that is, 'the world is changing'), which may call for more wholesale belief revision. The layers of beliefs are fed through a sigmoid response function (Fig. 1b). We made the response model temperature inversely proportional to participants' volatility belief—rendering decisions more stochastic under higher perceived volatility. Using this model we previously demonstrated identical belief-updating deficits in paranoid humans and rats administered methamphetamine5 and that this model better captures participants' responses compared to standard reinforcement learning models5, including models that weight positive and negative prediction errors differently12. For ω3 (evolution rate of volatility) we observed a main effect of group (Fig. 
2; F(1, 198) = 4.447, P = 0.036, ηp2 = 0.014) and block (F(1, 198) = 38.89, P < 0.001, ηp2 = 0.064) but no effect of task or three-way interaction. Likewise, we found group and block effects for µ30, the volatility prior (group: F(1, 198) = 8.566, P = 0.004, ηp2 = 0.035; block: F(1, 198) = 161.845, P < 0.001, ηp2 = 0.11), and κ, the expected uncertainty learning rate (group: F(1, 198) = 21.45, P < 0.001, ηp2 = 0.08; block: F(1, 198) = 30.281, P < 0.001, ηp2 = 0.031), but no effect of task or three-way interactions. We found a group effect (F(1, 198) = 12.986, P < 0.001, ηp2 = 0.053) but no task, block or interaction effects on ω2, the evolution rate of reward contingencies. Thus, we observed an impact of paranoia on behaviour and model parameters that did not differ by the social or non-social framing of the task. People with higher paranoia expected more volatility and reward initially, had a higher learning rate for unexpected events but slower learning from expected uncertainty and reward, regardless of whether they were learning about cards or people.

Fig. 2: Prepandemic (n = 202) social and non-social reversal learning. a, Non-social task (n = 72), volatility beliefs, coupling and contingency beliefs. b, Social task (n = 130), volatility beliefs, coupling and contingency beliefs. In both tasks, high-paranoia subjects exhibited elevated priors for volatility (𝝁30; group: F(1, 198) = 8.566, P = 0.004, ηp2 = 0.035; block: F(1, 198) = 161.845, P < 0.001, ηp2 = 0.11) and contingency (𝝁20; block: F(1, 198) = 36.58, P < 0.001, ηp2 = 0.042), were slower to update those beliefs (𝝎2; group: F(1, 198) = 12.986, P < 0.001, ηp2 = 0.053; 𝝎3; group: F(1, 198) = 4.447, P = 0.036, ηp2 = 0.014; block: F(1, 198) = 38.89, P < 0.001, ηp2 = 0.064) and had higher coupling (𝜿; group: F(1, 198) = 21.45, P < 0.001, ηp2 = 0.08; block: F(1, 198) = 30.281, P < 0.001, ηp2 = 0.031) between volatility and contingency beliefs.
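To give a flavour of the model class, here is a deliberately simplified two-level sketch loosely patterned on the hierarchical Gaussian filter (NOT the authors' fitted three-level model; all parameter values are invented): a value belief is updated by a precision-weighted prediction error whose learning rate grows with the current volatility belief, and the volatility belief itself is nudged upward by surprising outcomes.

```python
import math

# Simplified two-level sketch, loosely patterned on the HGF (illustrative only).
# mu2: belief about an option's value (log-odds); mu3: volatility belief.
def update(mu2, mu3, outcome, kappa=1.0, omega2=-2.0, omega3=-6.0):
    predicted = 1 / (1 + math.exp(-mu2))     # predicted win probability
    pe2 = outcome - predicted                # low-level prediction error
    v = math.exp(kappa * mu3 + omega2)       # step variance, set by volatility
    lr = v / (v + 1.0)                       # precision-weighted learning rate
    mu2_new = mu2 + lr * pe2                 # value update scales with volatility
    pe3 = pe2 ** 2 - math.exp(omega3)        # surprise in excess of expectation
    mu3_new = mu3 + 0.05 * kappa * pe3       # volatility drifts up when surprised
    return mu2_new, mu3_new

mu2, mu3 = 0.0, 0.0
trace = []
# Four wins then four losses: a crude stand-in for a contingency reversal.
for outcome in [1, 1, 1, 1, 0, 0, 0, 0]:
    mu2, mu3 = update(mu2, mu3, outcome)
    trace.append(mu2)
```

After the reversal, the value belief falls back toward zero while the volatility belief has grown; a higher prior on the volatility level (as in the high-paranoia group) yields larger learning rates and hence more erratic, switch-prone choices.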
The centre horizontal lines within the plots represent the median values, the boxes span from the 25th to the 75th percentile and the whiskers extend to 1.5× the interquartile range.

How the evolving pandemic impacted paranoia and belief updating

After the pandemic was declared, we continued to acquire data on both tasks (19 March 2020–17 July 2020; Supplementary Tables 2 and 3). We examined the impact of real-world uncertainty on belief updating in a computerized task. The onset of the pandemic was associated with increased self-reported paranoia from January 2020 through the lockdown, peaking during reopening (Fig. 3a; F(2, 530) = 14.7, P < 0.001, ηP2 = 0.053). Anxiety increased (Supplementary Fig. 1; F(2, 529) = 4.51, P = 0.011, ηP2 = 0.017) but the change was less pronounced than paranoia, suggesting a particular impact of the pandemic on beliefs about others.

Fig. 3: Paranoia, state proactivity, task behaviour and belief updating during a pandemic. Paranoia increased as the pandemic progressed (F(2, 530) = 14.7, P < 0.001, ηP2 = 0.053). a, Self-rated paranoia (n = 533) before the pandemic, during lockdown and after reopening. b, We observed a main effect of the pandemic period (F(2, 527) = 4.948, P = 0.007, ηP2 = 0.018) and a state proactivity by period interaction (F(2, 527) = 4.785, P = 0.009, ηP2 = 0.018) for paranoia, and for win-switch behaviour (main effect: F(2, 527) = 3.270, P = 0.039, ηP2 = 0.012; interaction: F(2, 527) = 8.747, P < 0.001, ηP2 = 0.032) and volatility priors (F(2, 527) = 8.623, P = 0.001, ηP2 = 0.032). We observed significant interactions between pandemic period and the proactivity of policies. The centre horizontal lines within the plots represent the median values, the boxes span from the 25th to the 75th percentile and the whiskers extend to 1.5× the interquartile range.
In the USA, states responded differently to the pandemic; some instituted lockdowns early and broadly (more proactive), whereas others closed later and reopened sooner (less proactive) (equation (1) and Supplementary Fig. 3). When they reopened, some states mandated mask-wearing (more proactive) while others did not (less proactive). We conducted exploratory analyses to discern the impact of lockdown and reopening policies on task performance and belief updating. We observed a main effect of the pandemic period (Fig. 3b; F(2, 527) = 4.948, P = 0.007, ηP2 = 0.018) and a state proactivity by period interaction (Fig. 3b; F(2, 527) = 4.785, P = 0.009, ηP2 = 0.018) for paranoia and win-switch behaviour (Fig. 3b; main effect: F(2, 527) = 3.270, P = 0.039, ηP2 = 0.012; interaction: F(2, 527) = 8.747, P < 0.001, ηP2 = 0.032) and volatility priors (Fig. 3b; F(2, 527) = 8.623, P = 0.001, ηP2 = 0.032). Early in the pandemic, vigorous lockdown policies (closing early, extensively and remaining closed) were associated with less paranoia (Fig. 3b; t227 = 2.57, P = 0.011, Cohen's d = 0.334, 95% confidence interval (CI) = 0.071–0.539), less erratic win-switching (Fig. 3b; t216 = 2.73, P = 0.007, Cohen's d = 0.351, 95% CI = 0.019–0.117) and weaker initial beliefs about task volatility (Fig. 3b; t217 = 4.22, P < 0.001, Cohen's d = 0.561, 95% CI = 0.401–1.10) compared to participants in states that imposed a less vigorous lockdown. At reopening, paranoia was highest and participants' task behaviour was most erratic in states that mandated mask-wearing (Fig. 3b; t67 = −2.39, P = 0.02, Cohen's d = 0.483, 95% CI = −0.164 to −0.015). Furthermore, participants in mandate states had higher contamination fear (Supplementary Fig. 4; t101 = −2.89, P = 0.005, Cohen's d = 0.471, 95% CI = −0.655 to −0.121). None of the other pandemic or policy effects on parameters (priors or learning rates) survived false discovery rate (FDR) correction for multiple comparisons. 
Therefore, we carried win-switch rates and initial beliefs (or priors) about volatility into subsequent analyses. We asked participants in the social task to rate whether or not they believed that the avatars had deliberately sabotaged them. Reopening was associated with an increase in self-reported sabotage beliefs (Fig. 4a; t145 = −2.35, P = 0.02, Cohen's d = 0.349, 95% CI = −1.114 to −0.096). There were no significant main effects or interactions. Given the effects of the pandemic and policies on paranoia and task behaviour, we explored the impact of lockdown policy on behaviour in the social task, specifically. Self-rated paranoia in the real world correlated with sabotage belief in the task (Fig. 4b; r = 0.4, P < 0.001). During lockdown, when proactive state responses were associated with decreased self-rated paranoia, win-switch rate (t216 = 2.73, P = 0.014, Cohen's d = 0.351, 95% CI = 0.019–0.117) and 𝝁30 (t223 = 4.20, P < 0.001, Cohen's d = 4.299, 95% CI = 0–1.647) were significantly lower in participants from states with more vigorous lockdown (Fig. 4b). As paranoia increased with the pandemic, so did task-derived sabotage beliefs about the avatars. Participants in states that locked down more vigorously engaged in less erratic task behaviour and had weaker initial volatility beliefs. Fig. 4: Sabotage belief and the effects of lockdown (social task; n = 280). a, Sabotage belief, the conviction that an avatar-partner deliberately caused a loss in points, increased from prelockdown to reopening (t145 = −2.35, P = 0.02, Cohen's d = 0.349, 95% CI = −1.114 to −0.096). b, Self-rated paranoia in the real world correlated with sabotage belief in the task (Fig. 3b; r = 0.4, P < 0.001). 
During lockdown, when proactive state responses were associated with decreased self-rated paranoia, the win-switch rate (t216 = 2.73, P = 0.014, Cohen's d = 0.351, 95% CI = 0.019–0.117) and μ3(0) (t223 = 4.20, P < 0.001, Cohen's d = 4.299, 95% CI = 0–1.647) were significantly lower in participants from states with more vigorous lockdown. Analysis was performed on individuals who responded to the sabotage question. The centre horizontal lines within the plots represent the median values, the boxes span from the 25th to the 75th percentile and the whiskers extend to 1.5× the interquartile range. The grey shaded region in b (leftmost graph) represents the 95% confidence interval for predictions from a linear model. Paranoia is induced by mask-wearing policies Following a quasi-experimental approach to causal inferences (developed in econometrics and recently extended to behavioural and cognitive neuroscience13), we pursued an exploratory difference-in-differences (DiD) analysis (following equation (2)) to discern the effects of state mask-wearing policy on paranoia. A DiD design compares changes in outcomes before and after a given policy takes effect in one area to changes in the same outcomes in another area that did not introduce the policy14 (Supplementary Fig. 5). The data must be longitudinal but they need not follow the same participants14. It is essential to demonstrate that, before implementation, the areas adopting different policies are matched in terms of the trends in the variable being compared (parallel trends assumption). Using the pretreatment outcomes, we cannot reject the null hypothesis that pretreatment trends of the treated and control states developed in parallel (λ = −0.1, P = 0.334). This increases our confidence that the parallel trends assumption also holds in the treatment period. However, such analyses are not robust to baseline demographic differences between treatment groups15.
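In its simplest two-by-two form, the DiD estimator subtracts the control group's pre-to-post change from the treated group's pre-to-post change. The study's actual analysis follows the regression in equation (2); the group-means sketch below (names are ours) is only illustrative:

```python
from statistics import mean

def did_estimate(scores):
    """Two-by-two difference-in-differences from group means.

    scores: dict mapping (treated: bool, post: bool) -> list of outcome values.
    Returns (treated post - treated pre) - (control post - control pre).
    """
    treated_change = mean(scores[(True, True)]) - mean(scores[(True, False)])
    control_change = mean(scores[(False, True)]) - mean(scores[(False, False)])
    return treated_change - control_change
```

The control group's change stands in for what would have happened to the treated group without the policy, which is why the parallel trends assumption is essential: the subtraction is only meaningful if both groups were on the same trajectory beforehand.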
Before pursuing such an analysis, it is important to establish parity between the two comparator locations16 so that any differences can be more clearly ascribed to the policy that was implemented. We believe such parity applies in our case. First, there were no significant differences at baseline in the number of cases or deaths in states that went on to mandate versus recommend mask-wearing (cases, t10 = −1.22, P = 0.25, BF10 = 2.3, anecdotal evidence for null hypothesis; deaths, t10 = −1.14, P = 0.28, BF10 = 2.02, anecdotal evidence for null hypothesis). Furthermore, paranoia is held to flourish during periods of economic inequality17. There were no baseline differences in unemployment rates in April (before the mask policy onset) between states that mandated masks versus states that recommended mask-wearing (t16 = −0.81, P = 0.43, BF10 = 0.42, anecdotal evidence for null hypothesis). We employed a between-participant design, so it is important to establish that there were no demographic differences (age, sex, ethnicity) in participants from states that mandated versus participants from states that recommended mask-wearing (age, t = −1.46, d.f. = 42.5, P = 0.15, BF10 = 0.105, anecdotal evidence for null hypothesis; sex, χ2 = 0.37, d.f. = 1, P = 0.54, BF10 = 0.11, anecdotal evidence for null hypothesis; ethnicity, Fisher's exact test for count data, P = 0.21, BF10 = 0.105, anecdotal evidence for null hypothesis). On these bases, we chose to proceed with the DiD analysis. We implemented a non-parametric cluster bootstrap procedure, which is theoretically robust to heteroscedasticity and arbitrary patterns of error correlation within clusters, and to variation in error processes across clusters18. The procedure reassigns entire states to either treatment or control and recalculates the treatment effect in each reassigned sample, generating a randomization distribution. 
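The resampling procedure described above can be sketched as cluster-level randomization inference: whole states are repeatedly reassigned to treatment or control, the effect is recomputed for each reassignment, and the p-value is the share of reassigned effects at least as extreme as the observed one. A minimal sketch under our own simplifying assumptions (the names and the difference-of-means effect function are ours, not the study's estimator):

```python
import random

def diff_means(treated, state_scores):
    """Example effect: mean outcome in treated states minus mean in the rest."""
    t = [v for s in treated for v in state_scores[s]]
    c = [v for s in state_scores if s not in treated for v in state_scores[s]]
    return sum(t) / len(t) - sum(c) / len(c)

def cluster_randomization_p(state_scores, treated_states, effect_fn=diff_means,
                            n_draws=4000, seed=0):
    """Randomization p-value with whole states as the unit of reassignment.

    state_scores  : dict state -> list of participant outcomes
    treated_states: set of states actually under the policy
    """
    rng = random.Random(seed)
    states = sorted(state_scores)
    observed = effect_fn(treated_states, state_scores)
    hits = 0
    for _ in range(n_draws):
        fake = set(rng.sample(states, len(treated_states)))  # reassign entire states
        if abs(effect_fn(fake, state_scores)) >= abs(observed):
            hits += 1
    return hits / n_draws
```

Reassigning at the state level, rather than the participant level, preserves whatever error correlation exists within each state, which is what makes the procedure robust to within-cluster dependence.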
Mandated mask-wearing was associated with an estimated 40% increase in paranoia (δDiD = 0.396, P = 0.038) relative to states where mask-wearing was recommended but not required (Fig. 5a and Supplementary Fig. 5). This increase in paranoia was mirrored as significantly higher win-switch rates in participant task performance (Fig. 5b; t67 = −2.4, P = 0.039, Cohen's d = 0.483, 95% CI = −0.164 to −0.015) as well as stronger volatility priors (Fig. 5b; t141 = −3.7, P < 0.001, Cohen's d = −3.739, 95% CI = 0 to −1.585). The imposition of a mask mandate appears to have increased paranoia. Fig. 5: Effects of mask policy on paranoia and belief updating. We observed a significant increase in paranoia and perceived volatility, especially in states that issued a state-wide mask mandate. a, Map of the US states colour-coded to their respective mask policy (nrec = 40, nreq = 11) and a DiD analysis (bottom: n = 533, δDiD = 0.396, P = 0.038) of mask rules suggested a 40% increase in paranoia in states that mandated mask-wearing. b, Win-switch rate (left: n = 172, nrec = 120, nreq = 52, t67 = −2.4, P = 0.039, Cohen's d = 0.483, 95% CI = −0.164 to −0.015) and volatility belief (middle: t141 = −3.7, P < 0.001, Cohen's d = −3.739, 95% CI = 0 to −1.585) were higher in mask-mandating states but more protests per day occurred in mask-recommended states (right: n = 110, nrec = 55, nreq = 55, t83 = 3.10, P = 0.0027, Cohen's d = 0.591, 95% CI = 17.458–80.142). c, Effects of CTL in mask-recommended states (left: n = 120, nloose = 38, ntight = 82; t57 = 3.06, P = 0.003, Cohen's d = 0.663, 95% CI = 0.022–0.107) and mask-required states (right: n = 52, nloose = 48, ntight = 4; t47 = 12.84, P < 0.001, Cohen's d = 1.911, 95% CI = 0.064–0.088), implicating violation of social norms in the genesis of paranoia.
d, Follow-up study (n = 405, nlow = 314, nhigh = 91) illustrating that participants with high paranoia are less inclined to wear masks in public (left: t158 = 4.59, P < 0.001, Cohen's d = 0.520, 95% CI = 0.091–0.229), have more promiscuous switching behaviour (middle: t138 = −6.40, P < 0.001, Cohen's d = 1.148, 95% CI = −0.227 to −0.120) and elevated prior beliefs about volatility (right: t138 = −6.04, P < 0.001, Cohen's d = −6.041, 95% CI = 0 to −2.067). In b and d, the centre horizontal lines within the plots represent the median values, the boxes span from the 25th to the 75th percentile and the whiskers extend to 1.5× the interquartile range. Variation in rule following relates to paranoia To unpack the DiD result, we further explored whether any other features might illuminate the variation in paranoia by local mask policy19. There are state-level cultural differences, measured by the index of cultural tightness and looseness (CTL)19, with regard to rule following and tolerance for deviance. Tighter states have more rules and tolerate less deviance, whereas looser states have few strongly enforced rules and greater tolerance for deviance19. This index also provides a proxy for state politics. Tighter states tend to vote Republican; looser states tend towards the Democrats19. Since 2020 was a politically tumultuous time and the pandemic was politicized, we thought it prudent to incorporate politics into our analyses. We also tried to assess whether people were following the mask-wearing rules. We acquired independent survey data gathered in the USA from 250,000 respondents who, between 2 and 14 July, were asked: How often do you wear a mask in public when you expect to be within six feet of another person?20 These data were used to compute an estimated frequency of mask-wearing in each state during the reopening period (Fig. 5c).
We found that among states that mandated mask-wearing, mask-wearing was lowest in culturally tighter states (t47 = 12.84, P < 0.001, Cohen's d = 1.911, 95% CI = 0.064–0.088). Furthermore, even in states where mask-wearing was only recommended, mask-wearing was lowest in culturally tighter states (t57 = 3.06, P = 0.003, Cohen's d = 0.663, 95% CI = 0.022–0.107). Through backward linear regression with removal (equation (3)), we fitted a series of models attempting to predict individuals' self-rated paranoia from features of their environment, including whether they were subject to a mask mandate, the cultural tightness of their state, state-level mask-wearing and coronavirus cases in their state. In the best-fitting model (F(11, 160) = 1.91, P = 0.04) there was a significant three-way interaction between mandate, state tightness and perceived mask-wearing (t24 = −2.4, P = 0.018). Paranoia was highest in mandate-state participants living in areas that were culturally tighter, where fewer people were wearing masks (Fig. 6 and Supplementary Table 4). Taken together, our DiD and regression analyses imply that mask-wearing mandates and their violation, particularly in places that value rule following, may have increased paranoia and erratic task behaviour. Alternatively, the mandate may have increased paranoia in culturally conservative states, culminating in less mask-wearing. Fig. 6: Predicting paranoia from pandemic features. Regression model predictions (n = 172) in states where masks were recommended (left) versus mandated (right). Paranoia predictions are based on estimated state mask-wearing (x axis, low to high mask-wearing) and cultural tightness. Red: loose states that do not prize conformity. Blue: states with median tightness. Green: tight states that are conservative and rule-following. Paranoia is highest when mask-wearing is low in culturally tight states with a mask-wearing mandate (F(11, 160) = 1.91, P = 0.04).
Values represent high, median and low estimated state tightness. The shaded regions represent the 95% confidence interval. Paranoia relates to beliefs about mask-wearing In a follow-up study, we attempted a conceptual replication, recruiting a further 405 participants (19 March 2020–17 July 2020; Supplementary Table 4), polling their paranoia and their attitudes toward mask-wearing, and capturing their belief updating under uncertainty with the probabilistic reversal learning task. Individuals with high paranoia were more reluctant to wear masks and reported wearing them significantly less (Fig. 5d; t158 = 4.59, P < 0.001, Cohen's d = 0.520, 95% CI = 0.091–0.229). Again, the win-switch rate was significantly higher in individuals with high paranoia (Fig. 5d; t138 = −6.40, P < 0.001, Cohen's d = 1.148, 95% CI = −0.227 to −0.120), as was their prior belief about volatility (Fig. 5d; t138 = −6.04, P < 0.001, Cohen's d = −6.041, 95% CI = 0 to −2.067), confirming the links between paranoia, mask hesitancy, erratic task behaviour and expected volatility that our DiD analysis suggested. Our data across the initial study and replication imply that paranoia flourishes when the attitudes of individuals conflict with what they are being instructed to do, particularly in areas where rule following is more common: paranoia may be driven by a fear of social reprisals for one's anti-mask attitudes. Sabotage beliefs in the non-social task Our domain-general account of paranoia5 suggests that performance on the non-social task should be related to paranoia, which we observed previously5 and presently. In the same follow-up study (Supplementary Table 5) we asked participants to complete the non-social probabilistic reversal learning task and, at completion, to rate their belief that the inanimate non-social card decks were sabotaging them. Participants' self-rated paranoia correlated with their belief that the cards were sabotaging them (Supplementary Fig.
6; r = 0.47, P < 0.001), which is consistent with reports that people with paranoid delusions imbue moving polygons with nefarious intentions21. Other changes coincident with the onset of mask policies In addition to the pandemic, other events have increased unrest and uncertainty, notably the protests after the murder of George Floyd. These protests began on 24 May 2020 and continue, occurring in every US state. To explore the possibility that these events were contributing to our results, we compared the number of protest events in mandate and recommended states in the months before and after reopening. There were significantly more protests per day from 24 May through to 31 July 2020 in mask-recommended versus mask-mandating states (t83 = 3.10, P = 0.0027, Cohen's d = 0.591, 95% CI = 17.458–80.142). This suggests that the effect of mask mandates we observed was not driven by the coincidence of protests and reopening. Protests were less frequent in states whose participants had higher paranoia (Fig. 5b). Furthermore, there were no significant differences in cases (t12 = −1.45, P = 0.17, BF10 = 1.63, anecdotal evidence for null hypothesis) or deaths (t11 = −1.64, P = 0.13, BF10 = 6.21, moderate evidence for alternative hypothesis) at reopening in mask-mandating versus mask-recommended states. We compared the change in unemployment from lockdown to reopening in mask-mandating versus mask-recommended states and found no significant difference (t17 = −1.85, P = 0.08, BF10 = 1.04, anecdotal evidence for null hypothesis).
We found no differences in demographic variables across our study periods (prepandemic, lockdown, reopening, sex: F(2, 523) = 0.341, P = 0.856, ηP2 = 0.001, BF10 = 0.03, strong evidence for null hypothesis; age: F(2, 522) = 2.301, P = 0.404, ηP2 = 0.009, BF10 = 0.19, moderate evidence for null hypothesis; ethnicity: F(2, 520) = 1.10, P = 0.856, ηP2 = 0.004, BF10 = 0.06, strong evidence for null hypothesis; education: F(2,530) = 0.611, P = 0.856, ηP2 = 0.002, BF10 = 0.04, strong evidence for null hypothesis; employment: F(2,529) = 0.156, P = 0.856, ηP2 = 0.0006, BF10 = 0.03, strong evidence for null hypothesis; income: F(2,523) = 1.31, P = 0.856, ηP2 = 0.005, BF10 = 0.08, strong evidence for null hypothesis; medication: F(2,408) = 0.266, P = 0.856, ηP2 = 0.001, BF10 = 0.04, strong evidence for null hypothesis; mental and neurological health: F(2, 418) = 3.36, P = 0.288, ηP2 = 0.016, BF10 = 0.620, anecdotal evidence for null hypothesis; Supplementary Fig. 7). Given that the effects we describe depend on geographical location, we confirm that the proportions of participants recruited from each state did not differ across our study periods (χ2 = 6.63, d.f. = 6, P = 0.34, BF10 = 0.16, moderate evidence for null hypothesis; Supplementary Fig. 8). Finally, to assuage concerns that the participant pool changed as the result of the pandemic, published analyses confirm that it did not22. Furthermore, in collaboration with CloudResearch23, we ascertained location data spanning our study periods from 7,293 experiments comprising 2.5 million participants. The distributions of participants across states match those we recruited and the mean proportion of participants in a state across all studies in the pool for each period correlates significantly with the proportion of participants in each state in the data we acquired for each period: prepandemic, r = 0.76, P = 2.2 × 10−8; lockdown, r = 0.78, P = 5.8 × 10−9; reopening, r = 0.81, P = 8.5 × 10−10 (Supplementary Fig. 7). 
Thus, we did not, by chance, recruit more participants from mask-mandating states or tighter states, for example. Furthermore, focusing on the data that went into the DiD, there were no demographic differences pre- (age, P = 0.65, BF10 = 0.14, moderate evidence for the null hypothesis; sex, P = 0.77, BF10 = 0.13, moderate evidence for the null hypothesis; ethnicity, P = 0.34, BF10 = 0.20, moderate evidence for the null hypothesis) versus postreopening (age, P = 0.57, BF10 = 0.21, moderate evidence for the null hypothesis; sex, P = 0.77, BF10 = 0.19, moderate evidence for the null hypothesis; ethnicity, P = 0.07, BF10 = 0.55, anecdotal evidence for the null hypothesis) for mask-mandating versus mask-recommended states. Taken together with our task and self-report results, these control analyses increase our confidence that during reopening, people were most paranoid in the presence of rules and perceived rule breaking, particularly in states where people usually tend to follow the rules. Paranoia versus conspiracy theorizing While correlated, paranoia and conspiracy beliefs are not synonymous24. Therefore, we also assessed conspiracy beliefs about a potential COVID vaccine in the follow-up study (Supplementary Table 5). We found that conspiracy beliefs about a vaccine correlated significantly with paranoia (Fig. 7a; r = 0.61, P < 0.001) and that such beliefs were associated with erratic task behaviour (Fig. 7b; win-switch rate: r = 0.44, P < 0.001) and perturbed volatility priors (Fig. 7c; r = 0.34, P < 0.001) in an identical manner to mask concerns and paranoia more broadly. In the UK, early in the pandemic, conspiracy theorizing was associated with higher paranoia and less adherence to public health countermeasures25. We replicated and extended those findings to the USA and provided mechanistic suggestions centred on domain-general belief-updating mechanisms: priors on volatility and learning rates. Fig. 
7: Relating vaccine conspiracy beliefs to paranoia and task behaviour. We assayed the COVID-19 vaccine conspiracy beliefs of individuals (n = 403) to investigate the underlying relationships to behaviour. a, Individuals with higher paranoia endorsed more vaccine conspiracies (r = 0.61, P < 0.001). b, COVID conspiracy beliefs were correlated with erratic task behaviour (r = 0.44, P < 0.001). c, COVID conspiracy beliefs were also correlated with perturbed volatility priors (r = 0.34, P < 0.001). Analysis was performed on individuals who responded to COVID vaccine conspiracy questions. The shaded regions represent the 95% confidence interval. To further address how politics might have contributed to our results, we gathered more data in September 2020 (Supplementary Table 5). We assessed participants' performance on the probabilistic reversal learning task and we also asked them to rate their belief in the QAnon conspiracy theory. QAnon is a right-wing conspiracy theory concerned with the machinations of the deep state, prominent left-wing politicians and Hollywood entertainers. Its adherents believe that those individuals and organizations are engaged in child trafficking and murder, for the purposes of extracting and consuming the adrenochrome from the children's brains. They believe Donald Trump is part of a plan with the army to arrest and indict politicians and entertainers. We found that people who identified as Republican had a stronger belief in QAnon. QAnon belief and paranoia more broadly were highly correlated (Fig. 8a; r = 0.5, P < 0.001). Furthermore, QAnon belief correlated with COVID conspiracy theorizing (r = 0.5, P < 0.001). Finally, QAnon endorsement correlated with win-switch behaviour (Fig. 8b; r = 0.44, P < 0.001) and volatility belief (Fig. 8c; r = 0.31, P < 0.001), just like paranoia. Supplementary Fig. 9 depicts the effect of political party affiliation on QAnon belief, paranoia, win-switch behaviour and volatility belief.
People who identified as Republican were more likely to endorse the QAnon conspiracy, attested to more paranoia, evinced more win-switching and had stronger initial beliefs about task volatility. Taken together, our data suggest that personal politics, local policies and local political climate all contributed to paranoia and aberrant belief updating. Fig. 8: Relating QAnon beliefs to paranoia and task behaviour. a, Individuals (n = 307) with higher paranoia endorsed more QAnon beliefs (r = 0.5, P < 0.001). b,c, Similarly, QAnon beliefs were strongly correlated with erratic task behaviour (r = 0.44, P < 0.001) (b) and perturbed volatility priors (r = 0.31, P < 0.001) (c). Analysis was performed on individuals who responded to the QAnon questions. The shaded regions represent the 95% confidence interval. The COVID-19 pandemic has been associated with increased paranoia. The increase was less pronounced in states that enforced a more proactive lockdown and more pronounced at reopening in states that mandated mask-wearing. Win-switch behaviour and volatility priors tracked these changes in paranoia with policy. We explored cultural variations in rule following (CTL19) as a possible contributor to the increased paranoia that we observed. State tightness may originate in response to threats such as natural disasters, disease, territorial and ideological conflict19. Tighter states typically evince more coordinated threat responses19. They have also experienced greater mortality from pneumonia and influenza throughout their history19. However, paranoia was highest in tight states with a mandate, with lower mask adherence during reopening. It may be that societies that adhere rigidly to rules are less able to adapt to unpredictable change. Alternatively, these societies may prioritize protection from ideological and economic threats over a public health crisis or perhaps view the disease burden as less threatening. 
Our exploratory analyses suggest that mandating mask-wearing may have caused paranoia to increase, altering participants' expected volatility in the tasks (μ3(0)). Follow-up exploratory analyses suggested that in culturally tighter states with a mask mandate, those rules were being followed less (fewer people were wearing masks), which was associated with greater paranoia. Violations of social norms engender prediction errors26 that have been implicated in paranoia in the laboratory4,27,28,29. Mask-wearing is a collective action problem, wherein most people are conditional co-operators, generally willing to act in the collective interest as long as they perceive sufficient reciprocation by others30. Perceiving others refusing to follow the rules and failing to proffer reciprocal protection appears to have contributed to the increase in paranoia we observed. Indeed, paranoia, a belief in others' nefarious intentions, also correlated with reluctance to wear a mask and with endorsement of vaccine conspiracy theories. Finally, people who do not want to abide by the mask-wearing rules might be paranoid about being caught violating those rules. The 2020 election in the USA politicized pandemic countermeasures. In follow-up studies conducted in September 2020, we found that paranoia correlated with endorsement of the far-right QAnon conspiracy theory, as did task-related prior beliefs about volatility. We suggest that the rise of this conspiracy theory was driven by the volatility that people experienced in their everyday lives during the pandemic, a link that has long been theorized by historians. In this study, we present behavioural evidence for a connection between real-world volatility, conspiracy theorizing, paranoia and hesitant attitudes towards pandemic countermeasures. Evidence relating real-world uncertainty to paranoia and conspiracy theorizing has, thus far, been somewhat anecdotal and largely historical.
For example, during the Black Death, the conspiratorial antisemitic belief that Jewish people were poisoning wells and causing the pandemic was sadly extremely common17. The acquired immune deficiency syndrome (AIDS) epidemic was associated with a number of conspiracies related to public health measures, but less directly. For example, people believed that human immunodeficiency virus was created through the polio vaccination programme in Africa31. More broadly, the early phases of the AIDS epidemic were associated with heightened paranoia concerning homosexuals and intravenous drug users32. Perhaps the closest relative to our mask mandate result involves seat belt laws33. Like masks in a viral pandemic, seat belts are (and continue to be) extremely effective at preventing serious injury and death in road traffic accidents34. However, the introduction of state laws prescribing that they should be worn was associated with public outcry33. People were concerned about the imposition on their freedom33. They complained that seat belts were particularly dangerous when cars accidentally entered bodies of water. The evidence shows that seat belt wearing, like mask-wearing, is not associated with excess fatalities. Paranoia is, by definition, a social concern. It must be undergirded by inferences about social features. Our data suggest that paranoia increases greatly when social rules are broken, particularly in cultures where rule following is valued. However, we do not believe this is license to conclude that domain-specific coalitional mechanisms underwrite paranoia, as some have argued3. Rather, our data show that both social and non-social inferences under uncertainty (particularly prior beliefs about volatility) are similarly related to paranoia. Further, they are similarly altered by real-world volatility, rules and rule-breaking. We suggest that some social inferences are instantiated by domain-general mechanisms5,35.
Our follow-up study demonstrating that people imputed nefarious intentions to the decidedly inanimate card decks tends to support this conclusion (Supplementary Fig. 6). We suggest this finding is consistent with previous reports that people with persecutory delusions tend to evince an intentional bias towards animated polygons21. More broadly, paranoia often relates to domain-general belief-updating biases36 and thence to domain-specific social effects37. Indeed, when tasks have both social and non-social components, there are often no differences in the weightings of these components between patients with schizophrenia and controls38,39. However, we cannot make definitive claims about the domain-general nature of paranoia. Although our social task was not preferentially related to paranoia, it may be that it was not social enough. There are clearly domain-specific social mechanisms40. We should examine the relationships between paranoia and these more definitively social tasks and will do so in future. While we independently (and multiply) replicated the associations between concerns about pandemic-mitigating interventions, paranoia and task behaviour, and showed that our results are not driven by other real-world events or issues with our sampling, there are several important caveats to our conclusions. We did not preregister our experiments, predictions or analyses. Nor did we run a within-subject study through the pandemic periods. Our DiD analysis should be considered exploratory. DiD analyses require longitudinal but not necessarily within-subject or panel data14. Our DiD analysis supports some tentative causal claims despite being based on between-subject data14. Mask-recommended states were culturally tighter, although of course cultural tightness did not change during the course of our study. Tightness interacted with mandate and adherence to mask-wearing policy (Fig. 6).
The baseline difference in tightness would have worked against the effects we observed, not in their favour. Indeed, our multiple regression analysis found no evidence for an effect of tightness on paranoia in states without a mask mandate (Fig. 6). Critically, we do not know if any participant, or anyone close to them, was infected by COVID-19, so our work cannot speak to the more direct effects of infection. There are of course other factors that changed as a result of the pandemic. Unemployment increased dramatically, although not significantly more in mandate states. Historically, conspiracies peak not only during uncertainty but also during periods of marked economic inequality17. Internet searches for conspiracy topics increase with unemployment41. The patterns of behaviour we observed may have also been driven by economic uncertainty, although our data militate against this interpretation somewhat since Gini coefficients42 (a metric of income inequality) did not differ between mandate and recommend states (t19 = −1.60, P = 0.13). Finally, our work is based entirely in the USA. In future work, we will expand our scope internationally. Cultural features43 and pandemic responses vary across nations. This variance should be fertile grounds in which to replicate and extend our findings. We highlight the impact that societal volatility and local cultural and policy differences have on individual cognition. This may have contributed to past failures to replicate in psychological research. If replication attempts were conducted under different economic, political or social conditions (for example, bull versus bear markets), then they may yield different results, not because of inadequacy of the theory or experiment but because participants' behaviour was being modulated by heretofore underappreciated stable and volatile local cultural features. 
Per predictive processing theories4, paranoia increased with increases in real-world volatility, as did task-based volatility priors. Those effects were moderated by government responses. On the one hand, proactive leadership mollified paranoia during lockdown by tempering expectations of volatility. On the other hand, mask mandates enhanced paranoia during reopening by imposing a rule that was often violated. These findings may help guide responses to future crises. All experiments were conducted at the Connecticut Mental Health Center in strict accordance with the requirements of Yale University's Human Investigation Committee, which provided ethical review and exemption approval (no. 2000026290). Written informed consent was provided by all research participants. A total of 1,010 participants were recruited online via CloudResearch, an online research platform that integrates with Mechanical Turk while providing additional security for recruitment23. Sample sizes were determined based on our previous work with this task, platform and computational modelling approach. Two studies were conducted to investigate paranoia and belief updating: a pandemic study and a replication study. Participants were randomized to one of two task versions (Behavioural tasks section). Participants were compensated with USD$6 for completion and a bonus of USD$2 if they scored in the top 10% of all respondents. Pandemic study A total of 605 participants were recruited and divided into 202 prelockdown participants, 231 lockdown participants and 172 reopening participants. Of the 202, we included the 72 (16 with high paranoia) participants who completed the non-social task (described in a previous publication5). The paranoia of those participants was self-rated with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II) paranoid trait questions, which are strongly overlapping and correlated with the Green et al. scale6.
We recruited 130 (20 with high paranoia) participants who completed the social task. Similarly, of the 231, we recruited 119 (20 with high paranoia) and 112 (30 with high paranoia) participants who completed the non-social and social tasks, respectively. Lastly, of the 172, we recruited 93 (35 with high paranoia) and 79 (35 with high paranoia) participants who completed the non-social and social tasks, respectively. In addition to CloudResearch's safeguard from bot submissions, we implemented the same study advertisement, submission review, approval and bonusing as described in our previous study5. We excluded a total of 163 submissions: 18 from prelockdown (social only), 34 from lockdown (non-social and social) and 111 from reopening (non-social and social). Of the 18, 17 were excluded based on incomplete/nonsensical free-response submissions and 1 for insufficient questionnaire completion. Of the 34, 29 were excluded based on incomplete/nonsensical free-response submissions and 5 for insufficient questionnaire completion. Of the 111, all were excluded based on incomplete/nonsensical free-response submissions. Submissions with grossly incorrect completion codes were rejected without further review. Replication study We collected a total of 405 participants, of whom 314 had low paranoia and 91 had high paranoia. Similar exclusion and inclusion criteria were applied for recruitment; most notably, we leveraged CloudResearch's newly added Data Quality feature, which only allows vetted high-quality participants (individuals who passed their screening measures) into our study. This systematically removed low-quality participants from our sample pool. Behavioural tasks Participants completed a three-option probabilistic reversal learning task with a non-social (card deck) or social (partner) domain frame. For the non-social domain frame, 3 decks of cards were presented for 160 trials, divided evenly into 4 blocks.
Each deck contained different amounts of winning (+100) and losing (−50) cards. Participants were instructed to find the best deck and earn as many points as possible. It was also noted that the best deck could change11. For the social domain frame, 3 avatars were presented for 160 trials, divided evenly into 4 blocks. Participants were advised to imagine themselves as students at a university working with classmates to complete a group project, where some classmates were known to be unreliable—showing up late, failing to complete their work, getting distracted for personal reasons—or to deliberately sabotage their work. Each avatar represented either a helpful (+100) or a hurtful (−50) partner. We instructed participants to select an avatar (or partner) to work with to gain as many points as possible towards their group project. Like the non-social domain frame, they were instructed that the best partner could change. For both tasks, the contingencies began as 90% reward, 50% reward and 10% reward with the allocation across deck/partner switching after 9 out of 10 consecutive rewards. At the end of the second block, unbeknown to the participants, the underlying contingencies transitioned to 80% reward, 40% reward and 20% reward, making it more difficult to discern whether a loss of points was due to normal variations (probabilistic noise) or whether the best option had changed. After task completion, questionnaires were administered via Qualtrics. We queried demographic information (age, sex, educational attainment, ethnicity) and mental health questions (past or present diagnosis, medication use), SCID-II8, Beck's Anxiety Inventory44, Beck's Depression Inventory45, Dimensional Obsessive-Compulsive Scale46, and critically, the revised Green et al. Paranoid Thoughts Scale6, which separates clinically from non-clinically paranoid individuals based on the receiver operating characteristic curve-recommended cut-off score of 11.
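The contingency schedule above (90%/50%/10% reward transitioning to 80%/40%/20%, with the best option reversing after 9 of the last 10 consecutive rewards) can be sketched as a small simulation. This is an illustrative sketch only: the random-choice policy, the block length, and the exact reversal bookkeeping are our assumptions, not the authors' implementation.

```python
import random

def simulate_block(probs, n_trials=40, seed=0):
    """Sketch of one block of the three-option probabilistic reversal task.

    probs: reward probability per deck/partner, e.g. [0.9, 0.5, 0.1].
    Wins pay +100 points, losses cost 50; when the best option has been
    chosen and rewarded on 9 of the last 10 trials, the contingencies
    rotate (a stand-in for the paper's reversal rule).
    """
    rng = random.Random(seed)
    recent = []          # 1 if the best option was chosen and rewarded
    points = 0
    for _ in range(n_trials):
        choice = rng.randrange(3)            # placeholder participant policy
        win = rng.random() < probs[choice]
        points += 100 if win else -50
        best = probs.index(max(probs))
        recent = (recent + [1 if (win and choice == best) else 0])[-10:]
        if sum(recent) >= 9:                 # 9 of the last 10: reverse
            probs = probs[1:] + probs[:1]
            recent = []
    return points
```

A harder-to-detect second half of the task corresponds to calling this with `[0.8, 0.4, 0.2]`, where probabilistic losses from the best option become more frequent.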
We also polled participants' beliefs about the social task (Did any of the partners deliberately sabotage you?) on a Likert scale from 'Definitely not' to 'Definitely yes'. We later added the same item for the non-social task (Did you feel as though the decks were tricking you?) to investigate sabotage belief differences between tasks (Supplementary Fig. 6). In a follow-up study, we adopted a survey47 that investigated individual US consumers' mask attitude and behaviour and a survey25 of COVID-19 conspiracies. The 9-item mask questionnaire was used for our study to calculate mask attitude (values < 0 indicate attitude against mask-wearing and values > 0 indicate attitude in favour of mask-wearing) to identify group differences in paranoia. To compute an individual's coronavirus vaccine conspiracy belief, we aggregated 5 vaccine-related questions from the 48-item coronavirus conspiracy questionnaire: (1) the coronavirus vaccine will contain microchips to control people; (2) coronavirus was created to force everyone to get vaccinated; (3) the vaccine will be used to carry out mass sterilization; (4) the coronavirus is bait to scare the whole globe into accepting a vaccine that will introduce the 'real' deadly virus; (5) the World Health Organization already has a vaccine and is withholding it. We adopted a 7-point scale: (1) strongly disagree; (2) disagree; (3) somewhat disagree; (4) neutral; (5) somewhat agree; (6) agree; and (7) strongly agree. A higher score indicates greater endorsement of a question. To measure beliefs about the QAnon conspiracy, we used a questionnaire that polled respondents' political attitudes48, in particular towards QAnon. Along with the task and questionnaire data, we examined state-level unemployment rates49, confirmed COVID-19 cases50 and mask-wearing20 in the USA. For unemployment, the Carsey School of Public Policy reported unemployment rates for the months of February, April, May and June in 2020.
We utilized the rates in April and June as our markers to measure the difference in unemployment between the prepandemic and pandemic periods, respectively. For confirmed cases, the New York Times has published cumulative counts of coronavirus cases since January 2020. Similarly, at the request of the New York Times, Dynata—a research firm—conducted interviews on mask use across the USA and obtained a sample of 250,000 survey respondents between 2 and 14 July20. Each participant was asked: How often do you wear a mask in public when you expect to be within six feet of another person? The answer choices to the question included never, rarely, sometimes, frequently and always. Mask policies According to the Philadelphia Inquirer (https://fusion.inquirer.com/health/coronavirus/covid-19-coronavirus-face-masks-infection-rates-20200624.html), 11 states mandated mask-wearing in public at the time of our reopening data collection: California, New Mexico, Michigan, Illinois, New York, Massachusetts, Rhode Island, Maryland, Virginia, Delaware and Maine. The other states from which we recruited participants recommended mask-wearing in public. We accessed the publicly available data from the Armed Conflict Location and Event Data (ACLED) project (https://acleddata.com/special-projects/us-crisis-monitor/), which has been recording the location, participation and motivation of protests in the US since the week of George Floyd's murder in May 2020. Behavioural analysis We analysed tendencies to choose alternative decks after positive feedback (win-switch) and select the same deck after negative feedback (lose-stay). Win-switch rates were calculated as the number of trials where the participant switched after positive feedback divided by the number of trials where they received positive feedback. Lose-stay rates were calculated as the number of trials where a participant persisted after negative feedback divided by the total negative feedback trials.
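The win-switch and lose-stay definitions above map directly onto code; a minimal sketch (function and variable names are ours):

```python
def switch_stay_rates(choices, outcomes):
    """Win-switch and lose-stay rates from trial-wise choices and outcomes.

    choices: sequence of selected options (e.g. deck 1, 2 or 3);
    outcomes: 1 = win, 0 = loss.
    Win-switch = P(switch on trial t+1 | win  on trial t);
    lose-stay  = P(stay   on trial t+1 | loss on trial t).
    """
    win_switch = wins = lose_stay = losses = 0
    for t in range(len(choices) - 1):
        switched = choices[t + 1] != choices[t]
        if outcomes[t] == 1:
            wins += 1
            win_switch += switched
        else:
            losses += 1
            lose_stay += not switched
    return (win_switch / wins if wins else 0.0,
            lose_stay / losses if losses else 0.0)
```

The final trial contributes no transition, so its outcome is excluded from both denominators.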
Lockdown proactivity metric We also defined a proactivity metric (or score) to measure how inadequately or adequately a state reacted to COVID-19 (ref. 51). This score was calculated based on when a state's stay-at-home (SAH) order was introduced (I) and when it expired (E): I: number of days from baseline to when the order was introduced (that is, introduced date − baseline date); E: number of days before the order was lifted since it was introduced (that is, expiration date − introduced date), where the baseline date is defined as the date at which the first SAH order was implemented (Supplementary Fig. 3). California was the first to enforce the order on 19 March 2020 (that is, baseline date = 1). We calculated proactivity as follows: $$\rho = \begin{cases} \dfrac{1}{1 + I/E} & \text{if } E \ge I > 0 \\ 0 & \text{if } E = 0 \text{ and } I = 0 \end{cases}$$ This function gives states with early lockdown (I→1) and sustained lockdown (E→∞) a higher proactivity score (ρ→1), while giving states that did not issue state-wide SAH orders (E = 0; I = 0) a score of 0. Therefore, our proactivity (ρ) metric—either 0 (never lockdown, less proactive) or ranging from 0.5 (started lockdown, less proactive) to 1 (started lockdown, more proactive)—offers a reasonable approach for measuring proactive state interventions in response to the pandemic. In our analyses, for lockdown we separated less proactive and more proactive states at the median. For reopening, states that mandated mask-wearing were designated more proactive and states that recommended mask-wearing were designated less proactive. We set the proactivity of the prelockdown data to be the proactivity of the lockdown response that would be enacted once the pandemic was declared. Using the reopening proactivity designation for the prelockdown data instead had no impact on our findings (Supplementary Table 6).
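The proactivity score can be implemented directly from the piecewise definition above; a sketch (argument names are ours):

```python
def proactivity(I, E):
    """Lockdown proactivity score rho (a sketch of the paper's definition).

    I: days from baseline (the first state's SAH order) to this state's order.
    E: days the order remained in effect.
    rho = 1 / (1 + I/E) when E >= I > 0, so 0.5 <= rho < 1, approaching 1
    for early, sustained lockdowns; states that never locked down
    (I == E == 0) score 0.
    """
    if I == 0 and E == 0:
        return 0.0
    return 1.0 / (1.0 + I / E)
```

For example, a state that introduced its order ten days after baseline and kept it for ten days scores 0.5, while one that acted one day after baseline and sustained the order for a year scores close to 1.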
To measure the effect of mask policy on paranoia, we adopted a DiD approach. The DiD model we used to assess the causal effect of mask policy on paranoia in states that either recommended or required masks to be worn in public is represented by the following equation: $$P_{it} = \alpha + \beta t_i + \lambda M_i + \delta \left( t_i \times M_i \right) + \epsilon_{it}$$ where Pit is the paranoia level for individual i and time t, α is the baseline average of paranoia, β is the time trend of paranoia in the control group, λ is the preintervention difference in paranoia between the control and treatment groups and δ is the mask effect. The control and treatment groups, in our case, represent states that recommend and require mask-wearing, respectively. The interaction term between time and mask policy represents our DiD estimate. Multiple regression analysis We conducted a multiple linear regression analysis, attempting to predict paranoia based on three continuous state variables—number of COVID-19 cases, CTL index and mask-wearing belief—and one categorical state variable, that is, mask policy. We fitted a 15-predictor paranoia model and performed backward stepwise regression to find the model that best explains our data. Below we illustrate the full 15-predictor model and the resulting reduced 11-predictor model.
For the full model: $$\begin{array}{l}\hat y = \beta_0 + \beta_1 X_{\mathrm{CASES}} + \beta_2 X_{\mathrm{POLICY}} + \beta_3 X_{\mathrm{CTL}} + \beta_4 X_{\mathrm{MASK}} \\ \quad + \beta_5 X_{\mathrm{CASES \times POLICY}} + \beta_6 X_{\mathrm{CASES \times CTL}} + \beta_7 X_{\mathrm{POLICY \times CTL}} + \beta_8 X_{\mathrm{CASES \times MASK}} \\ \quad + \beta_9 X_{\mathrm{CTL \times MASK}} + \beta_{10} X_{\mathrm{POLICY \times MASK}} + \beta_{11} X_{\mathrm{CASES \times POLICY \times CTL}} \\ \quad + \beta_{12} X_{\mathrm{CASES \times POLICY \times MASK}} + \beta_{13} X_{\mathrm{CASES \times CTL \times MASK}} + \beta_{14} X_{\mathrm{POLICY \times CTL \times MASK}} \\ \quad + \beta_{15} X_{\mathrm{CASES \times POLICY \times CTL \times MASK}}\end{array}$$ For the reduced model: $$\begin{array}{l}\hat y = \beta_0 + \beta_1 X_{\mathrm{CASES}} + \beta_2 X_{\mathrm{POLICY}} + \beta_3 X_{\mathrm{CTL}} + \beta_4 X_{\mathrm{MASK}} \\ \quad + \beta_5 X_{\mathrm{CASES \times POLICY}} + \beta_6 X_{\mathrm{CASES \times CTL}} + \beta_7 X_{\mathrm{POLICY \times CTL}} + \beta_8 X_{\mathrm{POLICY \times MASK}} \\ \quad + \beta_9 X_{\mathrm{CTL \times MASK}} + \beta_{10} X_{\mathrm{CASES \times POLICY \times CTL}} + \beta_{11} X_{\mathrm{POLICY \times CTL \times MASK}}\end{array}$$ The Hierarchical Gaussian Filter (HGF) toolbox v.5.3.1 is freely available for download in the Translational Algorithms for Psychiatry-Advancing Science package at https://translationalneuromodeling.github.io/tapas (refs. 10,11). We installed and ran the package in MATLAB and Statistics Toolbox Release 2016a (MathWorks). We estimated perceptual parameters individually for the first and second halves of the task (that is, blocks 1 and 2). Each participant's choices (that is, deck 1, 2 or 3) and outcomes (win or loss) were entered as separate column vectors with rows corresponding to trials. Wins were encoded as 1, losses as 0 and choices as 1, 2 or 3. We selected the autoregressive three-level HGF multi-armed bandit configuration for our perceptual model and paired it with the softmax-mu03 decision model. Statistical analyses and effect size calculations were performed with an alpha of 0.05 and two-tailed P values in RStudio v.1.3.959. Bayes factors (BF10) were reported for non-significant t-tests and analyses of variance (ANOVAs) to provide additional evidence of no effect (or no differences)52. We defined the null hypothesis (H0) as there being no difference in the means of behaviour/demographics between groups (H0: µ1 − µ2 = 0) and the alternative hypothesis (H1) as a difference (H1: µ1 − µ2 ≠ 0). Interpretations of the BF10 were adopted from Lee and Wagenmakers53. Independent samples t-tests were conducted to compare questionnaire item responses between high and low paranoia groups. Distributions of demographic and mental health characteristics across paranoia groups were evaluated by chi-squared exact tests (two groups) or Monte Carlo tests (more than two groups). Correlations were computed with Pearson's r.
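The DiD estimate δ described earlier — the coefficient on the time × mask-policy interaction — is numerically identical to a difference of four group means when the model is saturated with binary time and treatment indicators. A sketch of that equivalent computation (the data layout is our assumption):

```python
def did_estimate(data):
    """Difference-in-differences estimate of the mask-policy effect.

    data: iterable of (paranoia, treated, post) tuples, where treated = 1
    for mask-mandate states and post = 1 for the reopening period.
    delta = (T_post - T_pre) - (C_post - C_pre), which equals the
    interaction coefficient in P = a + b*t + l*M + d*(t*M) + e.
    """
    sums = {}
    for p, treated, post in data:
        s, n = sums.get((treated, post), (0.0, 0))
        sums[(treated, post)] = (s + p, n + 1)
    mean = lambda key: sums[key][0] / sums[key][1]
    return (mean((1, 1)) - mean((1, 0))) - (mean((0, 1)) - mean((0, 0)))
```

The subtraction of the control group's pre/post change is what removes the shared time trend, leaving only the policy-attributable shift.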
HGF parameter estimates and behavioural patterns (win-switch and lose-stay rates) were analysed by repeated measures and split-plot ANOVAs (that is, block designated as within-participant factor; pandemic, paranoia group and social versus non-social condition as between-participants factors). Model parameters were corrected for multiple comparisons using the Benjamini–Hochberg54 method with an FDR of 0.05 in ANOVAs across experiments. All data visualizations were produced in RStudio. Some were adapted from the raincloud plot theme55. The data that support this paper are available at https://github.com/psuthaharan/covid19paranoia. The code used to analyse the data and generate the figures is available at https://github.com/psuthaharan/covid19paranoia. van Prooijen, J. W. & Douglas, K. M. Conspiracy theories as part of history: the role of societal crisis situations. Mem. Stud. 10, 323–333 (2017). Smallman, S. Whom do you trust? Doubt and conspiracy theories in the 2009 influenza pandemic. J. Int. Glob. Stud. 6, 2 (2015). Raihani, N. J. & Bell, V. An evolutionary perspective on paranoia. Nat. Hum. Behav. 3, 114–121 (2019). Feeney, E. J., Groman, S. M., Taylor, J. R. & Corlett, P. R. Explaining delusions: reducing uncertainty through basic and computational neuroscience. Schizophr. Bull. 43, 263–272 (2017). Reed, E. J. et al. Paranoia as a deficit in non-social belief updating. eLife 9, e56345 (2020). Freeman, D. et al. The revised Green et al., Paranoid Thoughts Scale (R-GPTS): psychometric properties, severity ranges, and clinical cut-offs. Psychol. Med. 51, 244–253 (2021). Soltani, A. & Izquierdo, A. Adaptive learning under expected and unexpected uncertainty. Nat. Rev. Neurosci. 20, 635–644 (2019). Ryder, A. G., Costa, P. T. & Bagby, R. M. Evaluation of the SCID-II personality disorder traits for DSM-IV: coherence, discrimination, relations with general personality traits, and functional impairment. J. Pers. Disord. 21, 626–637 (2007). Corlett, P. R. & Fletcher, P.
C. Computational psychiatry: a Rosetta Stone linking the brain to mental illness. Lancet Psychiatry 1, 399–402 (2014). Mathys, C., Daunizeau, J., Friston, K. J. & Stephan, K. E. A Bayesian foundation for individual learning under uncertainty. Front. Hum. Neurosci. 5, 39 (2011). Mathys, C. D. et al. Uncertainty in perception and the hierarchical Gaussian filter. Front. Hum. Neurosci. 8, 825 (2014). Lefebvre, G., Nioche, A., Bourgeois-Gironde, S. & Palminteri, S. Contrasting temporal difference and opportunity cost reinforcement learning in an empirical money-emergence paradigm. Proc. Natl Acad. Sci. USA 115, E11446–E11454 (2018). Marinescu, I. E., Lawlor, P. N. & Kording, K. P. Quasi-experimental causality in neuroscience and behavioural research. Nat. Hum. Behav. 2, 891–898 (2018). Angrist, J. D. & Pischke, J.-S. Mostly Harmless Econometrics (Princeton Univ. Press, 2008). Jaeger, D. A., Joyce, T. J. & Kaestner, R. A cautionary tale of evaluating identifying assumptions: did reality TV really cause a decline in teenage childbearing? J. Bus. Econ. Stat. 38, 317–326 (2020). Goodman-Bacon, A. & Marcus, J. Using difference-in-differences to identify causal effects of COVID-19 policies. Surv. Res. Methods 14, 153–158 (2020). Cohn, N. The Pursuit of the Millennium (Oxford Univ. Press, 1961). Cameron, A. C. & Miller, D. L. A practitioner's guide to cluster-robust inference. J. Hum. Resour. 50, 317–372 (2015). Harrington, J. R. & Gelfand, M. J. Tightness-looseness across the 50 United States. Proc. Natl Acad. Sci. USA 111, 7990–7995 (2014). Katz, J., Sanger-Katz, M. & Quealy, K. Estimates from The New York Times, based on roughly 250,000 interviews conducted by Dynata from July 2 to July 14 (The New York Times and Dynata, 2020); https://github.com/nytimes/covid-19-data/tree/master/mask-use Blakemore, S. J., Sarfati, Y., Bazin, N. & Decety, J. The detection of intentional contingencies in simple animations in patients with delusions of persecution. Psychol. Med.
33, 1433–1441 (2003). Moss, A. J., Rosenzweig, C., Robinson, J. & Litman, L. Demographic stability on Mechanical Turk despite COVID-19. Trends Cogn. Sci. 24, 678–680 (2020). Litman, L., Robinson, J. & Abberbock, T. TurkPrime.com: a versatile crowdsourcing data acquisition platform for the behavioral sciences. Behav. Res. Methods 49, 433–442 (2017). Imhoff, R. & Lamberty, P. How paranoid are conspiracy believers? Toward a more fine‐grained understanding of the connect and disconnect between paranoia and belief in conspiracy theories. Eur. J. Soc. Psychol. 48, 909–926 (2018). Freeman, D. et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol. Med. https://doi.org/10.1017/S0033291720001890 (2020). Colombo, M. Two neurocomputational building blocks of social norm compliance. Biol. Philos. 29, 71–88 (2014). Corlett, P. R. et al. Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions. Brain 130, 2387–2400 (2007). Corlett, P. R., Taylor, J. R., Wang, X.-J., Fletcher, P. C. & Krystal, J. H. Toward a neurobiology of delusions. Prog. Neurobiol. 92, 345–369 (2010). Romaniuk, L. et al. Midbrain activation during Pavlovian conditioning and delusional symptoms in schizophrenia. Arch. Gen. Psychiatry 67, 1246–1254 (2010). Ostrom, E. Collective action and the evolution of social norms. J. Econ. Perspect. 14, 137–158 (2000). Worobey, M. et al. Origin of AIDS: contaminated polio vaccine theory refuted. Nature 428, 820 (2004). Gonsalves, G. & Staley, P. Panic, paranoia, and public health—the AIDS epidemic's lessons for Ebola. N. Engl. J. Med. 371, 2348–2349 (2014). Giubilini, A. & Savulescu, J. Vaccination, risks, and freedom: the seat belt analogy. Public Health Ethics 12, 237–249 (2019). Robertson, L. Road death trend in the United States: implied effects of prevention. J. Public Health Pol. 39, 193–202 (2018). Heyes, C. & Pearce, J. M.
Not-so-social learning strategies. Proc. R. Soc. B 282, 20141709 (2015). Freeman, D. et al. Concomitants of paranoia in the general population. Psychol. Med. 41, 923–936 (2011). Pot-Kolder, R., Veling, W., Counotte, J. & van der Gaag, M. Self-reported cognitive biases moderate the associations between social stress and paranoid ideation in a virtual reality experimental study. Schizophr. Bull. 44, 749–756 (2018). Henco, L. et al. Bayesian modelling captures inter-individual differences in social belief computations in the putamen and insula. Cortex 131, 221–236 (2020). Henco, L. et al. Aberrant computational mechanisms of social learning and decision-making in schizophrenia and borderline personality disorder. PLoS Comput. Biol. 16, e1008162 (2020). Heyes, C. Précis of cognitive gadgets: the cultural evolution of thinking. Behav. Brain Sci. 42, E169 (2019). DiGrazia, J. The social determinants of conspiratorial ideation. Socius 3, 237802311668979 (2017). American Community Survey (United States Census, 2017); https://www.census.gov/acs/www/data/data-tables-and-tools/data-profiles/2017/ Gelfand, M. J. et al. Differences between tight and loose cultures: a 33-nation study. Science 332, 1100–1104 (2011). Beck, A. T., Epstein, N., Brown, G. & Steer, R. A. An inventory for measuring clinical anxiety: psychometric properties. J. Consult. Clin. Psychol. 56, 893–897 (1988). Beck, A. T., Ward, C. H., Mendelson, M., Mock, J. & Erbaugh, J. An inventory for measuring depression. Arch. Gen. Psychiatry 4, 561–571 (1961). Abramowitz, J. S. et al. Assessment of obsessive-compulsive symptom dimensions: development and evaluation of the Dimensional Obsessive-Compulsive Scale. Psychol. Assess. 22, 180–198 (2010). Knotek, E. 2nd et al. Consumers and COVID-19: survey results on mask-wearing behaviors and beliefs. Economic Commentary https://doi.org/10.26509/frbc-ec-202020 (2020). Enders. A. et al. Who supports QAnon? 
A case study in political extremism https://www.joeuscinski.com/uploads/7/1/9/5/71957435/qanon_2-4-21.pdf (2021) Ettlinger, M. & Hensley, J. COVID-19 economic crisis: by state. Carsey School of Public Policy https://carsey.unh.edu/COVID-19-Economic-Impact-By-State (2021). An ongoing repository of data on coronavirus cases and deaths in the U.S. (The New York Times, 2020); https://github.com/nytimes/covid-19-data Status of lockdown and stay-at-home orders in response to the coronavirus (COVID-19) pandemic. Ballotpedia https://ballotpedia.org/Status_of_lockdown_and_stay-at-home_orders_in_response_to_the_coronavirus_(COVID-19)_pandemic,_2020 (2020). Gelman, A. & Stern, H. The difference between 'significant' and 'not significant' is not itself statistically significant. Am. Stat. 60, 328–331 (2006). Lee, M. D & Wagenmakers, E.-J. Bayesian Cognitive Modeling: A Practical Course (Cambridge Univ. Press, 2013). Hochberg, Y. & Benjamini, Y. More powerful procedures for multiple significance testing. Stat. Med. 9, 811–818 (1990). Allen, M. et al. Raincloud plots: a multi-platform tool for robust data visualization. Wellcome Open Res. 4, 63 (2021). This work was supported by the Yale University Department of Psychiatry, the Connecticut Mental Health Center and Connecticut State Department of Mental Health and Addiction Services. It was funded by an International Mental Health Research Organization/Janssen Rising Star Translational Research Award, an Interacting Minds Center (Aarhus) Pilot Project Award, National Institute of Mental Health (NIMH) grant no. R01MH12887 (P.R.C.), NIMH grant no. R21MH120799-01 (P.R.C. and S.M.G.) and an Aarhus Universitets Forskningsfond Starting Grant (C.D.M.). E.J.R. was supported by the National Institutes of Health (NIH) Medical Scientist Training Program training grant no. GM007205, National Institute of Neurological Disorders and Stroke Neurobiology of Cortical Systems grant no. 
T32 NS007224 and a Gustavus and Louise Pfeiffer Research Foundation Fellowship. S.U. received funding from an NIH T32 fellowship (no. MH065214). S.M.G. and J.R.T. were supported by a National Institute on Drug Abuse grant no. DA041480. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. L.L., J.R. and A.J.M. are employees of CloudResearch. These authors contributed equally: Praveen Suthaharan, Erin J. Reed. Department of Psychiatry, Connecticut Mental Health Center, Yale University, New Haven, CT, USA Praveen Suthaharan, Pantelis Leptourgos, Joshua G. Kenney, Jane R. Taylor, Stephanie M. Groman & Philip R. Corlett Interdepartmental Neuroscience Program, Yale School of Medicine, New Haven, CT, USA Erin J. Reed Yale MD-PhD Program, Yale School of Medicine, New Haven, CT, USA Booth School of Business, University of Chicago, Chicago, IL, USA Stefan Uddenberg Interacting Minds Center, Aarhus University, Aarhus, Denmark Christoph D. Mathys Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich, Zurich, Switzerland Swiss Federal Institute of Technology Zurich, Zurich, Switzerland Scuola Internazionale Superiore di Studi Avanzati, Trieste, Italy CloudResearch, New York, NY, USA Leib Litman, Jonathan Robinson & Aaron J. Moss Department of Psychology, Yale University, New Haven, CT, USA Jane R. Taylor & Philip R. Corlett Wu Tsai Institute, Yale University, New Haven, CT, USA Philip R. Corlett P.S., P.R.C., E.J.R., J.R.T. and S.M.G. conceived the study. E.J.R., J.G.K., C.D.M., P.L. and S.U. contributed the task materials and analysis code. A.J.M., J.R. and L.L. contributed the data. P.S. acquired the data. P.S. and P.R.C. analysed the data. All authors wrote and edited the manuscript. Correspondence to Philip R. Corlett.
Peer review information Nature Human Behaviour thanks Ryan Balzan, Michael Moutoussis and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Supplementary Figs. 1–9 and Supplementary Tables 1–6. Suthaharan, P., Reed, E.J., Leptourgos, P. et al. Paranoia and belief updating during the COVID-19 crisis. Nat Hum Behav 5, 1190–1202 (2021). https://doi.org/10.1038/s41562-021-01176-8 Meta-analysis of human prediction error for incentives, perception, cognition, and action Jessica A. Mollick Hedy Kober Neuropsychopharmacology (2022) Verschwörungstheorien und paranoider Wahn: Lassen sich Aspekte kognitionspsychologischer Modelle zu Entstehung und Aufrechterhaltung von paranoiden Wahnüberzeugungen auf Verschwörungstheorien übertragen? Stephanie Mehl Forensische Psychiatrie, Psychologie, Kriminologie (2022) "COVID-19 spreads round the planet, and so do paranoid thoughts". A qualitative investigation into personal experiences of psychosis during the COVID-19 pandemic Minna Lyons Ellen Bootes Luna Centifanti Current Psychology (2021) COVID-19 and human behaviour Nature Human Behaviour (Nat Hum Behav) ISSN 2397-3374 (online)
Diversity and communities of culturable endophytic fungi from the root holoparasite Balanophora polyandra Griff. and their antibacterial and antioxidant activities Chunyin Wu1, Wei Wang2, Xiaoqing Wang2, Hamza Shahid1, Yuting Yang1, Yangwen Wang1, Shengkun Wang3 & Tijiang Shan1 Balanophora polyandra Griff. is a holoparasitic medicinal plant that produces compounds with antibacterial and antioxidant activities. Plant endophytic fungi are an abundant reservoir of bioactive metabolites for medicinal exploitation, and an increasing number of novel bioactive compounds are being isolated from endophytic fungi. The present study investigated the diversity of culturable endophytic fungi from the roots of the holoparasite B. polyandra to explore active strains and metabolites. In addition, the antibacterial and antioxidant activities of 22 strains cultured from B. polyandra were also evaluated. The endophytic fungi were identified according to their colony morphology and ITS-5.8S rDNA sequencing. TLC-MTT-Bioautography assays and DPPH radical scavenging assays were employed to assess the antibacterial and antioxidant activities of ethyl acetate extracts of the endophytic fungi. One hundred and twenty-five endophytic strains were isolated from the roots of B. polyandra, including 70 from female samples and 55 from male samples. Of them, twenty-two distinct isolates representing 15 genera and 22 species, based on their ITS-rDNA sequences, were identified from female and male samples of B. polyandra. Calonectria was the most prevalent genus, with a colonization frequency (CF) of 18.3%, followed by the genera Clonostachys and Botryosphaeria, with CF values of 13.4% and 10.0%, respectively. Interestingly, the fungal extracts exhibited broad-spectrum antibacterial activities against gram-positive and gram-negative bacteria, as well as potential antioxidant activities with IC50 values ranging from 0.45 to 6.90 mg/mL.
Among them, endophytes Bpf-10 (Diaporthe sp.) and Bpf-11 (Botryosphaeria sp.) showed the strongest biological activities and the most abundant secondary metabolites. This study is the first to report the diversity of endophytic fungi from the roots of B. polyandra and the antibacterial and antioxidant activities of the crude extracts. The results revealed that B. polyandra contains diverse culturable endophytic fungi that potentially produce natural antibacterial and antioxidant compounds with great value to the agriculture and pharmaceutical industries. Plant endophytic fungi are microorganisms that grow inside plant tissues without causing negative symptoms to the host and produce biologically active substances (Sheik and Chandrashekar 2018). They have been regarded as a novel source of natural bioactive compounds with tremendous applications in medicine, agriculture, and the food industry. In the past few years, many valuable bioactive compounds with anticancer, insecticidal, antimicrobial, and cytotoxic activities have been successfully isolated from endophytic fungi (Ascêncio et al. 2014; Jia et al. 2016; Atiphasaworn et al. 2017; Bedi et al. 2017). Endophytic fungi produce bioactive compounds similar to those of their host plant (Venieraki et al. 2017). Thus, endophytic fungi can be used to obtain active metabolites while reducing the large-scale harvesting of plants, thereby protecting the environment. Balanophora polyandra Griff., belonging to the family Balanophoraceae, is a natural medicinal parasitic plant that lives in the root system of many Fagaceae plants and is mainly distributed in southern China, Japan, Nepal, India, and Burma (Wang et al. 2006; Tao et al. 2009). The whole plant has been used as a folk medicine due to its antipyretic, antidotal, and hemostatic properties. Moreover, B. polyandra has also been used as a traditional Chinese medicine, especially for treating gonorrhea, syphilis, wounds, and bleeding of the alimentary tract (Wang et al. 2013).
Previous studies have shown the presence of potentially active metabolites in B. polyandra with antioxidant, immunosuppressive, hypoglycemic, antitumor, and antibacterial activities (Wang et al. 2013; Ouyang et al. 2017). However, no reports have discussed endophytic fungi and their biological activities. Therefore, the diversity of endophytic fungi of B. polyandra with biological effects must be elucidated. The threat of drug-resistant pathogens has become a major concern worldwide (Nafis et al. 2018). Ralstonia solanacearum is a soil-borne bacterium that causes bacterial wilt in eucalyptus plantations worldwide, and efficient control measures are still limited (Mao et al. 2021). Similarly, the oxidative stress caused by free radicals is known to be involved in pathophysiological events, especially in some human diseases, such as diabetes mellitus, aging, atherosclerosis, Alzheimer's disease, and Parkinson's disease (Gunasekaran et al. 2017). The main characteristic of antioxidant compounds is capturing and stabilizing free radicals. The development of effective and safe drugs to combat human, animal, and plant diseases is now urgently needed (Patil et al. 2016). One approach to solving these problems is to search for new antibacterial and antioxidant metabolites. Endophytic fungi broaden the scope of new antibiotics, chemotherapeutic agents, and agrochemicals with high efficiency and low toxicity (Hateet 2016; Das et al. 2017; Zhong et al. 2017). A promising future for developing a new drug exists by exploiting and utilizing endophytic fungi resources from medicinal plants. Hence, this study aims to screen potential endophytic fungi with significant antibacterial and antioxidant activities from the roots of B. polyandra. Molecular and morphological approaches were used for the isolation, characterization, and analysis of the diversity of endophytic fungi. 
Furthermore, the antibacterial and antioxidant activities of endophytic fungal extracts were assessed using TLC-MTT-Bioautography assays and DPPH radical scavenging assays. Finally, the chemical compositions of the crude extracts that had significant biological activities were analyzed using HPLC. Isolation, identification, and phylogenetic analysis of endophytic fungi isolated from B. polyandra A total of 125 endophytic fungi were isolated from 60 samples of B. polyandra (30 female samples and 30 male samples). Seventy fungal isolates were obtained from female samples, and fifty-five were obtained from male samples. According to their colony morphology (shape of conidia, mycelial growth rate, colony color and texture, etc.), 22 distinct fungal isolates (Bpf-1~Bpf-22) were selected for further molecular and microscopic identification. The colonies of Bpf-1~Bpf-22 grown on PDA medium are shown in Fig. 1. The obtained ITS sequences were compared with those in GenBank to identify the fungi. They were identified as members of fifteen genera, including Clonostachys (Bpf-1 and Bpf-15), Gliocladiopsis (Bpf-2), Calonectria (Bpf-3, Bpf-5, Bpf-9, Bpf-15 and Bpf-18), Gliocephalotrichum (Bpf-4), Pestalotiopsis (Bpf-6), Botryosphaeria (Bpf-7 and Bpf-11), Trichoderma (Bpf-8), Diaporthe (Bpf-10), Myrothecium (Bpf-12), Cylindrocladium (Bpf-13), Fusarium (Bpf-14 and Bpf-16), Colletotrichum (Bpf-17 and Bpf-20), Mucor (Bpf-19), Lasiodiplodia (Bpf-20), and Neofusicoccum (Bpf-22) (Table 1). Calonectria was the most prevalent genus, with a colonization frequency (CF) of 18.3%, followed by Clonostachys and Botryosphaeria with CF values of 13.4% and 10.0%, respectively. The genetic identities of 22 isolates were greater than 98%. The obtained ITS sequences of 22 isolates were submitted to GenBank to obtain their accession numbers (MH378888 and MH378889; MH397479-MH397498) and the closest related species were identified from a BLASTn analysis. 
The identified fungi with their accession numbers, the closest related species, the percentage of identity, and colonization frequency are presented in Table 1. Front views of the colonies of endophytic fungi isolated from B. polyandra. a~v Bpf-1 to Bpf-22, respectively Table 1 Colonization frequency (CF) of the endophytic fungi isolated from B. polyandra and their closest relatives based on the data from the BLASTn analysis Assessment of endophytic fungal diversity The detailed results of the analysis of endophytic fungal diversity in male and female samples associated with B. polyandra are listed in Table 2. Larger index values indicate greater richness of endophytic fungi in the samples. According to the Margalef abundance index (D′), female samples (2.354) had a higher value than male samples (2.246), indicating greater richness of endophytic fungi in female samples than in male samples. Moreover, Simpson's (D) and Shannon's (H') diversity indices in female samples were relatively higher than those in male samples (D = 0.881 and 0.869; H' = 2.247 and 2.126, respectively), suggesting that these endophytic fungi preferentially colonize female samples. However, the Pielou species evenness index (J) of female samples (0.923) was similar to that of male samples (0.937), indicating a uniform species composition across both hosts. The values of D′, D, H', and J in the whole tissue were 2.900, 0.898, 2.469, and 0.912, respectively. These values indicate the high diversity of endophytic fungi in B. polyandra. Table 2 Diversity of endophytic fungi isolated from B. polyandra Antibacterial activity of the endophytic fungal extracts The antibacterial activities of the fungal extracts against five test bacteria (Escherichia coli, Pseudomonas lachrymans, Xanthomonas vesicatoria, Ralstonia solanacearum and Bacillus subtilis) are summarized in Table 3. All 22 endophytic fungal extracts showed antibacterial activity against all the test bacteria to different degrees. 
For instance, the extracts of endophytes Bpf-1, Bpf-3, Bpf-4, Bpf-8, Bpf-9, Bpf-10, Bpf-11, Bpf-12, Bpf-14, and Bpf-22 were more active as antibacterial agents than the extracts of other endophytes. Among them, Bpf-1, Bpf-11, and Bpf-14 showed the highest inhibitory activity against all the test bacteria, with inhibition zone diameters exceeding 10 mm. On the other hand, some fungal extracts (i.e., Bpf-7, Bpf-15, and Bpf-19) also exhibited inhibitory activity against all five test bacteria, and the inhibition zone diameters mainly ranged from 5 mm to 10 mm. The extracts of three endophytes (Bpf-5, Bpf-6, and Bpf-13) showed comparatively weak or no inhibition against all test bacteria. Among the test bacteria, E. coli was less susceptible to the endophytic fungal extracts, except for five samples (Bpf-1, Bpf-10, Bpf-11, Bpf-12 and Bpf-14), while R. solanacearum and X. vesicatoria were generally more susceptible. Table 3 Antibacterial activities of crude extracts of endophytic fungi isolated from B. polyandra The magnitude of the Rf value allowed us to determine the polarity of the compounds separated through our elution system. Based on the antibacterial activity results, the Rf values of extracts exhibiting antibacterial activity all ranged from 0.00 to 0.65. Thus, the secondary metabolites of these endophytic fungi were mainly small to moderately polar substances. In this investigation, the gram-negative bacteria were generally more sensitive to the 22 endophytic fungal extracts than the gram-positive bacteria. Antioxidant activity of the endophytic fungal extracts The antioxidant activities of 22 endophytic fungal extracts from B. polyandra were evaluated using a DPPH radical scavenging assay. As shown in Fig. 2, all the extracts showed antioxidant activity to varying extents, with IC50 values ranging from 0.45 to 6.9 mg/mL. The extracts of endophytes Bpf-10 (Diaporthe sp.) and Bpf-11 (Botryosphaeria sp.) 
showed a stronger ability to inhibit DPPH radicals, with IC50 values of 0.46 and 0.45 mg/mL, respectively. However, their activities were lower than that of the standard BHT (0.02 mg/mL). The extracts of endophytes Bpf-12, Bpf-13, and Bpf-14 did not show any significant differences, and their IC50 values were 0.75, 0.73, and 0.71 mg/mL, respectively. The extract of the endophyte Bpf-21 showed the lowest antioxidant activity, with an IC50 value of 6.90 mg/mL. Antioxidant activities of crude extracts of endophytic fungi isolated from B. polyandra HPLC analysis of the extracts of endophytes with significant activities Based on their strong antibacterial and antioxidant activities, the extracts of endophytes Bpf-10 (Diaporthe sp.) and Bpf-11 (Botryosphaeria sp.) were selected for further HPLC analysis (Fig. 3). These secondary metabolites mainly had retention times between 2 min and 6 min and between 10 min and 19 min, indicating that endophytes Bpf-10 and Bpf-11 contained major compounds with different polarities. The presence of several fractions suggested that more than one compound produced by the two endophytes was responsible for the bioactivity. However, further experiments are required to confirm whether the compounds detected in the extracts mediate the antibacterial and antioxidant activities. The obtained HPLC chromatogram provides a theoretical reference for the further isolation, purification and identification of active components from endophytes Bpf-10 and Bpf-11. HPLC-UV chromatograms of crude extracts of endophytes Bpf-10 and Bpf-11 at 210 nm Endophytic fungi, which are potential producers of medicinal substances, have attracted increasing attention in recent years (Wei et al. 2020). Medicinal plants provide a unique eco-environment for their endophytic fungi. A plethora of previous studies reported that endophytic fungi from special eco-environments might produce special bioactive natural products (Jia et al. 2016). 
Based on these considerations, we investigated the endophytic fungi isolated from B. polyandra to evaluate their diversity and biological properties. Endophytic fungi have been detected in different medicinal plants worldwide. Their phylogenetic diversity has been reported in various forms to describe the interaction of fungi with the host plant (Tejesvi et al. 2011; Murdiyah 2017). In the present study, 125 endophytic fungi were isolated from B. polyandra, and 22 isolates were identified successfully based on morphological features and a sequence analysis of the ITS regions. These isolates showed 98.48–100.00% similarity to their assigned taxa. These fungal isolates belonged to one phylum, three classes, six orders and fifteen genera, showing the phylogenetic diversity of the endophytic fungi. Calonectria was the dominant genus and has also been reported in other plants, such as Acacia persea, Sarcococca hookeriana, and Buxus sempervirens (Dann et al. 2012; Wight et al. 2016). Members of a variety of common endophytic genera observed in the present study, such as Trichoderma, Fusarium and Colletotrichum, were typically isolated from different hosts (Hidayat et al. 2016; Ntuba-Jua et al. 2017). In the whole tissue, the large values of H' (2.469) and D' (2.900) revealed that B. polyandra hosted rich and diverse endophytic fungi. Some research groups have reported the antibacterial activity of endophytic fungi from medicinal plants against various pathogenic microbes (Sathish et al. 2014; Liu et al. 2016; Wahab et al. 2017). Another objective of this study was to assess the antibacterial and antioxidant activities of endophytic fungal extracts. All extracts (Bpf-1 to Bpf-22) exhibited broad-spectrum antibacterial activities against E. coli, P. lachrymans, X. vesicatoria, R. solanacearum, and B. subtilis. 
Among them, Bpf-10 and Bpf-11 showed the strongest antibacterial activities, suggesting that they could serve as a potential source of antibacterial agents. Our results are supported by previous studies demonstrating that Diaporthe terebinthifolii LGMF907 had potent antibacterial activity against E. coli, Saccharomyces cerevisiae, methicillin-sensitive Staphylococcus aureus, and methicillin-resistant S. aureus (de Medeiros et al. 2018). In addition, Botryosphaeria MGN23-3 also displayed strong antibacterial activity against Bacillus cereus and B. subtilis (da Silva et al. 2022). The remaining fungal extracts showed different activities toward the different tested bacteria, which might result from morphological differences in the cell walls of these pathogens (Gunasekaran et al. 2017). R. solanacearum and X. vesicatoria were sensitive to most fungal extracts. This result corroborates the findings of Ouyang et al. (2017) that extracts of B. polyandra showed significant antibacterial activity against R. solanacearum. In the present study, all endophytic fungal extracts showed different levels of antioxidant activity. The extracts of endophytes Bpf-10 (Diaporthe sp.) and Bpf-11 (Botryosphaeria sp.) showed the best antioxidant activities based on the reduction of DPPH. In support of this, Diaporthe sp. MFLUCC16-0682 and Botryosphaeria MGN23-3 were reported to have notable antioxidant activities (Tanapichatsakul et al. 2017; da Silva et al. 2022). In addition, altenusin and djalonensone isolated from Botryosphaeria sp. had antioxidant activities (Xiao et al. 2014). Many previous studies have proven that some polyphenol, flavonoid and tannin compounds seem to play an important role in reducing peroxidation (Mazandarani et al. 2014; Kada et al. 2017). However, polyphenol, flavonoid, and tannin compounds were found to be absent from fungal extracts (Sharma et al. 2022). 
Hence, further confirmation is needed to determine whether a positive correlation exists between the extracts and these compounds. In the present study, all activities of the fungal extracts were lower than that of the standard butylated hydroxytoluene (BHT). However, weaker activities do not imply an absence of bioactive constituents; the extracts may contain other active chemical components with distinct physiological actions. In this study, two endophytic fungi, Bpf-10 (Diaporthe sp.) and Bpf-11 (Botryosphaeria sp.), both of which showed strong antibacterial and antioxidant activities, were obtained for the first time from the medicinal plant B. polyandra. Furthermore, there are few studies on the biological activities of the secondary metabolites of Gliocladiopsis (Bpf-2), Calonectria (Bpf-3, Bpf-5, Bpf-9, Bpf-15 and Bpf-18), Gliocephalotrichum (Bpf-4), and Cylindrocladium (Bpf-13). In this study, the antibacterial and antioxidant activities of ethyl acetate extracts of the culturable endophytic fungi from the root of the holoparasitic plant B. polyandra were reported. A total of 125 endophytic fungal isolates were isolated from 60 samples of B. polyandra (30 female samples and 30 male samples). Of them, twenty-two distinct isolates (Bpf-1~Bpf-22) were selected for identification and characterization using molecular and morphological analyses. Fifteen genera were identified, among which Calonectria, Clonostachys, and Botryosphaeria were dominant. The crude extracts of Diaporthe neotheicola Bpf-10 and Botryosphaeria dothidea Bpf-11 showed potent inhibitory activities against DPPH radicals and pathogenic bacteria (E. coli, P. lachrymans, X. vesicatoria, R. solanacearum, and B. subtilis). Moreover, the HPLC chromatogram showed the presence of secondary metabolites with different polarities in the Bpf-10 and Bpf-11 extracts. These findings indicated that endophytic fungi from the holoparasitic plant B. 
polyandra have great potential to produce antioxidant and antibacterial compounds. Subsequent research will focus on the isolation and identification of the antibacterial and antioxidant compounds from these fungi, as well as on their applications as biocontrol agents. Whole plants of B. polyandra were collected from Chebaling National Nature Reserve, Guangdong Province of China, in September 2015. All the samples were placed in a plastic bag and immediately transported to the laboratory for further study. The plant specimens were authenticated by Dr. Mingxuan Zheng at South China Agricultural University. The plant materials were stored in sealed plastic bags at 4 °C until further use. Isolation of the endophytic fungi The endophytic fungi were isolated using the method described by Shan et al. with some modifications (Shan et al. 2019). Plant materials were thoroughly washed for 20 min with running tap water and surface-sterilized with 75% ethanol for 30 s, followed by three rinses with sterilized distilled water. They were then treated with 0.2% mercuric chloride for 20 min and then washed thrice with sterile distilled water. The surface-sterilized tissues were dried on sterile filter papers under aseptic conditions. Finally, each tissue sample was cut into 5 × 5 mm pieces and placed on a potato dextrose agar (PDA) plate containing streptomycin sulfate (500 μg/L). The culture plates were incubated at 28 °C for 1–3 weeks and observed daily. The emerging colonies were subcultured several times on fresh PDA plates to obtain pure isolates. Each pure isolate was then transferred onto a PDA slant and stored at 4 °C until further use. The colonization frequency (CF %) of each pure isolate was calculated using the following formula: $$\mathrm{CF}=\left(\mathrm{NCOL}/\mathrm{Nt}\right)\times 100\%$$ where "NCOL" represents the number of segments colonized by the emerging fungus, and "Nt" represents the total number of sample segments (Shan et al. 2020). 
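The colonization-frequency formula above, together with the diversity indices reported in the Results (Margalef D′, Simpson D, Shannon H′, Pielou J, defined below under "Assessment of endophytic fungi diversity"), can be sketched in Python. Function names here are illustrative, not from the paper:

```python
import math

def colonization_frequency(n_colonized, n_total):
    """CF (%) = (NCOL / Nt) x 100, where NCOL is the number of segments
    colonized by the emerging fungus and Nt the total number of segments."""
    return n_colonized / n_total * 100.0

def diversity_indices(counts):
    """Margalef D' = (S-1)/ln N, Simpson D = 1 - sum(Pi^2),
    Shannon H' = -sum(Pi ln Pi), Pielou J = H'/ln S,
    computed from a list of per-taxon colonization counts."""
    counts = [c for c in counts if c > 0]
    S, N = len(counts), sum(counts)          # species number, individuals
    p = [c / N for c in counts]              # colonization frequencies Pi
    d_marg = (S - 1) / math.log(N)
    d_simp = 1.0 - sum(pi * pi for pi in p)
    h_shan = -sum(pi * math.log(pi) for pi in p)
    j_piel = h_shan / math.log(S)
    return d_marg, d_simp, h_shan, j_piel

# e.g., Calonectria colonized 11 of 60 segments -> CF of about 18.3%
print(round(colonization_frequency(11, 60), 1))
```

For three equally abundant taxa, `diversity_indices([10, 10, 10])` yields J = 1, the maximal evenness; this is the sense in which the J values near 0.9 in Table 2 indicate a fairly uniform species composition.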
Morphological characterization of endophytic fungi For morphological identification, all endophytic fungi that varied in shape, growing area, exudate drop color, growth rates, surface texture, reverse color, radial lines, and concentric rings were observed and identified according to standard taxonomic manuals and textbooks (Praptiwi et al. 2016). Molecular characterization of endophytic fungi DNA was extracted from fresh mycelium using a fungal genomic DNA extraction kit (Shanghai Biological Engineering Co., China) according to the manufacturer's instructions. The ITS region of rDNA was amplified by polymerase chain reaction (PCR) and subsequently sequenced with the universal primers ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) and ITS5 (5′-GGAAGTAAAAGTCGTAACAAGG-3′). The PCR system was as follows: 25 μL of PCR Master Mix, 21 μL of double-distilled H2O, 2 μL of template DNA, 1 μL of forward primer ITS5, and 1 μL of reverse primer ITS4. The PCR conditions were set as follows: predenaturation at 95 °C for 2 min; 30 cycles of denaturation at 94 °C for 40 s, annealing at 56 °C for 40 s, and extension at 72 °C for 1 min and 20 s; and a final extension at 72 °C for 10 min. The PCR products were purified and sequenced by Shanghai Biological Engineering Co., China. The obtained DNA sequences were submitted to GenBank and compared using a BLASTn analysis (Shan et al. 2019). Assessment of endophytic fungi diversity The diversity of endophytic fungi at each site was estimated according to Margalef's abundance index (D'), the Simpson index (D), the Shannon-Wiener diversity index (H'), and the Pielou species evenness index (J). The following formulas were used: D' = (S-1)/ln N, where "S" represents the number of species and "N" is the number of individuals in the sample (Cosoveanu et al. 2018). D = 1 − Σ Pi², where the ratio "Pi" is the frequency of colonization of the taxon in the sample (Zheng et al. 2013). H' = − Σ Pi ln Pi, where "H'" was used to show the diversity of the endophytic fungal species (Sadeghi et al. 
2019). J = H'/ln(S), where "J" denotes the uniformity of the endophytic fungi (Zheng et al. 2013). Preparation of crude extracts of endophytic fungi The pure isolates of endophytic fungi were cultivated on PDA plates for 4–7 days. Afterward, 3 to 4 agar plugs with mycelia were inoculated into a 50-mL conical flask containing 20 mL of potato dextrose broth (PDB) (3 flasks for each fungus). All flasks were incubated at 150 rpm on a rotary shaker at 28 °C in the dark for 5–7 days. The broth and mycelia of each fungus were then inoculated into two flasks containing 20 g of sterile rice and fermented for 60 days under aseptic conditions. The fermented product was extracted thrice with ethyl acetate under sonication. Finally, the solvent was evaporated and the extract concentrated using a rotary evaporator to obtain the crude extracts. The ethyl acetate crude extracts were stored at 4 °C until use. Detection of the antibacterial activity of the endophytic fungal extracts The antibacterial activity of the endophytic fungal extracts was detected using a thin-layer chromatography (TLC)-bioautography assay (Shan et al. 2012). One gram-positive (B. subtilis) and four gram-negative (E. coli, P. lachrymans, X. vesicatoria, and R. solanacearum) bacterial strains were used as test bacteria. All the bacterial cultures were reactivated in Luria-Bertani (LB) broth medium for 12 h at 28 °C, followed by streaking on LB agar plates. The bacterial suspension (10⁸ CFU/mL) was mixed with molten semisolid LB medium (with 0.5% agar) before use. Five microliters of the ethyl acetate extracts were spotted onto a silica TLC plate, and then 5 μL of a streptomycin sulfate (CK+) solution (0.2 mg/mL) was spotted onto the lower right of the TLC plate. The prepared TLC plates were developed in a glass tank with a petroleum ether:acetone (4:1, v/v) solvent system. The prepared bacterial suspension was poured uniformly over the TLC plate and incubated at 25 °C for 12 h under humid conditions. 
Last, the TLC plate was sprayed with a 5 mg/mL solution of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) and then incubated for 2 h. The antibacterial activity of endophytic extracts was assayed by measuring the white inhibition zone diameter on the purple background. Assessment of the antioxidant activity of the endophytic fungal extracts The DPPH radical scavenging assay was employed to examine the antioxidant activity of the endophytic fungal extracts using the microtiter plate (96-well) spectrophotometric method (Shan et al. 2019) with some modifications. Briefly, 20 mg of DPPH were dissolved in 100 mL of ethanol to obtain a 0.2 mg/mL DPPH solution (0.02% w/v) for this assay. The stock solutions of test samples were prepared separately by dissolving 0.1 g of ethyl acetate extract in 1 mL of ethanol. A series of working solutions of different concentrations (20, 10, 5, 2.5, 1.25, 0.625, 0.3125, and 0.15625 mg/mL) was prepared by twofold serial dilution with ethanol. Butylated hydroxytoluene (BHT) was used as a positive control and prepared using the same method. The final concentrations of BHT were 0.4, 0.2, 0.1, 0.05, 0.025, 0.0125, 0.00625, and 0.003125 mg/mL. After completing the preparation of all solutions, 80 μL of DPPH were added to each well containing 20 μL of sample solutions or BHT solutions at different concentrations. These reaction mixtures were homogenized well, incubated in the dark for 10 min, and then incubated for 30 min in a water bath at 37 °C. The absorbance was measured at 517 nm using a spectrophotometer, and tests were performed in triplicate. Ethanol was used as the blank. DPPH inhibition was calculated using the following equation: DPPH inhibition (%) = [(A517 nm of control − A517 nm of sample)/A517 nm of control] × 100, where A is the absorbance obtained for a sample or the control. 
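The scavenging formula above, and one plausible reading of the linear IC50 estimation described in the next section, can be sketched as follows. The least-squares line through the inhibition-versus-concentration points is our interpretation of the paper's "predictive equations"; function names are illustrative:

```python
def dpph_inhibition(a_control, a_sample):
    """DPPH inhibition (%) = (A517(control) - A517(sample)) / A517(control) * 100."""
    return (a_control - a_sample) / a_control * 100.0

def ic50_linear(concs, inhibitions):
    """Fit inhibition = m*conc + b by least squares, then solve for the
    concentration giving 50% inhibition (the IC50 estimate)."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(inhibitions) / n
    m = sum((x - mx) * (y - my) for x, y in zip(concs, inhibitions)) / \
        sum((x - mx) ** 2 for x in concs)
    b = my - m * mx
    return (50.0 - b) / m

# absorbance drops from 0.80 (ethanol blank control) to 0.40 -> 50% scavenging
print(dpph_inhibition(0.80, 0.40))  # -> 50.0
```

A fit restricted to the concentrations bracketing 50% inhibition would match the dilution-series design more closely; the global fit above is the simplest variant.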
The median inhibitory concentration (IC50) of each extract was calculated from the linear relationship between the DPPH inhibition percentages and the corresponding concentrations, using the resulting predictive equations. HPLC analysis of the extracts of superior endophytes High-performance liquid chromatography (HPLC) analysis of the fungal extracts was performed using the gradient elution method (Shan et al. 2020) with an XB-C18 reverse-phase column (250 mm × 10 mm, 10 μm, Welch, Shanghai, China). Commercial grade water with 0.01% trifluoroacetic acid was used as mobile phase A, and acetonitrile with 0.01% trifluoroacetic acid was used as mobile phase B. The HPLC analysis was performed using a gradient of water to acetonitrile (0–2 min, 20% B; 2–15 min, 20–50% B; 15–16 min, 50–100% B; 16–30 min, 100% B) at a flow rate of 1 mL/min, a column temperature of 40 °C, and UV detection at 210 nm. The generated nucleotide sequences of the endophytic fungal isolates (isolation numbers Bpf-1 ~ Bpf-22) can be accessed in GenBank under accession numbers MH378888, MH378889, MH397479 to MH397488 (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Ascêncio PGM, Ascêncio SD, Aguiar AA, Fiorini A, Pimenta RS (2014) Chemical assessment and antimicrobial and antioxidant activities of endophytic fungi extracts isolated from Costus spiralis (Jacq.) Roscoe (Costaceae). Evid Based Complement Alternat Med 1:1–10 Atiphasaworn P, Monggoot S, Gentekaki E, Brooks S, Pripdeevech P (2017) Antibacterial and antioxidant constituents of extracts of endophytic fungi isolated from Ocimum basilicum var. thyrsiflora leaves. Curr Microbiol 74:1185–1193 Bedi A, Adholeya A, Deshmukh SK (2017) Novel anticancer compounds from endophytic fungi. Curr Biotechnol 6:1–17 Cosoveanu A, Sabina SR, Cabrera R (2018) Fungi as endophytes in Artemisia thuscula: juxtaposed elements of diversity and phylogeny. 
J Fungi 4:2–21 da Silva AA, Polonio JC, Bulla AM, Polli AD, Castro JC, Soares LC et al (2022) Antimicrobial and antioxidant activities of secondary metabolites from endophytic fungus Botryosphaeria fabicerciana (MGN23-3) associated to Morus nigra L. Nat Prod Res. https://doi.org/10.1080/14786419.2021.1947272 Dann EK, Cooke AW, Forsberg LI, Pegg KG, Tan YP, Shivas RG (2012) Pathogenicity studies in avocado with three nectriaceous fungi, Calonectria ilicicola, Gliocladiopsis sp. and Ilyonectria liriodendri. Plant Pathol 61:896–902 Das M, Prakash HS, Nalini MS (2017) Antioxidative and antibacterial potentials of fungal endophytes from Justicia Wynaadensis Heyne: an ethnomedicinal rain forest species of western Ghats. Asian J Pharm Clin Res 10:203–209 de Medeiros AG, Savi DC, Mitra P, Shaaban KA, Jha AK, Thorson JS et al (2018) Bioprospecting of Diaporthe terebinthifolii LGMF907 for antimicrobial compounds. Folia Microbiol 63:499–505 Gunasekaran S, Sathiavelu M, Arunachalam S (2017) In vitro antioxidant and antibacterial activity of endophytic fungi isolated from Mussaenda luteola. J Appl Pharma Sci 7:234–238 Hateet RR (2016) Antibacterial and antioxidant activites of secondary metabolites of endophytic fungus Stemphylium radicinum (Meier, Drechs and Eddy). Ori Pharm Exp Med 57:558–563 Hidayat I, Radiastuti N, Rahayu G, Achmadi SS, Okane I (2016) Endophytic fungal diversity from Cinchona calisaya based on phylogenetic analysis of the ITS ribosomal DNA sequence data. Curr Res Environ Appl Mycol 6:132–142 Jia M, Chen L, Xin HL, Zheng CJ, Rahman K, Han T, Qin LP (2016) A friendly relationship between endophytic fungi and medicinal plants: a systematic review. Front Microbiol 7:1–14 Kada S, Bouriche H, Senator A, Demirtas I, Ozen T, Toptanci BC, Kizil G, Kizil M (2017) Protective activity of Hertia cheirifolia extracts against DNA damage, lipid peroxidation and protein oxidation. 
Pharm Biol 55:330–337 Liu YH, Hu XP, Li W, Cao XY, Yang HR, Lin ST, Xu CB, Liu SX, Li CF (2016) Antimicrobial and antitumor activity and diversity of endophytic fungi from traditional Chinese medicinal plant Cephalotaxus hainanensis Li. Genet Mol Res 15:1–11 Mao Z, Zhang W, Wu C, Feng H, Peng Y, Shahid H, Cui Z, Ding P, Shan T (2021) Diversity and antibacterial activity of fungal endophytes from Eucalyptus exserta. BMC Microbiol 21:155 Mazandarani M, Ghafourian M, Khormali A (2014) Ethnopharmacology, antibacterial and antioxidant activity of dittrichia graveolens (L.) W. Greuter. which has been used as remedies antirheumatic, anti-inflammation and antiinfection against leishmaniasis in the traditional medicine of Gorgan, Iran. Crescent J Med Biol Sci 1:125–129 Murdiyah S (2017) Endophytic fungi of various medicinal plants collected from evergreen forest Baluran national park and its potential as laboratory manual for mycology course. J Pendidikan Biologi Indonesia 3:64–71 Nafis A, Kasrati A, Azmani A, Ouhdouch Y, Hassani L (2018) Endophytic actinobacteria of medicinal plant Aloe vera: Isolation, antimicrobial, antioxidant, cytotoxicity assays and taxonomic study. Asian Pac J Trop Biomed 8:513–518 Ntuba-Jua GM, Mih AM, Bechem EET (2017) Diversity and distribution of endophytic fungi in different Prunus africana (Hook. f.) Kalkman provenances in Cameroon. Biosci Plant Biol 4:7–23 Ouyang J, Wu C, Wang X, Wang S, Wu H, Wang J, Shan T (2017) The crude extracts of Balaophora polyandra and their antimicrobial activities. Chin J Trop Agric 37:61–65 Patil RH, Patil MP, Maheshwari VL (2016) Bioactive secondary metabolites from endophytic fungi: a review of biotechnological production and their potential applications. Stud Nat Prod Chem 1:189–205 Praptiwi, Palupi KD, Fathoni A, Wulansari D, Ilyas M, Agusta A (2016) Evaluation of antibacterial and antioxidant activity of extracts of endophytic fungi isolated from Indonesian Zingiberaceous plants. 
Nusantara Biosci 8:306–311 Sadeghi F, Samsampour D, Seyahooei MA, Bagheri A, Soltani J (2019) Diversity and spatiotemporal distribution of fungal endophytes associated with Citrus reticulata cv, Siyahoo. Curr Microbiol 76:279–289 Sathish L, Pavithra N, Ananda K (2014) Evaluation of antimicrobial activity of secondary metabolites and enzyme production from endophytic fungi isolated from Eucalyptus citriodora. J Pharm Res 8:269–276 Shan T, Duan Z, Wu C, Li Z, Wang S, Mao Z (2020) Secondary metabolites of symbiotic fungi isolated from Blaptica dubia and their biological activities. J Environ Entomol 42:170–179 Shan T, Qin K, Xie Y, Zhang W, Mao Z, Wang J (2019) Secondary metabolites of endophytic fungi isolated from Casuarina equisetifolia and their bioactivities. J S China Agr Univ 40:67–74 Shan T, Sun W, Lou J, Gao S, Mou Y, Zhou L (2012) Antibacterial activity of the endophytic fungi from medicinal herb, Macleaya cordata. Afr J Biotechnol 11:4354–4359 Sharma A, Sagar A, Rana J, Rani R (2022) Green synthesis of silver nanoparticles and its antibacterial activity using fungus Talaromyces purpureogenus isolated from Taxus baccata Linn. Micro Nano Syst Lett 10:2–12 Sheik S, Chandrashekar KR (2018) Fungal endophytes of an endemic plant Humboldtia brunonis Wall. of western Ghats (India) and their antimicrobial and DPPH radical scavenging potentiality. Ori Pharm Exp Med. 18:115–125 Tanapichatsakul C, Monggoot S, Gentekaki E, Pripdeevech P (2017) Antibacterial and antioxidant metabolites of Diaporthe spp. isolated from flowers of Melodorum fruticosum. Curr Microbiol 75:476–483 Tao RY, Ye F, He YB, Tian JY, Liu GG, Ji TF, Su YL (2009) Improvement of high-fat-diet-induced metabolic syndrome by a compound from Balanophora polyandra Griff in mice. Eur J Pharmacol 616:314–340 Tejesvi MV, Kajula M, Mattila S, Pirttilä AM (2011) Bioactivity and genetic diversity of endophytic fungi in Rhododendron tomentosum Harmaja. 
Fungal Divers 47:97–107 Venieraki A, Dimou M, Katinakis P (2017) Endophytic fungi residing in medicinal plants have the ability to produce the same or similar pharmacologically active secondary metabolites as their hosts. Hell Plant Prot J 10:51–66 Wahab MAA, Bahkali AHA, Gorban AME, Hodhod MS (2017) Natural products of Nothophoma multilocularis sp. nov. an endophyte of the medicinal plant Rhazya stricta. Mycosphere. 8:1185–1200 Wang KJ, Zhang YJ, Yang CR (2006) New Phenolic Constituents from Balanophora polyandra with radical-Scavenging activity. Chem Biodivers 3:1317–1324 Wang YG, Yang JB, Wang AG (2013) Hydrolyzable tannins from Balanophora polyandra. Acta Pharm Sin B 3:46–50 Wei J, Chen F, Liu YM, Abudoukerimu A, Zheng Q, Zhang XB, Sun YP, Yimiti D (2020) Comparative metabolomics revealed the potential antitumor characteristics of four endophytic fungi of brassica rapa L. Am Chem Soc 5:5939–5950 Wight MM, Salazar CS, Demers JE, Clement DL, Rane KK, Crouch JA (2016) Sarcococca blight: use of whole genome sequencing for fungal plant disease diagnosis. Plant Dis 100:1093–1100 Xiao J, Zhang Q, Gao YQ, Tang JJ, Zhang AL, Gao JM (2014) Secondary metabolites from the endophytic Botryosphaeria dothidea of Melia azedarach and their antifungal, antibacterial, antioxidant, and cytotoxic activities. J Agric Food Chem 62:3584–3590 Zheng JH, Kang JC, Lei BX, Li QR, Wen TC, Meng ZB (2013) Diversity of endophytic fungi associated with Ginkgo biloba. Mycosystema. 32:671–681 Zhong LY, Zou L, Tang XH, Li WF, Li X, Zhao G, Zhao JL (2017) Community of endophytic fungi from the medicinal and edible plant Fagopyrum tataricum and their antimicrobial activity. Trop J Pharm Res 16:387–396 We thank Dr. Mingxuan Zheng from the College of Forestry and Landscape Architecture, South China Agricultural University, for the taxonomic identification of the plant materials. 
This research was co-financed by the National Natural Science Foundation of China (32071766), the Natural Science Foundation of Guangdong Province (2019A1515011554; 2022A1515010944), and the Key Research and Development Projects of Guangdong Province (2020B020214001). Chunyin Wu and Wei Wang contributed equally to this work and are co-first authors. College of Forestry and Landscape Architecture, South China Agricultural University, Guangzhou, No. 483, Wushan Road, Tianhe District, Guangzhou, 510642, Guangdong, China Chunyin Wu, Hamza Shahid, Yuting Yang, Yangwen Wang & Tijiang Shan College of Plant Protection, South China Agricultural University, Guangzhou, 510642, Guangdong, China Wei Wang & Xiaoqing Wang Research Institute of Tropical Forestry, Chinese Academy of Forestry, No. 682, Guangshanyi Road, Tianhe District, Guangzhou, 510520, Guangdong, China Shengkun Wang Chunyin Wu Hamza Shahid Yuting Yang Yangwen Wang Tijiang Shan W.W, S.W. and T.S. collected the plant material. C.W., W.W, X.W., and T.S. performed the isolation and identification of the endophytic fungi. C.W., W.W, X.W., and T.S. evaluated the antimicrobial activity. C.W., H.S., Y.Y, and Y.W. performed the antioxidant activity. C.W., H.S., and Y.Y contributed in the diversity analysis of endophytic fungi. H.S., Y.Y, and Y.W. performed the HPLC analysis of the crude extract. C.W., W.W, H.S., Y.Y, and T.S. prepared the figures and tables. S.W. and T.S. designed the research. All the authors contributed in writing, editing, and revising the manuscript. The author(s) read and approved the final manuscript. Correspondence to Shengkun Wang or Tijiang Shan. The healthy plants of B. polyandra were collected in September 2015 from Chebaling National Nature Reserve, Guangdong Province, China. The taxonomic identification of the plant materials was performed by Dr. Mingxuan Zheng of College of Forestry and Landscape Architecture (SCAU), where the voucher specimen (SCAULPMH-1509015) of the plant was deposited. 
All experiments were approved by the College of Forestry and Landscape Architecture (SCAU) and were strictly evaluated in accordance with the IUCN Policy Statement on Research Involving Species at Risk of Extinction and the Convention on the Trade in Endangered Species of Wild Fauna and Flora. Wu, C., Wang, W., Wang, X. et al. Diversity and communities of culturable endophytic fungi from the root holoparasite Balanophora polyandra Griff. and their antibacterial and antioxidant activities. Ann Microbiol 72, 19 (2022). https://doi.org/10.1186/s13213-022-01676-6
Evolution of DNA Methylation Across Ecdysozoa Jan Engelhardt (ORCID: 0000-0003-4934-2135), Oliver Scheer, Peter F. Stadler & Sonja J. Prohaska. Journal of Molecular Evolution volume 90, pages 56–72 (2022) DNA methylation is a crucial, abundant mechanism of gene regulation in vertebrates. It is less prevalent in many other metazoan organisms and completely absent in some key model species, such as Drosophila melanogaster and Caenorhabditis elegans. We report here a comprehensive study of the presence and absence of DNA methyltransferases (DNMTs) in 138 Ecdysozoa, covering Arthropoda, Nematoda, Priapulida, Onychophora, and Tardigrada. Three of these phyla have not been investigated for the presence of DNA methylation before. We observe that the loss of individual DNMTs independently occurred multiple times across ecdysozoan phyla. We computationally predict the presence of DNA methylation based on CpG rates in coding sequences using an implementation of Gaussian Mixture Modeling, MethMod. Integrating both analyses, we predict two previously unknown losses of DNA methylation in Ecdysozoa, one within Chelicerata (Mesostigmata) and one in Tardigrada. In the early-branching ecdysozoan Priapulus caudatus, we predict the presence of a full set of DNMTs and the presence of DNA methylation. We thus reveal a diverse and repeatedly independent evolution of DNA methylation in different ecdysozoan phyla spanning a phylogenetic range of more than 700 million years. DNA methylation is prominent in vertebrates, where it is considered a fundamental part of epigenetic programming (Lyko 2018). In humans, about 70-80% of CpGs are methylated. Several non-vertebrate model organisms, such as Drosophila melanogaster, Caenorhabditis elegans and Saccharomyces cerevisiae (Zemach et al. 2010; Raddatz et al. 2013), lack DNA methylation. 
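The CpG-depletion signal that MethMod exploits can be illustrated with the standard observed/expected CpG ratio of a coding sequence. This is only a sketch of the input statistic, not of MethMod itself, which fits a Gaussian mixture to such per-gene values; the function name is ours:

```python
def cpg_oe(seq):
    """Observed/expected CpG ratio: (#CpG * L) / (#C * #G).
    Heritably methylated genomes deplete CpGs over evolutionary time
    (ratio well below 1); unmethylated genomes do not."""
    seq = seq.upper()
    n_c, n_g = seq.count("C"), seq.count("G")
    n_cpg = seq.count("CG")
    if n_c == 0 or n_g == 0:
        return float("nan")  # ratio undefined without both C and G
    return (n_cpg * len(seq)) / (n_c * n_g)

# a CpG-rich 8-mer vs. a CpG-free arrangement of the same base composition
print(cpg_oe("CGCGCGCG"), cpg_oe("CCCCGGGG"))
```

In a methylated genome, the per-gene distribution of such ratios is typically bimodal, which is what motivates modeling it as a two-component Gaussian mixture.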
It was discovered early on, however, that some insects must have a DNA methylation mechanism (Devajyothi and Brahmachari 1992). Since then, several studies have investigated the heterogeneous distribution of DNA methylation in insects (Field et al. 2004; Bewick et al. 2017; Provataris et al. 2018) and other arthropods (de Mendoza et al. 2019b; Gatzmann et al. 2018). These studies showed that most insect orders have kept some amount of DNA methylation. The most prominent counterexample is Diptera, which includes the genus Drosophila. In nematodes, DNA methylation has only been identified in a few species. The highest levels are found in Romanomermis culicivorax, with low amounts in Trichinella spiralis, Trichuris muris and Plectus sambesii (Gao et al. 2012; Rošić et al. 2018), suggesting an early loss during nematode evolution, prior to the separation of the nematode clades III, IV, and V. In most non-bilaterian metazoans, DNA methylation is present, with the exception of placozoans (de Mendoza et al. 2019a; Xu et al. 2019). DNA methylation is a crucial mechanism in vertebrate gene regulation that plays a major role in cell fate decisions, but its role in invertebrate gene regulation is much less clear. Its function appears to differ significantly between invertebrate groups. In recent years, several experimental methods for detecting genomic DNA methylation have been developed. Nevertheless, they are still more expensive than sequencing the unmodified genome alone. This can be problematic if one wants to widen the phylogenetic range of DNA methylation studies and include a large number of species. Another problem is that some of the lesser-studied taxa are difficult to collect and culture, which makes them less available for extensive experimental work. Bioinformatic studies such as the present one can help design such experimental studies.
Relying on available public data, we can make detailed predictions about the presence or absence of DNA methylation and the respective enzymes. Using these computational results, one can decide more efficiently which taxa are most valuable to study to gain new insight into the evolution of DNA methylation in invertebrates. In animals, DNA methylation predominantly occurs at CG sites (Goll and Bestor 2005; Lyko 2018). Two different sub-classes of enzymes are responsible for establishing DNA methylation. DNA methyltransferase 1 (DNMT1) reestablishes methylation on both DNA strands after a cell division. It preferentially targets hemi-methylated sites. DNA methyltransferase 3 (DNMT3) can perform de novo methylation of unmethylated CpGs in the DNA. In vertebrates, DNMT3 is mainly active during embryonic development. However, the view of a clear separation of tasks has been challenged (Jeltsch and Jurkowska 2014; Lyko 2018). Not only does DNMT3 contribute to the maintenance of DNA methylation; DNMT1 has notable de novo activity as well. In addition, DNMT1 might have other functions outside of DNA methylation (Yarychkivska et al. 2018; Schulz et al. 2018), but these have not been studied extensively. Such functions are difficult to investigate, mainly because DNMT1 or DNMT3 knock-outs in human embryonic stem cells or mouse embryos have catastrophic consequences, e.g., cell death or embryonic lethality (Liao et al. 2015). DNMT2 was long believed to be a DNA methyltransferase as well, until it was discovered that it recognizes tRNAs as a substrate. It methylates cytosine C38 of tRNA(Asp) in humans and is therefore actually an RNA methyltransferase (Goll et al. 2006). DNA methyltransferases are believed to have emerged in bacterial systems from "ancient RNA-modifying enzymes" (Iyer et al. 2011). Subsequently, six distinct clades of DNA methyltransferases have been acquired by eukaryotic organisms through independent lateral transfer (Iyer et al. 2011).
The DNMT clades thus do not have a common ancestor within the eukaryotes. DNMT1 and DNMT2 can be detected in most major eukaryotic groups, including animals, fungi and plants. Fungi lack DNMT3 but retained DNMT4 and DNMT5, similar to some, but not all, Chlorophyta (green algae). Embryophyta (land plants) lack DNMT4 and DNMT5 but harbor chromomethylase (Cmt), an additional DNA methyltransferase related to DNMT1 (Huff and Zilberman 2014). In Metazoa, only DNMT1, DNMT2 and DNMT3 can be found. Although DNA methylation clearly is an ancestral process, it is not very well conserved among Protostomia. All DNA methyltransferases (DNMTs) have a catalytic domain at their C-terminus. It transfers a methyl group from the substrate S-AdoMet to the C5 atom of an unmethylated cytosine (Lyko 2018). However, the different families of DNMTs can be distinguished by their regulatory domains and conserved motifs in the catalytic domain (Jurkowski and Jeltsch 2011). With five regulatory domains, DNMT1 has the most; see Fig. 1 for an overview. The DMAP-binding domain binds DMAP1, a transcriptional co-repressor. HDAC2, a histone deacetylase, also establishes contact with the N-terminal region of DNMT1 (Rountree et al. 2000). The RFTS domain (or RFD) targets the replication foci and directs DMAP1 and HDAC2 to the sites of DNA synthesis during S phase (Rountree et al. 2000). The CXXC domain is a zinc-finger domain that can be found in several chromatin-associated proteins and binds to unmethylated CpG dinucleotides (Bestor 1992). The two BAH (bromo-adjacent homology) domains have been proposed to act as modules for protein-protein interaction (Song et al. 2011; Yarychkivska et al. 2018). DNMT3 has only two regulatory domains, a PWWP domain, named after the conserved Pro-Trp-Trp-Pro motif, and an ADD domain. Both mediate binding to chromatin.
For the PWWP domain of (murine and human) DNMT3A, recognition of histone modifications H3K36me3 and recently also H3K36me2 has been reported (Dhayalan et al. 2010; Weinberg et al. 2019). The ADD domain is an atypical PHD finger domain, shared between ATRX, DNMT3, and DNMT3L, and has been shown to interact with histone H3 tails that are unmethylated at lysine 4 (Zhang et al. 2010; Ooi et al. 2007). DNMT2 has no regulatory domains (Lyko 2018).

Fig. 1 Conserved domains of animal DNA methyltransferases. Scaling and numbers refer to the human homologs.

Methylated DNA is subject to spontaneous deamination of 5-methylcytosine, which leads to the formation of thymine and, consequently, to T·G mismatches. Over time, this results in C to T transition mutations predominantly in the context of CpG sites and CpG depletion in frequently methylated regions of the DNA. This changes the number of CpGs observed relative to the number expected from the C/G content of the genome. The observed/expected CpG distribution has been used in several studies to infer the presence of DNA methylation (Bewick et al. 2017; Provataris et al. 2018; Aliaga et al. 2019; Thomas et al. 2020). In Apis mellifera, it has been shown that its genes can be divided into two classes, depending on whether they exhibit a low or a high amount of CpG dinucleotides. This was explained by the depletion of CpG dinucleotides if DNA methylation is present. The highly methylated (low CpG) genes were associated with basic biological processes, while lowly methylated (high CpG) genes were enriched for functions associated with developmental processes (Elango et al. 2009). This "bimodal distribution" of CpG dinucleotides can be used to predict the presence of DNA methylation. In invertebrates, gene bodies and especially exons are methylated more heavily than other parts of the genome.
Higher methylation levels should lead to a stronger statistical signal and therefore make it easier to decide whether DNA methylation is present. Therefore, exons have recently been the focus of studies investigating DNA methylation in invertebrates. Several different criteria have been developed to distinguish the patterns of methylated and unmethylated DNA. Mixture distribution modeling has been used in biology since the nineteenth century (Schork et al. 1996), and in a wide array of other scientific fields; see McLachlan et al. (2019) and Ghojogh et al. (2019) for an introduction. It is a form of unsupervised learning that tries to assign data points to different subpopulations. Several recent studies have used Gaussian mixture modeling (GMM) to predict the presence of DNA methylation. In that case, one assumes that the underlying subpopulations are normal (Gaussian) distributions. If the number of subpopulations, i.e., modes or components, is known, expectation maximization (EM) can be used as an efficient way to estimate the parameters of the mixture model. It can, however, be difficult to know the number of expected components (McLachlan and Rathnayake 2014). In the case of DNA methylation, most studies use two components, reflecting the presence of methylated and unmethylated genes. The EM for a GMM with two components estimates the mean and variance of both components. The normal distributions defined by the estimated parameters can then be further investigated. Different studies used varying approaches for deciding whether the resulting distributions indicate the presence of DNA methylation. Bewick et al. (2017) use GMMs with two components. Subsequently, they compare the 95% confidence intervals (CI) of the means. If they overlap, a unimodal distribution is assumed, otherwise a bimodal one. In case of a bimodal distribution, the presence of DNA methylation is assumed. Provataris et al. (2018) use the same GMM modeling.
They define three different modes: "bimodal depleted", if the difference between the two means is \(>0.25\), the distribution with the lower O/E CpG ratio has a mean \(<0.7\), and the smaller component contains a proportion of the data \(>0.1\); "unimodal, indicative of DNA methylation", if the data do not fall into the first category but the portion of the data falling in the distribution with the lower O/E CpG ratio is \(\ge 0.36\) (this cutoff represents the corresponding value in Bombyx mori). All other cases are classified as "unimodal, not indicative of DNA methylation". Aliaga et al. (2019) use a method based on kernel density estimations. They define four clusters based on the mode number (n), mean of the modes, skewness (sk) and standard deviation (sd). Three of the clusters are defined, among other parameters, as having one mode: "Ultra-low gene body methylation", "Low gene body methylation" and "Gene body methylation". Clusters with two modes (or one mode with skewness \(<-0.04\)) are defined as "Mosaic DNA methylation type". The predictions of the different methods are largely consistent, although they may differ in individual cases and do not always match the observed presence or absence of DNMTs; see section "Discussion" below.

Fig. 2 Overview of the metazoan phylogeny with a focus on Ecdysozoa. The number of species per group used in this study is given in brackets. Lophotrochozoa and Deuterostomia are shown for orientation only.

In this paper, we present a detailed investigation of the presence and absence of DNA methyltransferases (DNMTs) across five ecdysozoan phyla, see Fig. 2. Most of the 138 species analyzed here are from the phyla Arthropoda and Nematoda. However, we also include less commonly studied groups such as Tardigrada, Onychophora and Priapulida. We identify at which points of ecdysozoan evolution DNMTs were lost and investigate whether there are common patterns between the phyla.
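The decision rules of Provataris et al. (2018), as summarized above, can be sketched as a small classifier. The function name and argument layout are ours, and reading "the smaller component" as the one holding less of the data is our assumption:

```python
def classify_provataris(mean_lo, mean_hi, prop_lo):
    """Classify a two-component GMM fit of O/E CpG ratios following the
    decision rules of Provataris et al. (2018) as summarized in the text.

    mean_lo -- mean of the component with the lower O/E CpG ratio
    mean_hi -- mean of the other component
    prop_lo -- proportion of data assigned to the low-ratio component
    """
    # "smaller component" read as the one holding less of the data (our assumption)
    prop_small = min(prop_lo, 1.0 - prop_lo)
    if (mean_hi - mean_lo) > 0.25 and mean_lo < 0.7 and prop_small > 0.1:
        return "bimodal depleted"
    # the 0.36 cutoff corresponds to the value observed in Bombyx mori
    if prop_lo >= 0.36:
        return "unimodal, indicative of DNA methylation"
    return "unimodal, not indicative of DNA methylation"
```

Applied to a fit with well-separated means (e.g., 0.4 and 0.9) this yields "bimodal depleted"; a fit with close means but a large low-ratio component falls into the second category.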
In addition, we present an easy-to-use statistical approach for predicting the presence of genomic DNA methylation based on coding sequence data and apply it to our species of interest. The results of the predictions are compared with available experimental data.

Identification of DNA Methyltransferases

Proteome-Based Search

The predicted proteins of the species analyzed were downloaded from different sources, see supplementary Table 1. For 82 and 42 species, data were taken from NCBI (Sayers et al. 2019) and Wormbase (Harris et al. 2020), respectively. Data for seven species each were retrieved from ENSEMBL (Yates et al. 2020) and from Laumer et al. (2019). The protein domain models for DNA_methylase (PF00145), ADD_DNMT3 (PF17980), CH (PF00307), PWWP (PF00855), BAH (PF01426), DMAP_binding (PF06464), DNMT1-RFD (PF12047) and zf-CXXC (PF02008) were downloaded from the "Pfam protein families database" (El-Gebali et al. 2019). Initially, only the DNA_methylase model was used to identify DNA methyltransferase (DNMT) candidates in the predicted protein sets, using hmmsearch from the HMMER software (http://hmmer.org/, version 3.2.1). Proteins with a predicted DNA_methylase domain and a full-sequence e-value \(<0.001\) were further considered as candidates. For these, all of the aforementioned protein domains were annotated. Finally, each DNMT candidate was classified into one of three classes using custom Perl scripts. A DNMT1 candidate was required not to have a PWWP or ADD_DNMT3 domain; with DNMT1_RFD, zf-CXXC and BAH domains it was considered a full DNMT1 candidate, with only one of them a partial DNMT1 candidate. A DNMT3 candidate was required not to have a DNMT1_RFD, zf-CXXC or BAH domain; with both a PWWP and an ADD_DNMT3 domain, it was considered a full DNMT3 candidate, with only one of them a partial DNMT3 candidate. A DNMT2 candidate was required to have only a DNA_methylase domain and none of the other domains mentioned above.
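The domain-based classification rules above can be sketched as follows. This is a minimal reading of the published rules, not the authors' Perl scripts; domain names follow the Pfam models listed in the text, and treating a candidate with any non-empty subset of the class-defining domains as "partial" is our interpretation:

```python
# Class-defining regulatory domains, per the rules in the text.
DNMT1_DOMAINS = {"DNMT1-RFD", "zf-CXXC", "BAH"}
DNMT3_DOMAINS = {"PWWP", "ADD_DNMT3"}

def classify_dnmt(domains):
    """Classify a candidate protein (which already carries a significant
    DNA_methylase hit) by the set of its annotated Pfam domains."""
    d = set(domains) - {"DNA_methylase"}
    if not d & DNMT3_DOMAINS:          # DNMT1 must lack PWWP/ADD_DNMT3
        hits = d & DNMT1_DOMAINS
        if hits == DNMT1_DOMAINS:
            return "DNMT1 (full)"
        if hits:
            return "DNMT1 (partial)"
    if not d & DNMT1_DOMAINS:          # DNMT3 must lack RFD/CXXC/BAH
        hits = d & DNMT3_DOMAINS
        if hits == DNMT3_DOMAINS:
            return "DNMT3 (full)"
        if hits:
            return "DNMT3 (partial)"
    if not d:                          # only the methylase domain itself
        return "DNMT2"
    return "unclassified"
```

A candidate carrying only the DNA_methylase domain falls through both branches and is called DNMT2, mirroring the rule in the text.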
An overview of the required domains during the classification can be found in Supplementary Table 7. In a final step, the classification of the DNMT candidates was checked manually. The sequences of the DNA methylase domain of each candidate were extracted and aligned using Clustal Omega (Sievers et al. 2011) version 1.2.4. A phylogenetic network was computed with SplitStree4 (Huson and Bryant 2006) version 4.10 and inspected manually for phylogenetic congruence of gene and species phylogeny. We opted for using a phylogenetic network, as it displays conflicting phylogenetic information that may result from non-tree-like evolution, misassembly or partial misalignment. In case of contradicting results, the specific conserved sequence motifs of the methylase domain were inspected manually and the candidate reassigned to a different class or discarded if it did not contain the proper sequence motifs (Jurkowski and Jeltsch 2011).

Genome-Based Search

For selected subgroups, an additional genome-based search for DNA methyltransferase (DNMT) candidates was performed. This was the case when the previously described workflow showed an unexpected absence of DNMTs in individual species. For example, a DNMT enzyme is detected in most species of a subgroup but is missing in one or two species. The additionally analyzed groups were: Coleoptera for DNMT1 and DNMT3, Hymenoptera for DNMT3, Hemiptera for DNMT3, Chelicerata for all three DNMTs and Nematoda for all DNMTs. For each group, the DNMTs detected in the group were used as queries. The program BLAT (Kent 2002) was used to search the query proteins against the species genome whenever the respective DNMT could not be found in the proteome. The script pslScore.pl (https://genome-source.gi.ucsc.edu/gitlist/kent.git/raw/master/src/utils/pslScore/pslScore.pl) available from the UCSC genome browser was used to assign a score to each genomic hit.
The resulting bed-file was post-processed with the tools of the suite bedtools (Quinlan and Hall 2010). All hits were clustered using bedtools cluster. If there were overlapping hits, only the best-scoring one was kept. Using blast-type output files from BLAT, the genomic sequence to which the query was aligned could be extracted to get the full amino acid sequence corresponding to the hit. The full-length protein candidates were aligned using Clustal Omega. A phylogenetic network was computed with SplitStree4 and inspected manually for phylogenetic congruence of gene and species phylogeny. Candidate proteins were discarded if they did not contain the methylase domain-specific, conserved sequence motifs. Otherwise, they were kept as DNMT candidates. This method allowed us to identify six additional DNMT enzymes in five species: Asbolus verrucosus DNMT1, Soboliphyme baturini DNMT2, Acromyrmex echinatior DNMT3, Laodelphax striatellus DNMT3, Trichonephila clavipes DNMT1 and DNMT3.

Inference of DNA Methylation from CpG O/E Value Distributions

Coding sequences (CDS) for all species were downloaded from NCBI, Wormbase and ENSEMBL according to Supplementary Table 1. For the 7 species from Laumer et al. (2019), these data were not available. We used two different datasets: the actual CDS data and shuffled CDS data. For the shuffled CDS data, we performed a mononucleotide shuffling of the CDS data of each species using MethMod. The following analysis was performed for both the actual and the shuffled data. For each CDS, the observed/expected CpG ratio was calculated using the formula

$$O/E_{CpG} = \frac{CG \times l}{C \times G}$$

with C, G, and CG being the counts of the respective mono- and dinucleotides in the given CDS and l being the length of the CDS. CDS shorter than 100 nucleotides or with more than 5% Ns in the sequence were excluded.
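The ratio and the filtering rules above can be sketched in Python. The mononucleotide shuffle is our own minimal stand-in for MethMod's negative control, and the function names are ours:

```python
import random

def oe_cpg(cds):
    """Observed/expected CpG ratio of one coding sequence:
    O/E = (#CG * length) / (#C * #G).  Returns None for sequences failing
    the filters used in the text (length < 100 nt or > 5% Ns)."""
    cds = cds.upper()
    l = len(cds)
    if l < 100 or cds.count("N") / l > 0.05:
        return None
    c, g = cds.count("C"), cds.count("G")
    cg = cds.count("CG")
    if c == 0 or g == 0:
        return None
    return cg * l / (c * g)

def shuffle_cds(cds, seed=0):
    """Mononucleotide shuffle (negative control): preserves base
    composition while destroying dinucleotide structure."""
    bases = list(cds)
    random.Random(seed).shuffle(bases)
    return "".join(bases)
```

For a sequence that is a pure CpG repeat of length 120, half the dinucleotide positions are CG and the ratio evaluates to 2.0; shuffling leaves the base counts, and hence the expected CpG count, unchanged.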
We used a Gaussian Mixture Model (GMM) to identify possible subpopulations in the O/E CpG distribution. The expectation maximization algorithm from the Python library scikit-learn (Pedregosa et al. 2011), version 0.23.1, was used to estimate the parameters. The GMM was modeled with one or two components. For the GMM with one component, we calculated the Akaike information criterion (AIC). For the GMM with two components, we calculated the AIC and, in addition, the mean of each component, the distance d between the component means, and the relative amount of data points in each component, see supplementary Tables 2 and 3. For the distribution of O/E CpG values, the distribution mean, the sample standard deviation, and the skewness were calculated as well. All pairs of parameters were analyzed using two-dimensional scatterplots generated with R. We used the distance between the component means as an indicator for DNA methylation. If the distance is greater than or equal to 0.25, we assume DNA methylation is present; otherwise, it is absent.

Ecdysozoan Phylogeny

The topology of the ecdysozoan phylogeny, used for display only, is a composite of phylogenetic information compiled from several studies. The topology of Arthropoda was based on Misof et al. (2014) and combined with phylogenetic information for the taxa Coleoptera (Zhang et al. 2018), Lepidoptera (Kawahara et al. 2019), Hymenoptera (Peters et al. 2017), Hemiptera (Johnson et al. 2018), Aphididae (von Dohlen et al. 2006; Kim et al. 2011; Nováková et al. 2013), Crustacea (Schwentner et al. 2017), Copepoda (Khodami et al. 2017), Chelicerata (Howard et al. 2020; Sharma et al. 2012), Aranea (Fernández et al. 2018), and Acari (Arribas et al. 2020). The topology of the nematode phylogeny was based on Consortium et al. (2019) and combined with phylogenetic information for the genera Plectus (Rošić et al. 2018), Trichinella (Korhonen et al. 2016), Caenorhabditis (Stevens et al.
2019), and Diploscapter (Fradin et al. 2017).

Presence and Absence of DNA Methyltransferases in Ecdysozoa Species

We investigated the presence of DNMTs in 138 species using a carefully designed homology search strategy (see Materials and Methods) aiming at minimizing false negatives. Candidate sequences were then curated carefully to avoid overprediction. Most of the available genomes belong to the Nematoda (42) and Arthropoda (85). Of the arthropod species, 56 are Hexapoda (insects) and 29 belong to other subphyla. Only 6 species are from Ecdysozoa groups outside of Nematoda or Arthropoda. In addition, 5 species from groups outside of Bilateria have been included. For seven species (the arthropods Calanus finmarchicus, Eudigraphis taiwaniensis, Glomeris marginata, Anoplodactylus insignis, and all three Onychophora species), no genome data were available, only proteins predicted from transcriptomic data. The respective species are indicated in the text by stating that they have a "transcriptome only" (t.o.). Our findings are summarized in Figs. 3, 4, 5 and supplementary Fig. 1. Potential losses of DNMT1, DNMT2, and DNMT3 are marked with stars in the respective colors. Species with a transcriptome only (t.o.) are indicated by triangles. In the following paragraphs, we discuss the results of our annotation efforts in more detail. Arthropoda are an extremely species-rich and frequently studied group of invertebrates. The most prominent subphylum is Hexapoda, which contains, among others, all insects. Several (emerging) model organisms belong to the insects, e.g., the fruit fly Drosophila melanogaster (Diptera), the silk moth Bombyx mori (Lepidoptera), the red flour beetle Tribolium castaneum (Coleoptera) or the honey bee Apis mellifera (Hymenoptera). The group of Crustacea (crabs, shrimp, lobster) is currently believed to be paraphyletic (Schwentner et al. 2017).
Multicrustacea consists of most of the "crustacean" species, e.g., the white leg shrimp Penaeus vannamei (Decapoda) or the amphipod Hyalella azteca (Amphipoda). Branchiopoda, with the frequently studied water flea Daphnia pulex (Cladocera), are currently placed closer to Hexapoda. The sister group to all of the aforementioned groups are the Myriapoda (millipedes, centipedes). The earliest branching group of Arthropoda are the Chelicerata. A diverse subgroup of Chelicerata are Arachnida (e.g., spiders, scorpions, ticks), but they also contain the Atlantic horseshoe crab Limulus polyphemus (Xiphosura) and sea spiders (Pantopoda). We analyzed 85 species of the phylum Arthropoda. They belong to 28 different taxonomic orders. An overview of the results can be found in Fig. 3. The subphylum Hexapoda was the largest group analyzed, with 11 different orders. Two had a full set of DNMTs: Blattodea (3 species) and Thysanoptera (1). In four orders, only DNMT1 and DNMT2 are present: Siphonaptera (1), Trichoptera (1), Lepidoptera (8) and Phthiraptera (1). In two orders, only DNMT2 could be identified: Diptera (3) and Entomobryomorpha (2). In the remaining three orders, the occurrence of DNMT enzymes is heterogeneous, suggesting secondary losses within the order. Coleoptera (11 species) have either all DNMTs, DNMT1 and DNMT2, or only DNMT2. Hymenoptera (12) mostly have all DNMTs, but in two species of the genus Polistes, DNMT3 could not be detected. In three species of Hemiptera (14), we did not find DNMT3 either.

Fig. 3 Presence and absence of DNMT family members in Arthropoda, indicated by filled and open symbols, respectively, for DNMT1 (red), DNMT2 (green), and DNMT3 (blue). Data sources are indicated by symbol shape: filled circle—proteome, filled square—genome, filled triangle—transcriptome. The rightmost column (golden circles) shows the presence and absence of DNA methylation as predicted from the O/E CpG ratio. Absence of a golden circle indicates missing data.
The species list is given on turquoise background with alternating shades indicating the order membership. The name of the order (or suitable higher group marked with an asterisk *) is given in bold. Alternating shades of brown indicate (from top to bottom) Chelicerata, Myriapoda, Multicrustacea, Branchiopoda, and Hexapoda. Stars in the species tree denote proposed loss events inferred from absence of a DNMT in all species of a subtree comprising at least two leaves, disregarding absences in species with transcriptomic data only (Color figure online). The subphylum Crustacea is currently believed to be paraphyletic (Schwentner et al. 2017), but the following species are considered part of it. In two species of the Daphnia genus, all DNMTs have been found. They belong to the order Cladocera in the class Branchiopoda, formerly part of the subphylum Crustacea. Six additional orders of the former subphylum, belonging to the group of Multicrustacea, have been studied. In Amphipoda (1) and Decapoda (1), all three DNMTs have been found as well. In the orders Calanoida (2 species), Harpacticoida (1) and Siphonostomatoida (1), DNMT3 was not identified. In Lepeophtheirus salmonis, DNMT2 could not be identified either. In Isopoda (1), DNMT1 and DNMT3 could not be detected. In the subphylum Myriapoda, three different orders have been analyzed with one species each. All of them showed a full set of three DNMT enzymes. Seventeen species of the subphylum Chelicerata were analyzed. They belong to 8 different orders. We detected all three DNMTs in Xiphosura (1 species), Scorpiones (1), Aranea (3) and Ixodida (1). The same was the case for Trombidiformes (3), with the exception of Tetranychus urticae, for which DNMT2 could not be found. In Sarcoptiformes (3), only DNMT3 was not detectable. In Mesostigmata (4), this was the case for DNMT1 and DNMT3. In the single species of Pantopoda, Anoplodactylus insignis (t.o.), DNMT1 could not be found.
Nematoda

Nematoda are, next to Arthropoda, the best-studied group of Ecdysozoa. Developing a complete nematode systematics is still an ongoing process. Most available genome data come from the clades I, III, IV and V. Clade V contains the most well-known nematode species, Caenorhabditis elegans. Forty-two nematode species from five clades were analyzed. Of the 17 species in clade V, most had no DNMTs; in 5 species, DNMT2 could be detected. In clade III, DNMT2 was present in 8 out of 10 species, but the other DNMTs were not. Clade IV, with six species, showed no signs of DNMTs at all. In Plectus sambesii, the only representative of its clade, DNMT3 could not be found. In clade I, only DNMT2 and DNMT3 were detected in 6 of the 8 species. For one species, all three DNMTs have been identified. In one further species, only DNMT3 is present; DNMT2 could not be found. An overview of the results can be found in Fig. 4.

Fig. 4 Presence and absence of DNMT family members in Nematoda. See Fig. 3 for detailed legend. Instead of order names, clade names are given (in bold).

Priapulida, Onychophora and Tardigrada

These groups are not often in the focus of scientific studies. Tardigrada, commonly known as water bears, gained some interest because they can survive very harsh conditions, such as extreme temperature, radiation, pressure, dehydration and even outer space (Jönsson et al. 2008). Onychophora, or velvet worms, are the sister taxon to Arthropoda+Tardigrada. Some species can bear live offspring (Ostrovsky et al. 2016). Priapulida (penis worms) are believed to be among the earliest branching Ecdysozoa and are therefore of great interest for comparative studies. Unfortunately, genomic data are so far only available for one species. In the Onychophora (3, t.o.), DNMT1 and DNMT2 were detected in Peripatoides sp. and Peripatopsis overbergiensis, and DNMT2 and DNMT3 in Peripatus sp. In Tardigrada (2), only DNMT2 could be identified. In the single member of the Priapulida, all DNMTs were detected.
An overview of the results can be found in Fig. 5.

Fig. 5 Presence and absence of DNMT family members in Priapulida, Onychophora and Tardigrada and early-branching Metazoa. See Fig. 3 for detailed legend.

Early-Branching Metazoa

The systematics of early-branching Metazoa is difficult to resolve and still heavily discussed. The Cnidaria (jellyfish, sea anemones, corals) are believed to be the closest relatives of bilateral animals. Placozoa are a more distant taxon, with Trichoplax as the most prominent genus. They are tiny and delicate marine animals. For a long time, only Trichoplax adhaerens was known, along with a number of haplotypes. Only recently have two more species been described. Porifera, or sponges, are (together with Ctenophora) a contender for being the earliest branching phylum of Metazoa. In Placozoa (2), only DNMT2 was detected, while in Cnidaria (2) and Porifera (1), all DNMT enzymes were found.

DNA Methylation Inferred from CpG O/E Value Distributions

The ratio of observed and expected CpGs serves as an indicator for the presence of DNA methylation. In invertebrates, often only a subset of genes is subject to CpG methylation (Zemach et al. 2010; Lewis et al. 2020). Therefore, we assume that the observed distribution is a mixture of two Gaussian distributions. Similar to previous work, we use an expectation maximization (EM) algorithm to estimate the parameters of this Gaussian Mixture Model (GMM) (Bewick et al. 2017; Provataris et al. 2018). The results outlined below were used to revise the parameters reliably indicating bimodality and thus the presence of DNA methylation. Coding sequence (CDS) data were available for all species except Calanus finmarchicus, Glomeris marginata, Eudigraphis taiwaniensis, Anoplodactylus insignis, Peripatopsis overbergiensis, Peripatoides sp., and Peripatus sp., whose data were from Laumer et al. (2019). For five species (C. sinica, C. tropicalis, S. flava, M. sacchari, A.
verrucosus), the genome was not yet published; they have therefore been excluded from this genome-wide analysis. Hence, we were able to analyze O/E CpG ratios for the CDS of 126 species. We performed Gaussian Mixture Modeling (GMM) with the actual CDS data and a mononucleotide-shuffled version of the CDS data. The latter served as a negative control, since CpG dinucleotide depletion is not to be expected there. To evaluate whether a model with one or two components better represents the observed CpG O/E distribution, we first applied the Akaike Information Criterion (AIC), a measure of relative goodness of fit. For 94 of 128 species, a model with two components was favored over a model with one component. Contrary to our expectation, the AIC also favored a two-component model for 94 of 128 shuffled CDS datasets.

Table 1 Summary of the Gaussian Mixture Modeling for real and shuffled data

This indicates that the CDS may also fall into two classes distinguished by overall GC content, not only by relative CpG abundance. Although the AIC is generally accepted for GMMs, empirically we find that it is a poor decision criterion for our purposes. Features directly derived from the two components, such as the component means and the relative amount of data points corresponding to each component, clearly proved to better separate real and shuffled data. Table 1 shows that the mean distance between the two components is much larger in the real data than in the shuffled data. Hence, we use the difference between the means of the two Gaussians as an indicator of CpG depletion. As the distance is continuous, ranging from 0.00 to 0.63 in our data, it is necessary to determine the threshold above which the difference of the two means is interpreted as indicative of DNA methylation.
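The modeling steps described above (two-component GMM fit, AIC comparison, and the distance-of-means call) can be sketched with scikit-learn, which the text names as the implementation used. The wrapper function and its returned fields are our own, and the 0.25 threshold is the one adopted in the text:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_methylation_call(ratios, threshold=0.25):
    """Fit 1- and 2-component GMMs to per-gene O/E CpG ratios and call
    DNA methylation present when the two component means differ by at
    least `threshold`.  A sketch of the approach, not MethMod itself."""
    x = np.asarray(ratios, dtype=float).reshape(-1, 1)
    gmm1 = GaussianMixture(n_components=1, random_state=0).fit(x)
    gmm2 = GaussianMixture(n_components=2, random_state=0).fit(x)
    m_lo, m_hi = sorted(gmm2.means_.ravel())
    return {
        "aic1": gmm1.aic(x),              # relative goodness of fit, 1 component
        "aic2": gmm2.aic(x),              # relative goodness of fit, 2 components
        "distance": m_hi - m_lo,          # distance d between component means
        "methylated": (m_hi - m_lo) >= threshold,
    }
```

On a clearly bimodal distribution (e.g., components near 0.4 and 1.0), the distance exceeds the threshold and methylation is called present; on a unimodal control, the two fitted means nearly coincide and the call is negative.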
Naively, species having neither DNMT1 nor DNMT3 should be less likely to contain DNA methylation, while species in which one or both of the enzymes are present should be more likely to have kept genomic DNA methylation. Of the 126 species analyzed, both the DNMT1 and DNMT3 enzymes were found in 45, while in 46 neither was found. In 28 species, only DNMT1 was detected, and in 7 species only DNMT3, see Table 2. Figure 6 shows the means of both GMM components for all analyzed species, marked by different colors and symbols according to their set of DNMT1/3 enzymes and their taxonomic group. The threshold value \(d\ge 0.25\) is able to separate almost all of the species with no DNMT1/3 from the others. We have chosen this conservative threshold in order to avoid false-positive prediction of DNA methylation. In our data, 55 of 126 species had a distance greater than or equal to 0.25, indicative of DNA methylation. The other 71 species had a distance smaller than 0.25.

Table 2 Relationship between the combination of DNMT candidates and the predicted methylation level. Shown is the number of species for which DNA methylation is predicted to be present or absent, classified by the presence of DNMT enzyme combinations.

Fig. 6 Each point shows one species analyzed by Gaussian Mixture Modeling (GMM). The axes are the means of the two components. The taxonomic group is indicated by the style of the point. The color indicates whether both DNMT1 and DNMT3 (green) have been found in the species, only DNMT1 (red), only DNMT3 (black), or neither (blue). The diagonal lines indicate the distance between the means of both GMM components. The dotted line indicates a distance of \(d=0\), the dashed one \(d=0.2\) and the solid line \(d=0.25\) (selected threshold). 'EBM' stands for 'Early-branching metazoa', i.e., Porifera, Placozoa and Cnidaria (Color figure online).

To our knowledge, this study is the phylogenetically most diverse analysis of DNA methylation in Ecdysozoa to date.
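The comparison underlying Table 2 amounts to a cross-tabulation of each species' DNMT1/3 complement against its distance-based methylation call; a minimal sketch, where the data layout and function name are ours:

```python
from collections import Counter

def crosstab(species):
    """Tabulate the predicted methylation call against the DNMT1/DNMT3
    complement.  `species` is an iterable of (has_dnmt1, has_dnmt3,
    distance) tuples; methylation is called present when the distance
    between the GMM component means is >= 0.25."""
    labels = {(True, True): "DNMT1+3", (True, False): "DNMT1 only",
              (False, True): "DNMT3 only", (False, False): "none"}
    table = Counter()
    for dnmt1, dnmt3, dist in species:
        table[(labels[(dnmt1, dnmt3)], dist >= 0.25)] += 1
    return table
```

Each cell then counts species with a given enzyme complement and a given methylation call, mirroring the structure of Table 2.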
Several recent projects have investigated DNA methylation in species of Ecdysozoa, but they have focused on different subgroups, i.e., Hexapoda (Bewick et al. 2017; Provataris et al. 2018), Arthropoda (Lewis et al. 2020; Thomas et al. 2020) and Nematoda (Rošić et al. 2018). In our study, we have investigated a similar number of orders from the aforementioned groups. From the arthropod subphylum Chelicerata, we included a larger number of orders and were therefore able to predict an additional loss of DNA methylation. In addition, we included species from Priapulida, Onychophora, and Tardigrada. The presence of DNA methylation had not been investigated in any of these phyla before. In Tardigrada, we predict an additional, previously unknown, loss of DNA methylation. All of our data were analyzed with the same computational pipeline for detecting DNA methyltransferase enzymes and predicting DNA methylation based on CpG ratios. The results are therefore comparable over a large phylogenetic range, spanning more than 700 million years (Kumar et al. 2017) of ecdysozoan evolution. Our analysis of five out of seven ecdysozoan phyla confirms that the evolution of DNA methylation in Ecdysozoa proceeds independently in each phylum. It is therefore of great interest to perform experimental studies in each of these phyla to discover the different evolutionary adaptations DNA methylation might have undergone.

Presence and Absence of DNA Methyltransferases

Overall, our data show that both individual DNMTs and DNA methylation as a process have been lost independently in multiple lineages. Since the absence of an enzyme is difficult to prove conclusively, we rely on data from related species and invoke parsimonious patterns to identify loss events with confidence: the lack of evidence for a DNMT in an entire clade of related species makes a loss event a very plausible explanation. There are several reasons why a DNMT may escape detection.
The most prominent cause is a low-quality, fragmented genome assembly. Not finding a homolog in a species with a high-quality, complete genome assembly, in particular in model organisms such as Caenorhabditis elegans and Drosophila melanogaster, makes a negative search result more reliable. It is also possible that a protein has diverged so far that it is no longer recognizable as a homolog in the target organism by the search method used. This explanation becomes more likely in groups, such as Tardigrada or Nematoda, where the closest known homolog of the DNMT enzymes is quite distant. If they have diverged extensively, existing DNMTs are more likely to be missed. Nevertheless, as long as the catalytic domain of the enzymes still performs the same function, we should be able to find the enzyme. The predicted phyletic pattern of DNMT losses is quite different in Arthropoda and Nematoda. DNMT1 is found in most arthropod species analyzed in our study. Three independent loss events of DNMT1 are suggested by our data (Fig. 3). In Nematoda, only two events of DNMT1 loss are suggested, but they occur earlier in the evolution of the studied nematode species. Consequently, DNMT1 can still be detected in only two species. DNMT2 is most likely present in all Arthropoda. Its absence in two individual species is probably a technical artifact, since DNMT2 enzymes are present in closely related species in both cases. In Nematoda, absence of DNMT2 enzymes is far more frequent. Given the near-perfect conservation of DNMT2 in other metazoan species, this is rather unexpected. Interestingly, the candidate DNMT2 sequences are clearly more divergent than those in Arthropoda, which may hint at false-positive predictions of 13 DNMT2 enzymes. In this case, a single loss event either after the divergence of clade I or of both clade I and clade P is plausible. DNMT3 seems to be the most dispensable member of the DNMT family. According to our data, it was lost eight times in Arthropoda.
It occurs only in combination with DNMT1 and is lost prior to or simultaneously with the loss of DNMT1. In Nematoda, DNMT3 is present in all members of clade I and absent in all other clades. Interestingly, in all but one species of clade I, we detected a DNMT3 in the absence of DNMT1. Absence of DNMT3 in the presence of DNMT1 is frequently associated with low levels of CpG depletion. The weak bimodality of the CpG ratio distribution may be the consequence of a return to an unbiased, unimodal distribution caused by decaying methylation levels due to a failure to (re-)establish and maintain methylation. Under certain conditions, DNMT1 may have weak de novo activity (Dahlet et al. 2020). The molecular mechanism involves binding to unmethylated CpGs via the CXXC domain and auto-inhibition of de novo methylation (Song et al. 2011). Via its regulatory domains, DNMT1 interacts with epigenetic factors which may be involved in regulating DNMT1 de novo activity. The loss events as defined in this study are well supported by the absence of the enzymes in related species, see the colored stars in Figs. 3, 4 and supplementary Fig. 1. More precisely, a loss is only inferred if the respective DNMT could not be found in any species of the respective subtree and if that subtree contains at least two species. Considering the problems in gene detection, these rules remove cases where the poor quality of a single genome may prevent the detection of DNMTs. In Arthropoda, all members of the DNMT family can be identified in several species of each subphylum. It is therefore unlikely that the negative predictions are caused by extreme divergence of protein sequences that might have rendered them undetectable by homology search methods. The N50 value (that is, 50% of the genome is covered by contigs with a length of at least N50) serves as a good measure of assembly quality for our purposes. In Arthropoda, five species are missing DNMT1 or DNMT3 and are not covered by the loss events we propose.
The genomes of Diaphorina citri (Hemiptera), Armadillidium vulgare (Multicrustacea) and Oryctes borbonicus (Coleoptera) are the 13th, 8th and 7th worst assemblies in Arthropoda according to the N50 value, see supplementary Table 1. The N50 for D. ponderosae (Coleoptera) is around average, and for Anoplodactylus insignis (Chelicerata) only a transcriptome is available. It is therefore difficult to interpret these potential loss events. A more reliable prediction will be possible when better genomes or data from more closely related species become available. The DNMT1/DNMT3 losses in Nematoda are more difficult to evaluate since there are so few positive findings. Their absence in clades III, IV and V is supported by the findings of Rošić et al. (2018). These groups contain several high-quality genomes, including that of the model organism C. elegans. The most likely reason for missing existing proteins would therefore be that they are already too diverged. However, DNA methylation has been verified to be absent in several of these species, and no findings of DNMT enzymes have ever been reported. It therefore seems reasonable to conclude that DNA methylation and both DNA methyltransferases are absent from Nematoda of clades III, IV, and V. In clade I, DNMT3 is evidently present. However, DNMT1 seems to be absent in all but a single species examined; this pattern is not seen in any other ecdysozoan group. The exception is the earliest-branching nematode Romanomermis culicivorax, which possesses both DNMT1 and DNMT3, as well as DNMT2. The case of Plectus sambesii, the sole member of clade P, is quite interesting because DNMT1 is present while DNMT3 is absent. However, the genome of P. sambesii is the 3rd worst of all nematodes, putting the loss of DNMT3 into question.
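The N50 quality judgements above follow directly from the parenthetical definition given earlier: sort contigs by length and report the length at which the running total first covers half of the assembly. A small illustrative implementation (our sketch, not code from the study):

```python
def n50(contig_lengths):
    """N50: the largest contig length L such that contigs of length >= L
    together cover at least 50% of the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0
```

For example, an assembly with contig lengths 100, 60, 40, 30 and 20 kb totals 250 kb; the running sum reaches half of that (125 kb) at the 60 kb contig, so the N50 is 60 kb.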
We can therefore suggest two possible scenarios: either DNMT3 was lost in the stem lineage of clade P and clades III, IV and V, i.e., before the loss of DNMT1, or it was lost after the branching of clade P from clades III, IV and V, simultaneously with the loss of DNMT1. The two missing DNMT2 in Arthropoda are likely to be false negatives, since homologs of DNMT2 were detected in all other arthropods. This is likely also the case in the nematode Trichuris trichiura, since DNMT2 was found in the two other species of its genus. In clades III, IV, and V, the pattern is not very parsimonious, and our analysis reports three independent DNMT2 loss events. In addition, we did not detect DNMT2 candidates in two more species in clade III. Visual inspection of the DNMT2 alignment revealed that the DNMT2 candidates of clades III and V are highly divergent. It therefore remains questionable whether these enzymes are still functional DNA methyltransferases. Supplementary Tables 4, 5 and 6 summarize our results and provide a comparison with five recent studies. We analyzed 138 species in total, of which 37 and 34 have previously been examined by Bewick et al. (2017) and Provataris et al. (2018), respectively. The evolutionary history of DNMT1 within Hymenoptera, including paralogization, is described in detail by Bewick et al. (2017). We have focused on determining whether at least one copy of DNMT1/2/3 is present in a genome, since our main aim was to study losses of DNA methylation in Ecdysozoa. For the largest part, the results of all studies are in concordance. We were able to identify DNMTs in seven species, i.e., DNMT1 in two species (P. vannamei and N. nevadensis) and DNMT3 candidates in five species (P. vannamei, I. scapularis, B. germanica, N. lugens and H. halys), that were missed in at least one other study. We, on the other hand, miss no DNMT enzyme reported by Bewick et al. (2017) or Provataris et al. (2018). Two subsequent studies, de Mendoza et al.
(2019a) and Lewis et al. (2020), analyzed fewer Hexapoda but included other arthropods and some non-bilaterian species. We share 16 and 20 species with these studies, respectively. The results for detecting DNMTs are almost identical: we find DNMT1 in one fewer species, A. vulgare, but DNMT3 in one more, I. scapularis, compared with Lewis et al. (2020). Of the 42 Nematoda analyzed in our study, Rošić et al. (2018) investigated a subset of 14. The results for the presence/absence of DNMT enzymes in these 14 species are identical. Concordant with the existing literature, loss of DNMT3 is much more prevalent. But even in the absence of DNMT3, DNA methylation has been found to be present with DNMT1 only (Bewick et al. 2017). This shows that DNMT1 must have a de novo activity which keeps methylation present, at least at a low level, thereby extending its classification beyond the "maintenance" methyltransferase role known from vertebrate studies. Notable exceptions are the nematodes T. muris and T. spiralis, for which the presence of DNA methylation has been reported (Rošić et al. 2018) but only DNMT3 could be found. Whether their DNMT3 acts only as a "de novo" methyltransferase or also fulfills the "maintenance" role is currently unclear. Functional studies of invertebrate DNA methyltransferases could lead to a better understanding of their different roles, which appear to differ from those of vertebrate DNMTs. Traditionally, a computational prediction of the presence of DNA methylation is considered much weaker evidence than an experimental verification, e.g., by bisulfite sequencing. In principle, we agree that experimental verification leads to better insight into the actual distribution of DNA methylation in a genome. Nevertheless, aside from the additional work required to obtain genomic DNA for each species and perform the experiments, there are fundamental differences between the results of experiments and our prediction.
The results of bisulfite sequencing are specific to the tissue from which the genomic DNA was extracted, e.g., whole organisms, body parts or particular developmental stages. Strictly speaking, the results are only valid for the analyzed tissue. With our method of predicting DNA methylation from O/E CpG ratios, we essentially analyze DNA methylation in the germline: only mutations (caused by deamination) which happen in the germline will be passed on to the next generations. DNA methylation of germ cells is rarely measured experimentally in invertebrates due to the additional difficulty of collecting enough genomic material. Therefore, contrary to most experimental approaches, we actually predict germline DNA methylation. Over evolutionary time, the distribution of CpG dinucleotides is influenced by DNA methylation, which gives rise to an increased rate of C-to-T mutations and, consequently, CpG depletion. In the case of genome-wide DNA methylation, as in vertebrates, the signal is easy to detect. The situation is more challenging in invertebrates, where methylation is often concentrated in a subset of coding regions (Zemach et al. 2010; Lewis et al. 2020). A two-component Gaussian Mixture Modeling (GMM) approach is used to model the populations of methylated and unmethylated coding sequences. As we have shown, the distance \(d\) between the component means is a more reasonable measure of the level of DNA methylation in Ecdysozoa than the Akaike information criterion (AIC). The AIC favored a two-component model even after shuffling the nucleotides, and in fact more components improved the AIC even further. We therefore believe that the AIC leads to an overfitted model in this specific application, partly due to the high number of data points.
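The per-CDS quantity feeding the GMM is the observed/expected CpG ratio. A minimal sketch using the standard normalization (#CpG × length) / (#C × #G) follows; the exact normalization used by the study is not restated in this section, so this formula is an assumption on our part:

```python
def cpg_oe(seq):
    """Observed/expected CpG ratio of one coding sequence:
    (#CpG * length) / (#C * #G). Methylation-driven deamination depletes
    CpGs, pushing this ratio below 1 over evolutionary time.
    Returns None if C or G is absent (ratio undefined)."""
    seq = seq.upper()
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return None
    return seq.count("CG") * len(seq) / (c * g)
```

Computing this ratio for every CDS of a species yields the distribution to which the two-component GMM is fitted.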
Using \(d\ge 0.25\) as the threshold, we could confirm the previously reported absence of notable DNA methylation in several species, such as the fruit fly Drosophila melanogaster (\(d=0.01\)), the red flour beetle Tribolium castaneum (\(d=0.08\)) and the nematode Caenorhabditis elegans (\(d=0.20\)). Furthermore, we predicted the presence of DNA methylation in a number of species, such as the insects Bombyx mori (\(d=0.39\)), Nicrophorus vespilloides (\(d=0.37\)), Apis mellifera (\(d=0.58\)), Acyrthosiphon pisum (\(d=0.49\)) and Blattella germanica (\(d=0.30\)), the water flea Daphnia pulex (\(d=0.32\)) and the nematode Romanomermis culicivorax (\(d=0.58\)), in concordance with the literature. Unfortunately, the number of studies that used experimental methods to verify the presence of DNA methylation in Ecdysozoa is quite limited, in particular outside of Hexapoda. Our data suggest several losses of DNA methylation that cannot be supported by evidence other than the computationally calculated O/E CpG ratio. Due to the predicted presence of DNA methylation in closely related species, some "species-specific" losses seem questionable, e.g., Danaus plexippus (\(d=0.11\)) and Acromyrmex echinatior (\(d=0.24\)). Conversely, some of the positive findings are likely to be false predictions, e.g., the nematodes Caenorhabditis angaria (\(d=0.36\)), Loa loa (\(d=0.35\)) and Strongyloides ratti (\(d=0.25\)). For many other species, no experimental verification is currently available. The reason for the incorrect predictions is not easy to pinpoint. Most likely, there are other, presently unknown factors that influence the distribution of CpGs in the genome. Such effects are difficult to distinguish from the effects of DNA methylation. Nine species in which we detected both DNMT1 and DNMT3 were nevertheless predicted not to have DNA methylation: the chelicerates I. scapularis (\(d=0.20\)), T. urticae (\(d=0.06\)) and L. deliense (\(d=0.22\)), the amphipod H.
azteca (\(d=0.18\)), the hemipterans N. lugens (\(d=0.20\)) and L. striatellus (\(d=0.20\)), and the hymenopterans C. cinctus (\(d=0.22\)), O. abietinus (\(d=0.17\)) and A. echinatior (\(d=0.23\)). Tribolium castaneum is one example where DNMT1 was kept despite the loss of DNA methylation (Schulz et al. 2018), but there is currently no known example where both DNA methyltransferases (DNMTs) are kept despite the loss of DNA methylation. One would therefore assume that species with both DNA methyltransferases are likely to have DNA methylation as well. Nevertheless, DNA methylation has been experimentally verified in only one of the nine species, I. scapularis. It is likely that most of these cases are false negatives, but without additional information one cannot be sure. In species closely related to the chelicerates T. urticae and L. deliense, we detected several losses of DNMTs as well as of DNA methylation. The situation is similar for the hemipterans N. lugens and L. striatellus. It is possible that DNA methylation has been significantly reduced in these groups and can therefore no longer be detected by our prediction method. Another shortcoming of the proposed method to keep in mind is that it cannot detect a loss of DNA methylation immediately after it occurred. After the loss of DNA methylation, spontaneous deamination no longer takes place, but it takes time until random mutations in the germline increase the number of CpGs in the genome. Only after enough mutations have occurred will the proposed method be able to detect the loss of DNA methylation. One example of a recent loss of DNA methylation that is supported by experimental data is Tribolium castaneum (\(d=0.09\)). The closest relative with verified DNA methylation is Nicrophorus vespilloides (\(d=0.36\)). Their pairwise divergence time is approximately 268 million years (Kumar et al. 2017). In this case, that was enough time to increase the number of CpGs up to the expected level.
Computational predictions of methylation status have been performed with different methods by Bewick et al. (2017) and Provataris et al. (2018). Supplementary Table 5 provides a summary of their findings and the respective results from our study. Compared to Bewick et al. (2017), there are three cases where we predict no DNA methylation while they predict DNA methylation: R. prolixus (\(d=0.14\)), O. abietinus (\(d=0.17\)) and A. glabripennis (\(d=0.21\)). In one case, M. cinxia (\(d=0.27\)), we predict DNA methylation while they do not. Compared to Provataris et al. (2018), there are five cases where we predict DNA methylation while they do not: S. maritima (\(d=0.35\)), H. saltator (\(d=0.44\)), A. cephalotes (\(d=0.27\)), P. xylostella (\(d=0.28\)) and M. cinxia (\(d=0.27\)). In one case, D. plexippus, they predict DNA methylation while we do not. In total, there are nine species in which our methylation predictions disagree with at least one of the other two studies. In the cases of S. maritima and H. saltator, there is experimental evidence for DNA methylation, so our predictions are supported. For the other species, no such data are available. The prediction of the presence of DNA methylation in M. cinxia is the only case where both other studies agree in contradicting our prediction. This species would be the only exception in Lepidoptera without DNA methylation; our prediction therefore appears more likely. In A. glabripennis, we predict no DNA methylation while Bewick et al. (2017) do, but no further evidence is available. The other five species are part of all three studies, and in all cases our prediction is supported by one study [three times by Bewick et al. (2017), twice by Provataris et al. (2018)] and contradicted by the other. There is no case where our predictions are clearly worse than those of competing methods; only in the single case of A. glabripennis is there no further evidence to resolve a contradictory result.
For 32 of the species examined, experimental data on the presence (25) or absence (7) of DNA methylation are available. Using a distance threshold of \(d\ge 0.25\), we correctly predict the presence and absence of DNA methylation for 19 and 7 species, respectively, for a total of 26 out of 32. The remaining six predictions are false negatives. Note that there are no false-positive predictions given the experimental dataset at hand. The species corresponding to the false-negative predictions are three arthropods, I. scapularis (\(d=0.2\)), T. urticae (\(d=0.06\)) and A. vulgare (\(d=0.21\)), and three nematodes, T. spiralis (\(d=0.24\)), T. muris (\(d=0.08\)) and P. sambesii (\(d=0.15\)), see also supplementary Tables 4, 5 and 6. According to Lewis et al. (2020), the level of DNA methylation in A. vulgare is very low, which is likely the reason why our prediction method fails. In the case of T. urticae, only 5 out of 330 analyzed CpGs have been found to be methylated (Grbić et al. 2011). A genome-wide investigation to verify the presence of DNA methylation would be helpful to evaluate the results in this species. There is no obvious explanation why we miss DNA methylation in I. scapularis. In the three nematodes, notable levels of DNA methylation are mostly present at repeats in intergenic regions, which cannot be captured by our method. According to Rošić et al. (2018), only the nematode R. culicivorax shows a bimodal distribution of DNA methylation across genes. We also evaluated whether a lower or a higher threshold \(d\) would improve the predictions, see supplementary Table 7. A lower cutoff of \(d\ge 0.2\) would improve the predictions supported by experimental data to 28/32 species by adding three true positives and one false positive (C. elegans). A higher threshold of \(d\ge 0.3\) would introduce three more false negatives without any improvements. Given the phylogenetic range we studied, a higher threshold is therefore not recommended.
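The threshold comparison can be made concrete with a small tally over (d, experimentally methylated) pairs. The four example species and their d values below are taken from the text (A. mellifera 0.58 and I. scapularis 0.20, experimentally methylated; C. elegans 0.20 and D. melanogaster 0.01, unmethylated); the function itself is our illustrative sketch, not code from the study:

```python
def confusion(species, cutoff):
    """Count (TP, FP, TN, FN) when methylation is predicted for d >= cutoff.
    `species` is a list of (d, experimentally_methylated) pairs."""
    tp = sum(1 for d, m in species if m and d >= cutoff)
    fp = sum(1 for d, m in species if not m and d >= cutoff)
    tn = sum(1 for d, m in species if not m and d < cutoff)
    fn = sum(1 for d, m in species if m and d < cutoff)
    return tp, fp, tn, fn

species = [(0.58, True), (0.20, True), (0.20, False), (0.01, False)]
print(confusion(species, 0.25))  # at 0.25, the borderline methylated species is a false negative
print(confusion(species, 0.20))  # at 0.20, it is recovered but C. elegans becomes a false positive
```

This mirrors the trade-off reported above: lowering the cutoff from 0.25 to 0.2 adds true positives at the price of the C. elegans false positive.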
We decided to use the intermediate cutoff of \(d\ge 0.25\) to prevent several false-positive predictions in Nematoda. Depending on the studied phylogenetic range, e.g., only arthropods, a lower threshold could increase sensitivity without losing specificity. The amount of genomic and transcriptomic data from a wide range of species is constantly increasing, yet often only a relatively small phylogenetic range is analyzed at a time. The analysis of "universal" evolutionary patterns, however, requires that the same analysis be applied to widely different groups of species. With this study, we provide the most diverse analysis of DNA methyltransferase enzymes in Ecdysozoa to date, spanning a phylogenetic range of more than 700 million years. Previous studies have focused on specific subgroups, in particular Arthropoda (Bewick et al. 2017; Provataris et al. 2018; Lewis et al. 2020; Thomas et al. 2020) and Nematoda (Rošić et al. 2018), and covered only selected phyla. We combined data for five ecdysozoan phyla (Priapulida, Nematoda, Onychophora, Tardigrada and Arthropoda) and identified DNMT1, DNMT2 and DNMT3 in four of these phyla. The only exception is Tardigrada, where neither DNMT1 nor DNMT3 was detected. This suggests the absence of DNA methylation in at least the currently sequenced tardigrade species. Our data show that DNA methyltransferases evolved independently and differently in the studied phyla of Ecdysozoa. We proposed an adapted method (MethMod) to predict the DNA methylation status of a given species based on coding sequence (CDS) data. It was optimized over a wide phylogenetic range and requires only a single decisive parameter (the distance between the component means of a Gaussian mixture model) to achieve high specificity. Naturally, the method is limited if changes in the methylome have not yet altered the underlying genome significantly or if methylation is only present in small amounts.
MethMod is available as a stand-alone Python script and can easily be applied to emerging model organisms, since only coding sequence data are required. The data presented here will help to guide future projects that experimentally study DNA methylation in non-model ecdysozoan species. The proposed analysis should be a worthwhile addition to newly sequenced genomes, as it allows their scope to be expanded from the genomic to the epigenomic level.

Data Availability

The data underlying this article are available in the article and in its online supplementary material.

Code Availability

MethMod, which was used to predict the DNA methylation status, is available on GitHub: https://github.com/JanLeipzig/MethMod/

References

Aliaga B, Bulla I, Mouahid G, Duval D, Grunau C (2019) Universality of the DNA methylation codes in Eucaryotes. Sci Rep 9(1):1–11
Arribas P, Andújar C, Moraza ML, Linard B, Emerson BC, Vogler AP (2020) Mitochondrial metagenomics reveals the ancient origin and phylodiversity of soil mites and provides a phylogeny of the Acari. Mol Biol Evol 37(3):683–694
Bestor TH (1992) Activation of mammalian DNA methyltransferase by cleavage of a Zn binding regulatory domain. EMBO J 11(7):2611–2617
Bewick AJ, Vogel KJ, Moore AJ, Schmitz RJ (2017) Evolution of DNA methylation across insects. Mol Biol Evol 34(3):654–665
Consortium IHG (2019) Comparative genomics of the major parasitic worms. Nat Genet 51(1):163
Dahlet T, Lleida AA, Al Adhami H, Dumas M, Bender A, Ngondo RP, Tanguy M, Vallet J, Auclair G, Bardet AF et al (2020) Genome-wide analysis in the mouse embryo reveals the importance of DNA methylation for transcription integrity. Nat Commun 11(1):1–14
de Mendoza A, Hatleberg WL, Pang K, Leininger S, Bogdanovic O, Pflueger J, Buckberry S, Technau U, Hejnol A, Adamska M et al (2019a) Convergent evolution of a vertebrate-like methylome in a marine sponge.
Nat Ecol Evol 3(10):1464–1473
de Mendoza A, Pflueger J, Lister R (2019b) Capture of a functionally active methyl-CpG binding domain by an arthropod retrotransposon family. Genome Res 29(8):1277–1286
Devajyothi C, Brahmachari V (1992) Detection of a CpA methylase in an insect system: characterization and substrate specificity. Mol Cell Biochem 110(2):103–111
Dhayalan A, Rajavelu A, Rathert P, Tamas R, Jurkowska RZ, Ragozin S, Jeltsch A (2010) The Dnmt3a PWWP domain reads histone 3 lysine 36 trimethylation and guides DNA methylation. J Biol Chem 285(34):26114–26120
Elango N, Hunt BG, Goodisman MA, Soojin VY (2009) DNA methylation is widespread and associated with differential gene expression in castes of the honeybee, Apis mellifera. Proc Natl Acad Sci USA 106(27):11206–11211
El-Gebali S, Mistry J, Bateman A, Eddy SR, Luciani A, Potter SC, Qureshi M, Richardson LJ, Salazar GA, Smart A et al (2019) The Pfam protein families database in 2019. Nucleic Acids Res 47(D1):D427–D432
Fernández R, Kallal RJ, Dimitrov D, Ballesteros JA, Arnedo MA, Giribet G, Hormiga G (2018) Phylogenomics, diversification dynamics, and comparative transcriptomics across the spider tree of life. Curr Biol 28(9):1489–1497
Field L, Lyko F, Mandrioli M, Prantera G (2004) DNA methylation in insects. Insect Mol Biol 13(2):109–115
Fradin H, Kiontke K, Zegar C, Gutwein M, Lucas J, Kovtun M, Corcoran DL, Baugh LR, Fitch DH, Piano F et al (2017) Genome architecture and evolution of a unichromosomal asexual nematode. Curr Biol 27(19):2928–2939
Gao F, Liu X, Wu XP, Wang XL, Gong D, Lu H, Xia Y, Song Y, Wang J, Du J et al (2012) Differential DNA methylation in discrete developmental stages of the parasitic nematode Trichinella spiralis. Genome Biol 13(10):R100
Gatzmann F, Falckenhayn C, Gutekunst J, Hanna K, Raddatz G, Carneiro VC, Lyko F (2018) The methylome of the marbled crayfish links gene body methylation to stable expression of poorly accessible genes.
Epigenet Chromatin 11(1):57
Ghojogh B, Ghojogh A, Crowley M, Karray F (2019) Fitting a mixture distribution to data: tutorial. arXiv:190106708
Goll MG, Bestor TH (2005) Eukaryotic cytosine methyltransferases. Annu Rev Biochem 74:481–514
Goll MG, Kirpekar F, Maggert KA, Yoder JA, Hsieh CL, Zhang X, Golic KG, Jacobsen SE, Bestor TH (2006) Methylation of tRNAAsp by the DNA methyltransferase homolog Dnmt2. Science 311(5759):395–398
Grbić M, Van Leeuwen T, Clark RM, Rombauts S, Rouzé P, Grbić V, Osborne EJ, Dermauw W, Ngoc PCT, Ortego F et al (2011) The genome of Tetranychus urticae reveals herbivorous pest adaptations. Nature 479(7374):487–492
Harris TW, Arnaboldi V, Cain S, Chan J, Chen WJ, Cho J, Davis P, Gao S, Grove CA, Kishore R et al (2020) WormBase: a modern model organism information resource. Nucleic Acids Res 48(D1):D762–D767
Howard RJ, Puttick MN, Edgecombe GD, Lozano-Fernandez J (2020) Arachnid monophyly: morphological, palaeontological and molecular support for a single terrestrialization within Chelicerata. Arthropod Struct Dev 59:100997
Huff JT, Zilberman D (2014) Dnmt1-independent CG methylation contributes to nucleosome positioning in diverse eukaryotes. Cell 156(6):1286–1297
Huson DH, Bryant D (2006) Application of phylogenetic networks in evolutionary studies. Mol Biol Evol 23(2):254–267
Iyer LM, Abhiman S, Aravind L (2011) Natural history of eukaryotic DNA methylation systems. In: Progress in molecular biology and translational science, vol 101. Elsevier, pp 25–104
Jeltsch A, Jurkowska RZ (2014) New concepts in DNA methylation. Trends Biochem Sci 39(7):310–318
Johnson KP, Dietrich CH, Friedrich F, Beutel RG, Wipfler B, Peters RS, Allen JM, Petersen M, Donath A, Walden KK et al (2018) Phylogenomics and the evolution of hemipteroid insects. Proc Natl Acad Sci USA 115(50):12775–12780
Jönsson KI, Rabbow E, Schill RO, Harms-Ringdahl M, Rettberg P (2008) Tardigrades survive exposure to space in low Earth orbit.
Curr Biol 18(17):R729–R731
Jurkowski TP, Jeltsch A (2011) On the evolutionary origin of eukaryotic DNA methyltransferases and Dnmt2. PLoS ONE 6(11)
Kawahara AY, Plotkin D, Espeland M, Meusemann K, Toussaint EF, Donath A, Gimnich F, Frandsen PB, Zwick A, dos Reis M et al (2019) Phylogenomics reveals the evolutionary timing and pattern of butterflies and moths. Proc Natl Acad Sci USA 116(45):22657–22663
Kent WJ (2002) BLAT—the BLAST-like alignment tool. Genome Res 12(4):656–664
Khodami S, McArthur JV, Blanco-Bercial L, Arbizu PM (2017) Molecular phylogeny and revision of copepod orders (Crustacea: Copepoda). Sci Rep 7(1):1–11
Kim H, Lee S, Jang Y (2011) Macroevolutionary patterns in the Aphidini aphids (Hemiptera: Aphididae): diversification, host association, and biogeographic origins. PLoS ONE 6(9):e24749
Korhonen PK, Pozio E, La Rosa G, Chang BC, Koehler AV, Hoberg EP, Boag PR, Tan P, Jex AR, Hofmann A et al (2016) Phylogenomic and biogeographic reconstruction of the Trichinella complex. Nat Commun 7(1):1–8
Kumar S, Stecher G, Suleski M, Hedges SB (2017) TimeTree: a resource for timelines, timetrees, and divergence times. Mol Biol Evol 34(7):1812–1819
Laumer CE, Fernández R, Lemer S, Combosch D, Kocot KM, Riesgo A, Andrade SC, Sterrer W, Sørensen MV, Giribet G (2019) Revisiting metazoan phylogeny with genomic sampling of all phyla. Proc R Soc B 286(1906):20190831
Lewis S, Ross L, Bain S, Pahita E, Smith S, Cordaux R, Miska E, Lenhard B, Jiggins F, Sarkies P (2020) Widespread conservation and lineage-specific diversification of genome-wide DNA methylation patterns across arthropods. PLoS Genet 16(6):e1008864
Liao J, Karnik R, Gu H, Ziller MJ, Clement K, Tsankov AM, Akopian V, Gifford CA, Donaghey J, Galonska C et al (2015) Targeted disruption of DNMT1, DNMT3A and DNMT3B in human embryonic stem cells. Nat Genet 47(5):469–478
Lyko F (2018) The DNA methyltransferase family: a versatile toolkit for epigenetic regulation.
Nat Rev Genet 19(2):81
McLachlan GJ, Rathnayake S (2014) On the number of components in a Gaussian mixture model. Wiley Interdiscip Rev 4(5):341–355
McLachlan GJ, Lee SX, Rathnayake SI (2019) Finite mixture models. Annu Rev Stat Appl 6:355–378
Misof B, Liu S, Meusemann K, Peters RS, Donath A, Mayer C, Frandsen PB, Ware J, Flouri T, Beutel RG et al (2014) Phylogenomics resolves the timing and pattern of insect evolution. Science 346(6210):763–767
Nováková E, Hypša V, Klein J, Foottit RG, von Dohlen CD, Moran NA (2013) Reconstructing the phylogeny of aphids (Hemiptera: Aphididae) using DNA of the obligate symbiont Buchnera aphidicola. Mol Phylogenet Evol 68(1):42–54
Ooi SK, Qiu C, Bernstein E, Li K, Jia D, Yang Z, Erdjument-Bromage H, Tempst P, Lin SP, Allis CD et al (2007) Dnmt3L connects unmethylated lysine 4 of histone H3 to de novo methylation of DNA. Nature 448(7154):714–717
Ostrovsky AN, Lidgard S, Gordon DP, Schwaha T, Genikhovich G, Ereskovsky AV (2016) Matrotrophy and placentation in invertebrates: a new paradigm. Biol Rev 91(3):673–711
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
Peters RS, Krogmann L, Mayer C, Donath A, Gunkel S, Meusemann K, Kozlov A, Podsiadlowski L, Petersen M, Lanfear R et al (2017) Evolutionary history of the Hymenoptera. Curr Biol 27(7):1013–1018
Provataris P, Meusemann K, Niehuis O, Grath S, Misof B (2018) Signatures of DNA methylation across insects suggest reduced DNA methylation levels in Holometabola. Genome Biol Evol 10(4):1185–1197
Quinlan AR, Hall IM (2010) BEDTools: a flexible suite of utilities for comparing genomic features.
Bioinformatics 26(6):841–842 Raddatz G, Guzzardo PM, Olova N, Fantappié MR, Rampp M, Schaefer M, Reik W, Hannon GJ, Lyko F (2013) Dnmt2-dependent methylomes lack defined DNA methylation patterns. Proc Natl Acad Sci USA 110(21):8627–8631 Rošić S, Amouroux R, Requena CE, Gomes A, Emperle M, Beltran T, Rane JK, Linnett S, Selkirk ME, Schiffer PH et al (2018) Evolutionary analysis indicates that DNA alkylation damage is a byproduct of cytosine DNA methyltransferase activity. Nat Genet 50(3):452–459 Rountree MR, Bachman KE, Baylin SB (2000) DNMT1 binds HDAC2 and a new co-repressor, DMAP1, to form a complex at replication foci. Nat Genet 25(3):269–277 Sayers EW, Agarwala R, Bolton EE, Brister JR, Canese K, Clark K, Connor R, Fiorini N, Funk K, Hefferon T et al (2019) Database resources of the national center for biotechnology information. Nucleic Acids Res 47(Database issue):D23 Schork NJ, Allison DB, Thiel B (1996) Mixture distributions in human genetics research. Stat Methods Med Res 5(2):155–178 Schulz NK, Wagner CI, Ebeling J, Raddatz G, Diddens-de Buhr MF, Lyko F, Kurtz J (2018) Dnmt1 has an essential function despite the absence of CpG DNA methylation in the red flour beetle Tribolium castaneum. Sci Rep 8(1):1–10 Schwentner M, Combosch DJ, Nelson JP, Giribet G (2017) A phylogenomic solution to the origin of insects by resolving crustacean-hexapod relationships. Curr Biol 27(12):1818–1824 Sharma PP, Schwager EE, Extavour CG, Giribet G (2012) Evolution of the chelicera: a dachshund domain is retained in the deutocerebral appendage of Opiliones (Arthropoda, Chelicerata). Evol Dev 14(6):522–533 Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Söding J et al (2011) Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol 7(1):539 Song J, Rechkoblit O, Bestor TH, Patel DJ (2011) Structure of DNMT1-DNA complex reveals a role for autoinhibition in maintenance DNA methylation. 
Science 331(6020):1036–1040 Stevens L, Félix MA, Beltran T, Braendle C, Caurcel C, Fausett S, Fitch D, Frézal L, Gosse C, Kaur T et al (2019) Comparative genomics of 10 new Caenorhabditis species. Evol Lett 3(2):217–236 Thomas GW, Dohmen E, Hughes DS, Murali SC, Poelchau M, Glastad K, Anstead CA, Ayoub NA, Batterham P, Bellair M et al (2020) Gene content evolution in the arthropods. Genome Biol 21(1):1–14 von Dohlen CD, Rowe CA, Heie OE (2006) A test of morphological hypotheses for tribal and subtribal relationships of Aphidinae (Insecta: Hemiptera: Aphididae) using DNA sequences. Mol Phylogenet Evol 38(2):316–329 Weinberg DN, Papillon-Cavanagh S, Chen H, Yue Y, Chen X, Rajagopalan KN, Horth C, McGuire JT, Xu X, Nikbakht H et al (2019) The histone mark H3K36me2 recruits DNMT3A and shapes the intergenic DNA methylation landscape. Nature 573(7773):281–286 Xu X, Li G, Li C, Zhang J, Wang Q, Simmons DK, Chen X, Wijesena N, Zhu W, Wang Z et al (2019) Evolutionary transition between invertebrates and vertebrates via methylation reprogramming in embryogenesis. Natl Sci Rev 6(5):993–1003 Yarychkivska O, Shahabuddin Z, Comfort N, Boulard M, Bestor TH (2018) BAH domains and a histone-like motif in DNA methyltransferase 1 (DNMT1) regulate de novo and maintenance methylation in vivo. J Biol Chem 293(50):19466–19475 Yates AD, Achuthan P, Akanni W, Allen J, Allen J, Alvarez-Jarreta J, Amode MR, Armean IM, Azov AG, Bennett R et al (2020) Ensembl 2020. Nucleic Acids Res 48(D1):D682–D688 Zemach A, McDaniel IE, Silva P, Zilberman D (2010) Genome-wide evolutionary analysis of eukaryotic DNA methylation. Science 328(5980):916–919 Zhang Y, Jurkowska R, Soeroes S, Rajavelu A, Dhayalan A, Bock I, Rathert P, Brandt O, Reinhardt R, Fischle W et al (2010) Chromatin methylation activity of Dnmt3a and Dnmt3a/3L is guided by interaction of the ADD domain with the histone H3 tail. 
Nucleic Acids Res 38(13):4246–4253 Zhang SQ, Che LH, Li Y, Liang D, Pang H, Ślipiński A, Zhang P (2018) Evolutionary history of Coleoptera revealed by extensive sampling of genes and species. Nat Commun 9(1):1–11 Open Access funding enabled and organized by Projekt DEAL. This work was supported by DFG STA 16/1 and 16/2 to PFS. JE was supported by Joachim Herz Stiftung. Bioinformatics Group, Department of Computer Science, University of Leipzig, Härtelstraße 16-18, 04107, Leipzig, Germany Jan Engelhardt & Peter F. Stadler Computational EvoDevo Group, Department of Computer Science, University of Leipzig, Härtelstraße 16-18, 04107, Leipzig, Germany Jan Engelhardt, Oliver Scheer & Sonja J. Prohaska Interdisciplinary Centre for Bioinformatics, University of Leipzig, Härtelstraße 16-18, 04107, Leipzig, Germany Jan Engelhardt, Oliver Scheer, Peter F. Stadler & Sonja J. Prohaska Department of Evolutionary Biology, University of Vienna, Djerassiplatz 1, 1030, Vienna, Austria Jan Engelhardt The Santa Fe Institute, 1399 Hyde Park Rd., Santa Fe, NM, 87501, USA Peter F. Stadler & Sonja J. Prohaska Complexity Science Hub Vienna, Josefstädter Str. 39, 1080, Vienna, Austria Sonja J. Prohaska Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103, Leipzig, Germany Peter F. Stadler Institute for Theoretical Chemistry, University of Vienna, Währingerstraße 17, 1090, Vienna, Austria Facultad de Ciencias, Universidad National de Colombia, Sede Bogotá, Colombia Oliver Scheer Correspondence to Jan Engelhardt. The authors have no conflict of interests to declare. Handling editor: Nicolas Rodrigue. Additional file 1 (DOCX 36202 kb) Additional file 2 (DOCX 1927 kb) Engelhardt, J., Scheer, O., Stadler, P.F. et al. Evolution of DNA Methylation Across Ecdysozoa. J Mol Evol 90, 56–72 (2022). https://doi.org/10.1007/s00239-021-10042-0 Evolutionary epigenetics Gaussian mixture modeling Observed/expected CpG ratio
Observation of elastic topological states in soft materials

Shuaifeng Li, Degang Zhao, Hao Niu, Xuefeng Zhu & Jianfeng Zang

Nature Communications, volume 9, Article number: 1370 (2018)

Topological elastic metamaterials offer insight into classical laws of motion and open up opportunities in quantum and classical information processing. Theoretical modeling and numerical simulation of elastic topological states have been reported, whereas experimental observation remains relatively unexplored. Here we present an experimental observation and numerical simulation of tunable topological states in soft elastic metamaterials. An on-demand, reversible switch in topological phase is achieved by changing the filling ratio, tension, and/or compression of the elastic metamaterials. By combining two elastic metamaterials with distinct topological invariants, we further demonstrate the formation and dynamic tunability of topological interface states under mechanical deformation, as well as the manipulation of elastic wave propagation. Moreover, we provide a topological phase diagram of elastic metamaterials under deformation. Our approach to dynamically controlling interface states in soft materials paves the way to phononic systems for thermal management and soft robotics that require efficient use of energy.

In mathematics, topology describes the properties of space that are preserved under continuous deformation. The concept has been used to explain band structures in condensed matter physics, resulting in the theoretical prediction and experimental observation of topological insulators in electronic systems1,2, and recently also in photonic3,4,5,6 and phononic systems7,8,9,10,11,12,13. Topologically protected wave propagation has prominent applications in quantum computation and communication owing to its remarkable characteristic: robust, defect-immune transport1.
Recent research efforts devoted to phononic topological insulators provide new ways to manipulate sound propagation, such as vibration isolation and particle manipulation. A topological insulator has been demonstrated in airborne acoustics by deliberate design of material parameters, realizing backscattering-immune one-way sound transport10. In addition, topological transport of phonons has been theoretically modeled and numerically simulated in discrete spring-mass systems12,14,15,16. Topologically protected helical edge states have further been realized numerically in continuum solids7. However, experimentally realizing elastic topological states in real materials remains a challenge, which limits applications in elastic devices such as elastic energy storage17 and elastic wave guiding18. Elastic waves, that is, small oscillations in solids, have potential applications in information transport19 as well as seismic monitoring20. By creating bandgaps in architected materials with periodic porous structures, elastic waves can be dramatically attenuated, which is particularly useful for vibration isolation21. Distinct from acoustic and electromagnetic waves, elastic waves are complicated and hard to control because of their richer polarizations22, and topological insulators open a new avenue to control them. The backscattering-free nature of topological transport raises the possibility of large-scale phononic circuits7. Thanks to advanced fabrication techniques such as directional solidification, elastically anisotropic and isotropic materials can now be fabricated with peculiar elastic properties along selected directions. Such materials, however, usually retain a fixed structure and geometry after fabrication, resulting in fixed properties and functionalities.
In contrast, a soft material is capable of reversible mechanical deformation of its global and local structure, providing a new degree of freedom to tune the properties or functionalities of a system. A variety of soft tunable acoustic devices have been reported. For example, the width and position of a phononic bandgap can be tuned by deforming an elastomeric helix array23 and/or buckling elastomeric beams connected to local resonators24. Programmable mechanical behavior has been achieved in a mechanical metamaterial, which may inspire new tunable devices25. Moreover, the intrinsic nonlinear mechanics of soft materials has enabled functions that do not exist in traditional elastic systems26,27. Tunable topological zero-energy motions based on Maxwell frameworks of rods and hinges have been put forward. Although these frameworks have implications for novel machines and robots, the transport of elastic waves in them has not yet been directly revealed, and such frameworks can hardly be treated as continuous media28,29. The combination of soft materials with high-frequency topological states thus offers unprecedented opportunities that call for in-depth exploration. Here we present an experimental observation and numerical simulation of tunable topological states in soft elastic metamaterials. The on-demand reversible switch of topological phase is achieved by changing the filling ratio, or by stretching and/or compressing the soft elastic metamaterials. We further demonstrate the dynamical tunability of the topological states by mechanical deformation, including switch modulation and frequency modulation, as well as the manipulation of elastic wave propagation. Moreover, we provide a topological phase diagram as a general scheme for designing tunable topological states in soft elastic metamaterials. Our research provides a way to manipulate elastic waves artificially and opens an avenue to the development of soft topological insulators.
Design of soft metamaterials

Figure 1a presents the soft metamaterial: a periodic honeycomb array of air holes in a rectangular silicone rubber (Ecoflex) slab of 180 mm × 52 mm × 10 mm. The sample can be stretched or compressed, which rearranges the lattice and reshapes the air cylinder scatterers, as illustrated in the inset of Fig. 1a. We select a hexagonal unit cell, shown in Fig. 1b. By adjusting the nearest-neighbor coupling, namely tuning d/R to change the filling ratio, a twofold Dirac cone can be formed at the M point by accidental degeneracy, as presented in Fig. 1c. The opening and reopening of the Dirac cone can be controlled by continuously changing the filling ratio of the honeycomb lattice. When d/R = 0.7156, a Dirac cone appears at the edge of the Brillouin zone (M point). When d/R is reduced to 0.6 or increased to 0.78, the Dirac cone opens into a bandgap along the ΓM direction or along all directions, respectively. Thus, as the filling ratio increases, the bandgap along the ΓM direction opens, closes, and reopens, as presented in Fig. 1c.

Design of topological elastic metamaterials and two band inversion processes. a Soft elastic metamaterial with a periodic honeycomb array of air holes in rectangular silicone rubber (Ecoflex). The inset shows the stretchability of the soft metamaterial. b Schematic of the soft elastic metamaterial. The red dashed hexagon is the primitive cell, with hexagon edge length R and hole diameter d. \(\overrightarrow {{\mathbf{a}}_1}\) and \(\overrightarrow {{\mathbf{a}}_2}\) are the basis vectors. c Band inversion process as a function of the filling ratio d/R without applied strain, for d/R = 0.6, 0.7156, and 0.78. Dashed lines indicate longitudinal wave bands. Solid lines and dotted lines indicate transverse wave bands along the ΓM direction and along other directions, respectively. The inset shows the irreducible Brillouin zone.
d Band inversion process as a function of strain at fixed filling ratio d/R = 0.68, for three strains from compression to tension: −4.44%, −1.53%, and 3.16%. The calculated Zak phase, 0 or π, is marked on the corresponding bulk band along the ΓM direction (solid lines) in c and d. The calculated bandgap signs ς are marked in the corresponding bandgaps. All band structures are from numerical simulation.

The controlled bandgap evolution can also be achieved by stretching or compressing the soft material. When the soft elastic metamaterial (6 × 6 unit cells) is subjected to tensile or compressive strain, each hexagonal cluster (enclosed by six air pillars) and each individual air pillar undergo shape changes. A two-dimensional (2D) model under the plane strain condition is used to analyze the shape changes of the unit cell and the strain response of the elastomer. Here the elastomer is described by a nearly incompressible Yeoh hyperelastic model. The strain-dependent shape changes are presented in Supplementary Fig. 1. When the elastic metamaterial is uniaxially stretched, the length of the unit cell increases and its width decreases owing to the Poisson effect. Conversely, under compressive strain, the length of the unit cell decreases and its width increases. In both cases, the circular air holes become elliptical. We set d/R = 0.68 and calculate band structures as a function of applied strain, as presented in Fig. 1d. Under a compressive strain of ε = −4.44%, an absolute bandgap is observed between the first and second bands. As the applied strain increases to ε = −1.53%, a Dirac dispersion relation is observed at the M point. As the system is stretched to ε = 3.16%, the Dirac cone previously formed at the M point disappears, shifting toward the MK direction and leaving a bandgap along the ΓM direction. Note that the longitudinal wave bands are also shown in the band structures (dashed lines in Fig. 1c, d).
In this work, however, we focus only on the transverse modes. The applied strain on the elastic metamaterial is small enough that the effect of stress on elastic wave propagation is negligible30, as is the effect of the pseudomagnetic field due to strain gradients31,32,33. Therefore, we consider only the effect of the different geometric configurations of the soft lattice on wave propagation. To investigate the topological properties of this system and verify the topological phase transition, we calculate the topological invariant, namely the Zak phase of each band along the ΓM direction, using the symmetry analysis method34,35. Given the mirror symmetry of our physical system, the Zak phase is quantized and characterizes the topology of the bulk. For each bulk band, the Zak phase is π if the eigenmode at the center of the Brillouin zone has a different symmetry from that at the edge; otherwise, the Zak phase is 0. As an example, Supplementary Fig. 2 presents the eigen displacement field distributions of the transverse modes at the band edges for d/R = 0.6 and 0.78 without applied strain. A similar analysis can be carried out to determine the Zak phase during the deformation process. By analyzing the symmetry properties, we obtain the Zak phase of each band along the ΓM direction, marked in Fig. 1c, d, which reveals a distinct topological phase transition, also known as band inversion. We can similarly obtain the sign of the bandgap quantity ς associated with the Zak phases through a simple expression36:

$$\mathrm{sgn}\left(\varsigma^{(n)}\right) = (-1)^n\,(-1)^l\,\exp\left(i\sum_{m=0}^{n-1}\theta_m^{\mathrm{Zak}}\right)$$

where n is the index of the bandgap and l is the number of band-crossing points beneath it. The resulting bandgap signs are marked in Fig. 1c, d. Details of the analysis are given in Supplementary Note 1.
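As a minimal sketch of the bookkeeping in the expression above (not the authors' code; the Zak phase list and crossing count are hypothetical inputs chosen for illustration), the bandgap sign can be evaluated as:

```python
import cmath
import math

def bandgap_sign(n, zak_phases, crossings_below):
    """Sign of the n-th bandgap from the Zak phases of the bands below it.

    n               -- bandgap index (n = 1 for the lowest gap)
    zak_phases      -- list of Zak phases (each 0 or pi) for bands 0 .. n-1
    crossings_below -- l, the number of band-crossing points beneath the gap
    Implements sgn(varsigma^(n)) = (-1)^n (-1)^l exp(i * sum theta_m^Zak).
    """
    phase_sum = sum(zak_phases[:n])
    value = (-1) ** n * (-1) ** crossings_below * cmath.exp(1j * phase_sum)
    return 1 if value.real > 0 else -1

# For the lowest gap (n = 1, no crossings), a band with Zak phase pi
# yields a positive gap sign, while Zak phase 0 yields a negative one.
```

Since the Zak phases are quantized to 0 or π, the exponential is always ±1 and the resulting sign is well defined, consistent with the two regions of the phase diagram discussed below.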
Topological phase diagram

The topological phase transition at the M point can be achieved either through the filling ratio or through mechanical deformation of the elastic metamaterials. We now examine the variation of the topological properties as a function of d/R and strain. As shown in Fig. 2a, the frequency range of the bandgap along the ΓM direction decreases monotonically as d/R increases from 0.5 to 0.9, with a Dirac point formed at d/R = 0.7156. Fixing the filling ratio at, for example, d/R = 0.6, 0.68, or 0.78 yields a diagram with three plots of bandgap variation, each undergoing a similar topological phase transition in the strain range of −8.89 ~ 8.89%, as presented in Fig. 2b. The three topological transition points occur at strains of −5.71%, −1.375%, and 2.66% for d/R = 0.6, 0.68, and 0.78, respectively. Drawing a dash-dotted line through all of the transition points divides the diagram into two regions, with bandgap signs ς > 0 and ς < 0, respectively. We call the dash-dotted line in Fig. 2b the topological phase transition line. Since the bandgap sign is related to the reflection phase and in turn to the surface impedance Z(ω, k||), the topological phase diagram can also be read as a surface impedance diagram as a function of strain: the yellow region with ς < 0 has Z(ω, k||) < 0, and the cyan region with ς > 0 has Z(ω, k||) > 0. The real parts of the eigen vertical displacement fields of the two distinct regions are shown in Fig. 2c, indicating a topological phase transition with alternation of even and odd Bloch modes.

Topological phase diagram. a Frequencies of the two states at the M point as a function of the filling ratio d/R without applied strain. The bandgap signs of the cyan and yellow regions are ς > 0 and ς < 0, respectively.
b Topological phase diagram serving as a general design scheme, consisting of two domains separated by the topological transition line (black dash-dotted line) formed by connecting the topological transition points of systems with different filling ratios. Green, blue, and red solid lines trace the frequencies of the two band-edge states at the M point as a function of strain. The bandgap signs ς are marked in the yellow and cyan regions. c Real parts of the eigen vertical vibration modes at the M point; the mode inversion represents the topological phase inversion in the solid. The vibration modes correspond to the stars marked in b. The imaginary part of the surface impedance and the bandgap sign are marked between the two vibration modes. All data are from numerical simulation.

The topological phase diagram in Fig. 2b provides a general scheme for designing topological interface states in soft elastic metamaterials. For topological interface states to form, the bandgaps of the two metamaterials must share an overlapping frequency range across a band inversion. According to the phase diagram, we can construct a topological system by combining two metamaterials with different topological phases whose bandgaps overlap, denoted by (d1/R, ε1|d2/R, ε2). Possible constructions include, but are not limited to, (0.6, 5.56%|0.68, −8.89%) in the frequency range of 263 ~ 283 Hz, (0.6, 0%|0.68, −8.89%) in the frequency range of 272 ~ 293 Hz, and (0.6, −1.11%|0.68, −8.89%) in the frequency range of 277 ~ 295 Hz. The uniaxial strains mentioned above are along the horizontal direction; the topological phase can also be inverted by applying strain along the vertical direction. The phase diagram for uniaxial strains along the vertical direction is shown in Supplementary Fig. 3.
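The pairing rule above (overlapping bandgaps with opposite bandgap signs) can be sketched as a small helper function; this is an illustrative sketch, and the gap edges in the example are hypothetical placeholders rather than values from the paper:

```python
def interface_state_window(gap_a, gap_b):
    """Return the frequency window that can host a topological interface
    state, given two bandgaps as (f_low, f_high, sign) tuples, where
    sign is the bandgap sign varsigma (+1 or -1).

    An interface state requires overlapping gaps with opposite signs,
    so that the surface impedances can satisfy Z_L + Z_R = 0.
    Returns None when no such window exists.
    """
    lo = max(gap_a[0], gap_b[0])
    hi = min(gap_a[1], gap_b[1])
    if gap_a[2] * gap_b[2] < 0 and lo < hi:
        return (lo, hi)
    return None

# Hypothetical example: gaps 250-283 Hz (sign +1) and 263-298 Hz (sign -1)
# overlap in 263-283 Hz, so an interface-state window exists there.
```

Sweeping such a check over the strain axis of the phase diagram would map out all valid (d1/R, ε1|d2/R, ε2) pairings.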
Observation of topological interface state

When two elastic systems with different topological invariants are joined edge to edge, topological interface states are predicted to emerge according to the surface impedance matching condition Z_L(ω, k||) + Z_R(ω, k||) = 0. As a typical example, we demonstrate the numerical observation of a topological interface state by constructing a ribbon from two elastic metamaterials, (0.6, 5.56%|0.68, −8.89%), as presented in Fig. 3a. Combining the two elastic metamaterials forms a 30 × 1 supercell, which we use to calculate the projected band structure. As shown in Fig. 3b, a flat band of the transverse mode at a frequency of 274 Hz is observed within the bulk bandgap in the projected band structure along the k_x direction. The real-space distribution of the displacement field is displayed in Fig. 3a for a transverse input excitation of 274 Hz applied on one side of the ribbon. The vibrations are localized mainly at the interface between the two elastic metamaterials and attenuate dramatically into the bulk, which coincides with the field distribution of the eigenmode of the flat topological band (Supplementary Fig. 4). Thus, the flat band is the topological interface mode, independent of the bulk modes. Figure 3c presents the transmission spectra as a function of the input excitation angle, with coexisting transverse and longitudinal excitation. As the excitation angle θ increases, the fraction of the input transverse wave increases. No transverse transmission peak is observed at θ = 0°. As θ changes to 30°, 60°, and 90°, a transverse transmission peak emerges at 274 Hz and gradually grows to a maximum. No evident peak is found in the longitudinal transmission spectra in this process; the small peak at θ = 90° may be attributed to conversion of the longitudinal wave into the transverse wave.

Numerical observation of topological interface states.
a The numerical calculation uses a ribbon with periodic boundary conditions, consisting of two elastic metamaterials, (0.6, 5.56%|0.68, −8.89%). A harmonic force with tunable angle is applied on the edge of the ribbon. b Simulated projected band structure along the k_x direction, indicated by \(\bar \Gamma {\bar{\mathrm M}}\), with transverse interface modes. The bars above Γ and M distinguish these points from the Γ and M of the unit-cell band structure. The red line indicates the interface state, independent of the bulk states (magenta region); gray lines indicate longitudinal wave modes. c Simulated transmission spectra as a function of excitation angle (θ = 0°, 30°, 60°, and 90°). Transmission spectra for transverse modes (marked s) and longitudinal modes (marked p) are shown for comparison.

The topological interface states above were achieved numerically by combining two metamaterials with different filling ratios according to the phase diagram in Fig. 2b. In fact, topological interface states can also be obtained from two metamaterials with the same filling ratio by considering both the phase diagram for horizontal strain in Fig. 2b and the phase diagram for vertical strain in Supplementary Fig. 3. For example, the projected band structure and eigenmode of the elastic system with filling ratio d/R = 0.68 under strains in both the horizontal and vertical directions are presented in Supplementary Fig. 5. To observe the topological interface states experimentally, we apply the excitation force along the interface between the two metamaterials, as presented in Fig. 4a. We calculate the projected band structure along the k_y direction using the same supercell as in the calculation of Fig. 3b. As shown in Supplementary Fig. 6, a topological flat band (red dots) is observed near the Γ point, and a series of discrete modes are found above and below the flat band.
Notably, when elastic waves of different frequencies enter the elastic metamaterial, waves with frequencies above and below the flat band are separated. An elastic wave at the frequency of the flat band is localized at the interface, exhibiting the topological interface state, while elastic waves of lower or higher frequency propagate to the right or to the left, respectively. This splitting propagation is clearly revealed by the vertical displacement field distributions at three typical frequencies, 261, 274, and 286 Hz, shown in Fig. 4b.

Experimental observation of topological interface state and demonstration of an elastic wave splitter. a Experimental setup of an Ecoflex slab consisting of two elastic metamaterials, (0.6, 5.56%|0.68, −8.89%). Magenta, red, and blue lines schematically indicate the elastic wave propagation directions as a function of input frequency. b Numerical simulations of the vertical displacement fields for different transverse wave propagations at three input frequencies: 261, 274, and 286 Hz, from top to bottom. c Experimental observation of the topological interface state at 274 Hz, corresponding to the second panel in b, obtained by measuring the displacement at the 24 holes marked by the cyan line. Values of the measured displacements represent the mean of n tests (n = 5; error bars are defined as s.d.). Simulation results are shown as the blue dashed line. Sequence numbers marked in red indicate the hole numbers. The black dashed line indicates the position of the interface between the two metamaterials. d Experimentally measured displacement on the right side, at the magenta hole, is presented by the magenta dashed curve; the displacement on the left side, at the blue hole, is presented by the blue dashed curve. The displacement ratio of the left side over the right side, L/R, is presented as a function of input frequency (black solid curve).
The magenta domain indicates the right-propagating mode, while the blue domain indicates the left-propagating mode. The gray region is the intermediate mode defined in the main text.

We fabricate an elastic metamaterial sample composed of an air-hole array (see Methods) with the configuration (0.6, 5.56%|0.68, −8.89%), as presented in Fig. 4a and Supplementary Fig. 9. A shaker that excites the elastic wave is placed at the interface of the sample, and an accelerometer is placed in the 24 holes, one by one, along the cyan line marked in Fig. 4b to detect the displacement. When the frequency of the excitation signal is 274 Hz, the displacements detected at the 24 holes are summarized in Fig. 4c. The magnitude of the displacement reaches a maximum near the interface and declines sharply away from it. Notably, the vibration to the left of the interface decays more slowly than that to the right, consistent with the simulation result in Fig. 4c and the field distribution in Fig. 4b. Note that inserting an accelerometer into a hole to measure the displacement adds mass to the sample. Nonetheless, the accelerometer method has been employed as an effective way to detect vibrations of elastic metamaterials21,24, and, owing to the stability of topological interface states, Majorana edge states have already been observed using accelerometers37. In our case, the further simulations and experiments in Supplementary Fig. 7 and Supplementary Note 2 confirm that the added-mass effect can be neglected, so the estimated displacement field is valid. To demonstrate the elastic wave splitter, the accelerometer is placed in a hole on the left or right side of the propagation pathway of the elastic wave, according to the simulation results in Fig. 4b; the experimental setup is presented in Supplementary Fig. 9. Figure 4d presents the collected displacement of the left part (magenta dashed curve) and the right part (blue dashed curve) in the frequency range of 250 ~ 320 Hz.
The displacement ratio of the left part over the right part is shown as the black curve in Fig. 4d. We define the range in which the displacement ratio lies between 0.9 and 1.1 as the intermediate mode (gray region in Fig. 4d), which corresponds to the steepest part of the ratio curve, in the frequency range of 293 ~ 295 Hz. A ratio above 1.1 is regarded as the left-propagating mode, found above 295 Hz, while a ratio below 0.9 is regarded as the right-propagating mode, found below 293 Hz. This feature may find application in a phonon frequency splitter owing to the different group velocities seen in the projected band structure (Supplementary Fig. 6), and it differs from chiral propagation in time-reversal-breaking systems38. The slight disagreement between the frequency of the intermediate mode (293 ~ 295 Hz) and that of the flat band (274 Hz) may be attributed to two causes. First, the detection position of the accelerometer affects the magnitude of the measured displacement: the vibration is concentrated mainly in the matrix rather than around the holes, so the measured displacement is slightly smaller than the simulation results. Second, the fixtures placed on the two edges of the interface line between the two metamaterials may constrain the vibration in their vicinity. In addition, a small intermediate region between the two metamaterials, arising from their different strain setups, may also affect the measurement of the displacement.

Dynamical manipulation of topological interface states

Tunable elastic topological states are important and may find application in large-scale phononic circuits. Zero-frequency adaptive behavior controlled by external mechanical loads has been demonstrated, involving floppy modes and states of self-stress39. Here we present a soft topological metamaterial with dynamically tunable topological properties.
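The ratio-based mode classification used for the splitter above (thresholds 0.9 and 1.1 on the L/R displacement ratio) can be sketched as a small helper; the function name and inputs are illustrative, not taken from the paper:

```python
def classify_propagation(left_disp, right_disp):
    """Classify the propagation mode from displacement amplitudes
    measured on the left and right sides of the interface, using the
    L/R ratio thresholds 0.9 and 1.1 quoted in the text."""
    ratio = left_disp / right_disp
    if ratio > 1.1:
        return "left"          # left-propagating mode
    if ratio < 0.9:
        return "right"         # right-propagating mode
    return "intermediate"      # near-unity ratio: intermediate mode
```

Applied to a frequency sweep of measured amplitudes, this reproduces the three regions of Fig. 4d: right-propagating below 293 Hz, intermediate at 293 ~ 295 Hz, and left-propagating above 295 Hz.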
Figure 5a presents a combination of two elastic metamaterials, (0.6, ε1|0.68, −8.89%), in which the strain ε1 is variable in the range of −5.56 ~ 8.89%. According to the topological phase diagram (Fig. 2b), the metamaterial (0.68, −8.89%) on the right side has a relatively large bandgap in the frequency range of 263 ~ 298 Hz, while the metamaterial (0.6, ε1) on the left side shares a common gap with it when ε1 lies in the range of −1.11 ~ 8.89%. Figure 5b shows that the numerically calculated topological interface states emerge only along the solid line, in the frequency range of 271 ~ 285 Hz, as the strain ε1 varies over −1.11 ~ 8.89%. We select four topological interface states on the solid line (Fig. 5b) at strains ε1 = 1.11%, 3.33%, 5.56%, and 7.78%, corresponding to frequencies of 279, 278, 274, and 272 Hz, respectively.

Tunability of the soft topological system. a Schematic of the tunable topological system with two metamaterials, (0.6, ε1|0.68, −8.89%), where the strain ε1 is variable in the range of −5.56 ~ 8.89%. b Numerically simulated frequencies of the topological interface state as a function of strain. The topological state emerges at a certain frequency, which decreases as the strain increases. c Simulated transverse wave transmission peaks for four selected strains, ε1 = 1.11, 3.33, 5.56, and 7.78%, corresponding to the four colored symbols marked in b. d Experimentally measured vertical displacement field distributions at the four selected strains; the markers and colors correspond to those in b. e Experimentally measured displacement field distributions at five selected frequencies, 272, 273, 274, 275, and 276 Hz, at a strain of 5.56%, along the purple dashed line in b. f A snapshot of the experimental demonstration of dynamical manipulation of the topological interface state.
The black dashed lines in a, d, and e indicate the position of the interface between the two metamaterials.

Figure 5c presents the simulated transmission spectra of the transverse wave for the four selected strain levels, all of which show sharp transmission peaks. The transmission peak shifts to a lower or higher frequency when the metamaterial is stretched or compressed, respectively. The corresponding experimental displacement field distributions are displayed in Fig. 5d to confirm the existence of the topological interface states, using the experimental setup detailed in Supplementary Fig. 9. Moreover, when we change the frequency of the input excitation along the purple dashed line in Fig. 5b while keeping ε1 fixed at 5.56%, the topological interface state can be observed only at the frequency of 274 Hz, as presented in the experimental displacement field distribution in Fig. 5e. An explicit comparison of the experimental and simulation results is displayed in Supplementary Fig. 8, where good agreement of the displacement fields can be observed. The elastomeric nature of the system makes the deformation process continuously reversible and repeatable, suggesting that the elastic device can perform well over thousands of uses. The appearance and disappearance of the strong field localization of the interface states, induced by mechanical deformation, can be dynamically manipulated, as shown in the Supplementary Movie. Figure 5f shows a snapshot of the movie, indicating the experimental setup and the dynamical process. When the input vibration is fixed at a frequency of 274 Hz, the topological interface state emerges at a strain of 5.56% and disappears when the strain is larger or smaller than 5.56%. The movie shows the time-dependent acceleration signal as we stretch or compress the metamaterial. The topological interface state appears when the acceleration value reaches its maximum, which is marked in the Supplementary Movie.
The experimental details are given in Supplementary Note 2. The transport of elastic waves in the elastic topological metamaterial is quite different from topologically protected transport in acoustic systems. Although both elastic and acoustic waves can propagate along an interface without backscattering, the working frequency of an acoustic topological insulator is easily affected by its surroundings, such as water and air10. Since the polarization of our elastic topological interface states is transverse, their working frequency is relatively stable. We further calculate the phase diagrams of the soft metamaterial in water and in vacuum (Supplementary Fig. 10). We find that the band structure of the soft metamaterial in vacuum or water is similar to that in air, indicating a robust band structure regardless of the surrounding fluid. The frequency ranges of the bandgaps of the soft metamaterial in air or vacuum differ slightly from those in water (~25 Hz), which mainly results from the impedance mismatch and the different loss intensities at the boundary between the soft metamaterial and its surroundings. Since the soft metamaterials in different surroundings share a similar phase diagram, we can readily find topological interface states in a soft metamaterial even when the holes are randomly patterned with different fillers (air, water, and vacuum), which strongly demonstrates the immunity of the topological states. Our finding provides an example of a topological insulator working in a complicated environment, which has particular advantages in the information processing and communication fields. Knowledge of strain-induced elastic topological phase transitions opens avenues for topological state manipulation. This strategy offers the possibility to realize and then control topological interface states both statically and dynamically.
Although in our study we focus only on unit cells of millimeter scale, the proposed design can be made more complex and realized at various scales depending on the operating frequency. Our research may be generalized to other microscopic and macroscopic phononic systems, such as thermal management and soft robotics, that make better use of energy. Our study may also inspire electronic topological insulators on elastomer substrates for flexible electronic devices40. A commercial silicone rubber, Ecoflex 0030 (Smooth-On®), is used to cast the experimental samples, with material density ρ = 1030 kg m−3. To investigate the mechanical properties of Ecoflex 0030 (part A:part B = 1:1), a uniaxial tension experiment is carried out. A mold based on the ASTM D412 standard is fabricated from 3 mm acrylic plate using a laser cutter, with a vector-optimized cutting path to avoid defects in the critical neck region. Subsequently, the Ecoflex mixture, after being thoroughly degassed in a vacuum chamber, is cast into a two-part mold consisting of a dog-bone-shaped top and an acrylic-plate bottom. It is then left at room temperature for 4 h to cure. A tensile test is performed on a universal testing machine, and the relation between engineering stress and elongation is obtained (Supplementary Fig. 11). A nearly incompressible Yeoh hyperelastic model is used to fit this relation, whose strain energy density is given by41: $$W = \mathop{\sum}\limits_{i = 1}^3 C_i\left( I_1 - 3 \right)^i + \mathop{\sum}\limits_{i = 1}^3 \frac{\left( J - 1 \right)^{2i}}{D_i}$$ where $$I_1$$ is the first deviatoric strain invariant and $$J$$ is the volume ratio. From the fit we obtain the mechanical properties of the soft material as follows: C1 = 13.44 kPa; C2 = 595 Pa; C3 = −0.8153 Pa; and D1 = D2 = D3 = 14.88 GPa−1.

Sample fabrication

To fabricate the soft metamaterial sample accurately, an assembled mold is prepared. The mold comprises a base, four lateral walls, and hundreds of well-polished stainless steel rods that shape the contours of the holes precisely.
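As a quick numerical check, the fitted strain energy density can be evaluated directly from the reported coefficients. This is a minimal sketch (the function names and the uniaxial-stretch helper are ours, not from the paper), assuming the standard nearly incompressible Yeoh form with $$I_1$$ the first deviatoric invariant and $$J$$ the volume ratio:

```python
def yeoh_energy(I1, J,
                C=(13.44e3, 595.0, -0.8153),  # C1..C3 in Pa (fitted values)
                D=(14.88e-9,) * 3):           # D1..D3 in 1/Pa (14.88 GPa^-1)
    """Yeoh strain energy density W in Pa: a deviatoric series in
    (I1 - 3) plus a volumetric penalty series in (J - 1)."""
    dev = sum(Ci * (I1 - 3.0) ** (i + 1) for i, Ci in enumerate(C))
    vol = sum((J - 1.0) ** (2 * (i + 1)) / Di for i, Di in enumerate(D))
    return dev + vol

def uniaxial_I1(stretch):
    """First invariant for an incompressible uniaxial stretch lambda."""
    return stretch ** 2 + 2.0 / stretch
```

In the undeformed state (I1 = 3, J = 1) the energy vanishes, and for the small strains used here (<9%) the C1 term dominates, so the material behaves nearly neo-Hookean.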
The base and lateral walls are cut from acrylic plate by a laser cutter. Stainless steel rods with diameters of 3.0 and 3.4 mm are made by polishing and wire cutting to a height slightly above the walls on every side. Subsequently, these rods are inserted into the corresponding holes in the base. After fabrication of the mold, the two parts of the silicone rubber are mixed thoroughly in a 1:1 ratio using an electric mixer, and the mixture is placed in a vacuum chamber for degassing. Next, the mixture is poured into the mold and allowed to cure at room temperature for 4 h. With this method, we can precisely fabricate samples combining elastic metamaterials with different filling ratios. After demolding, the sidewalls are cut from the sample to satisfy the periodic condition. Here we obtain a metamaterial sample with two 6 × 6 unit-cell regions: l = 10 mm; c = 5 mm; \(m = \sqrt {3}c\); and d1 = 3.4 mm, d2 = 3.0 mm (Supplementary Fig. 1a). The transverse wave is generated by a shaker (Brüel & Kjær, Type 5961), and the vibration is transmitted by a rigid rod. At each side of the sample, clips are used to fix, stretch, and compress the sample. At the interface of the two metamaterials, two clips fix the edges of the interface so that the metamaterials can deform independently. We measure the interface state, characterized by the displacement field distribution, using an accelerometer (Brüel & Kjær, Type 4517) at several levels of applied deformation. At the strain level of interest, we immobilize the specimen by fixing the slide block on the slide guide and measure the displacement in a set of number-labeled air holes marked in Fig. 4b. To measure the different propagation directions of the elastic wave in the two parts of the metamaterial, we choose two holes on each side according to the simulation results and place the accelerometer in them. The exciter provides a random signal, and the spectrum is obtained by Fourier transform.
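The last step — recovering a frequency spectrum from the accelerometer response to random excitation — can be sketched as follows. This is a minimal NumPy illustration; the windowing choice and function names are ours, not the paper's processing chain:

```python
import numpy as np

def acceleration_spectrum(signal, fs):
    """Single-sided amplitude spectrum of an accelerometer time series.

    signal: sampled acceleration values; fs: sampling rate in Hz.
    Returns (frequencies in Hz, spectral amplitudes)."""
    n = len(signal)
    window = np.hanning(n)  # taper to reduce leakage from the random excitation
    amp = np.abs(np.fft.rfft(signal * window)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, amp
```

An interface state then shows up as a sharp peak in the spectrum near the predicted frequency (e.g. ~274 Hz) at the holes closest to the interface.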
A photo of the specific experimental setup and a related description are shown in Supplementary Fig. 9 and Supplementary Note 2. The ambient noise is also recorded while the sample is statically placed.

Numerical simulations

Numerical simulations are performed using COMSOL Multiphysics, a finite-element analysis and solver software package. The simulations are implemented in the 2D acoustic–structure coupling module, including the actual geometric sizes and relevant material properties. The system consists of resonators filled with air (or vacuum or water) and the soft material Ecoflex 0030. The parameters used for air are a mass density of 1.29 kg m−3 and a sound speed of 340 m s−1. The mechanical properties of Ecoflex 0030 are taken from the Yeoh hyperelastic model fitted to the experimental data. The geometric parameters of the unit cell are calculated while the elastic metamaterial is under small mechanical deformation (<9%) (Supplementary Fig. 1). The deformed unit cell with periodic boundary conditions is used to calculate the band structure and eigenmodes, while a supercell with a unidirectional periodic condition is used to calculate the projected band structure, as detailed in Supplementary Note 3. To calculate the transmission spectra, an external force at different frequencies is imposed on the boundary of the finite-size sample to mimic the incident wave, and the displacement at the outgoing end is collected as the transmitted signal. The data line is set two to three lattice constants in front of the edge of the perfectly matched layer (PML), depending on the input frequency. For the calculation of wave propagation in the finite sample, PMLs are placed around the sample to prevent the leakage of energy. The data in this study are available from the corresponding author on request.

Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010). Qi, X. & Zhang, S. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011). Cheng, X. et al.
Robust reconfigurable electromagnetic pathways within a photonic topological insulator. Nat. Mater. 15, 542–548 (2016). Khanikaev, A. B. et al. Photonic topological insulators. Nat. Mater. 12, 233–239 (2013). Lu, L., Fu, L., Joannopoulos, J. D. & Soljacic, M. Weyl points and line nodes in gyroid photonic crystals. Nat. Photon. 7, 294–299 (2013). Wu, L. & Hu, X. Scheme for achieving a topological photonic crystal by using dielectric material. Phys. Rev. Lett. 114, 223901 (2015). Mousavi, S. H., Khanikaev, A. B. & Wang, Z. Topologically protected elastic waves in phononic metamaterials. Nat. Commun. 6, 8682 (2015). Khanikaev, A. B., Fleury, R., Mousavi, S. H. & Alu, A. Topologically robust sound propagation in an angular-momentum-biased graphene-like resonator lattice. Nat. Commun. 6, 8260 (2015). Xiao, M., Chen, W., He, W. & Chan, C. T. Synthetic gauge flux and Weyl points in acoustic systems. Nat. Phys. 11, 920–924 (2015). He, C. et al. Acoustic topological insulator and robust one-way sound transport. Nat. Phys. 12, 1124–1129 (2016). Yang, Z. et al. Topological acoustics. Phys. Rev. Lett. 114, 114301 (2015). Wang, P., Lu, L. & Bertoldi, K. Topological phononic crystals with one-way elastic edge waves. Phys. Rev. Lett. 115, 104302 (2015). Prodan, E. & Prodan, C. Topological phonon modes and their role in dynamic instability of microtubules. Phys. Rev. Lett. 103, 248101 (2009). Deymier, P. A., Runge, K. & Vasseur, J. O. Geometric phase and topology of elastic oscillations and vibrations in model systems: harmonic oscillator and superlattice. AIP Adv. 6, 121801 (2016). Pal, R. K. & Ruzzene, M. Edge waves in plates with resonators: an elastic analogue of the quantum valley Hall effect. New J. Phys. 19, 025001 (2017). Po, H. C., Bahri, Y. & Vishwanath, A.
Phonon analog of topological nodal semimetals. Phys. Rev. B 93, 205158 (2016). Shan, S. et al. Multistable architected materials for trapping elastic strain energy. Adv. Mater. 27, 4296–4301 (2015). Ma, P. S., Kwon, Y. E. & Kim, Y. Y. Wave dispersion tailoring in an elastic waveguide by phononic crystals. Appl. Phys. Lett. 103, 151901 (2013). Jin, Y., Ying, Y. & Zhao, D. Data communications using guided elastic waves by time reversal pulse position modulation: experimental study. Sensors (Basel) 13, 8352–8376 (2013). Brule, S., Javelaud, E. H., Enoch, S. & Guenneau, S. Experiments on seismic metamaterials: molding surface waves. Phys. Rev. Lett. 112, 133901 (2014). Javid, F., Wang, P., Shanian, A. & Bertoldi, K. Architected materials with ultra-low porosity for vibration control. Adv. Mater. 28, 5943–5948 (2016). Ma, G. et al. Polarization bandgaps and fluid-like elasticity in fully solid elastic metamaterials. Nat. Commun. 7, 13536 (2016). Babaee, S., Viard, N., Wang, P., Fang, N. X. & Bertoldi, K. Harnessing deformation to switch on and off the propagation of sound. Adv. Mater. 28, 1631–1635 (2016). Wang, P., Casadei, F., Shan, S., Weaver, J. C. & Bertoldi, K. Harnessing buckling to design tunable locally resonant acoustic metamaterials. Phys. Rev. Lett. 113, 014301 (2014). Coulais, C., van Hecke, M. & Florijn, B. Programmable mechanical metamaterials. Phys. Rev. Lett. 113, 175503 (2014). Raney, J. R. et al. Stable propagation of mechanical signals in soft media using stored elastic energy. Proc. Natl Acad. Sci. USA 113, 9722–9727 (2016). Brunet, T., Leng, J. & Mondain-Monval, O. Soft acoustic metamaterials. Science 342, 323–324 (2013). Chen, B. G., Upadhyaya, N. & Vitelli, V. Nonlinear conduction via solitons in a topological mechanical insulator. Proc. Natl Acad. Sci. USA 111, 13004–13009 (2014). Rocklin, D. Z., Zhou, S., Sun, K. & Mao, X. Transformable topological mechanical metamaterials. Nat.
Commun. 8, 14201 (2017). Parnell, W. J. Effective wave propagation in a prestressed nonlinear elastic composite bar. IMA J. Appl. Math. 72, 223–244 (2007). Rechtsman, M. C. et al. Strain-induced pseudomagnetic field and photonic Landau levels in dielectric structures. Nat. Photon. 7, 153–158 (2013). Schomerus, H. & Halpern, N. Y. Parity anomaly and Landau-level lasing in strained photonic honeycomb lattices. Phys. Rev. Lett. 110, 013903 (2013). Zhu, S., Stroscio, J. A. & Li, T. Programmable extreme pseudomagnetic fields in graphene by a uniaxial stretch. Phys. Rev. Lett. 115, 245501 (2015). Kohn, W. Analytic properties of Bloch waves and Wannier functions. Phys. Rev. 115, 809–821 (1959). Zak, J. Symmetry criterion for surface states in solids. Phys. Rev. B 32, 2218–2226 (1985). Xiao, M., Zhang, Z. Q. & Chan, C. T. Surface impedance and bulk band geometric phases in one-dimensional systems. Phys. Rev. X 4, 021017 (2014). Prodan, E., Dobiszewski, K., Kanwai, A., Palmieri, J. & Prodan, C. Dynamical Majorana edge modes in a broad class of topological mechanical systems. Nat. Commun. 8, 14587 (2017). Souslov, A., van Zuiden, B. C., Bartolo, D. & Vitelli, V. Topological sound in active-liquid metamaterials. Nat. Phys. 13, 1091–1094 (2017). Paulose, J., Meeussen, A. S. & Vitelli, V. Selective buckling via states of self-stress in topological metamaterials. Proc. Natl Acad. Sci. USA 112, 7639–7644 (2015). Peng, H. et al. Topological insulator nanostructures for near-infrared transparent flexible electrodes. Nat. Chem. 4, 281–286 (2012). Yeoh, O. H. Some forms of the strain energy function for rubber. Rubber Chem. Technol. 66, 754–771 (1993). This work was supported by the National Natural Science Foundation of China (No. 51572096) and the National 1000 Talents Program of China tenable at HUST. We thank Dr. Meng Xiao of Stanford University for fruitful discussions.
We are grateful to Yugui Peng and Yaxi Shen of Huazhong University of Science and Technology for theoretical and experimental discussions. School of Optical and Electronic Information and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China: Shuaifeng Li, Hao Niu & Jianfeng Zang. School of Physics, Huazhong University of Science and Technology, Wuhan, 430074, China: Degang Zhao & Xuefeng Zhu. Innovation Institute, Huazhong University of Science and Technology, Wuhan, 430074, China: Shuaifeng Li, Degang Zhao, Hao Niu, Xuefeng Zhu & Jianfeng Zang. J.Z. and S.L. designed the project. S.L. performed the numerical simulations and carried out the experiments. D.Z. assisted with the numerical simulations. H.N. assisted with part of the experiments. S.L. and J.Z. wrote the manuscript. All authors were involved in the analysis and discussion of the results and in improvement of the manuscript. Correspondence to Jianfeng Zang. Li, S., Zhao, D., Niu, H. et al. Observation of elastic topological states in soft materials. Nat. Commun. 9, 1370 (2018). https://doi.org/10.1038/s41467-018-03830-8
Mexico–U.S. Immigration: Effects of Wages and Border Enforcement. Rebecca Lessem, 2018-10-01. Abstract: In this article, I study how relative wages and border enforcement affect immigration from Mexico to the U.S. To do this, I develop a discrete-choice dynamic programming model in which people choose from a set of locations in both the U.S. and Mexico, while accounting for the location of one's spouse when making decisions. I estimate the model using data on individual immigration decisions from the Mexican Migration Project. Counterfactuals show that a 10% increase in Mexican wages reduces migration rates and durations, overall decreasing the number of years spent in the U.S. by about 5%. A 50% increase in enforcement reduces migration rates and increases durations of stay in the U.S., and the overall effect is a 7% decrease in the number of years spent in the U.S. 1. Introduction Approximately 11 million Mexican immigrants were living illegally in the U.S. in 2015 (Krogstad et al., 2017). This large migrant community affects the economies of both countries. For example, migrants send remittances back home, which support development in Mexico.1 In the U.S., concern about illegal immigration affects political debate and policy. Border enforcement has been increasing since the mid-1980s, and it grew by a factor of 13 between 1986 and 2002 (Massey, 2007). This was a major issue in the 2016 presidential election, where President Trump campaigned on the promise of a wall between the two countries to cut down on illegal immigration. Despite these large concerns about illegal immigration from Mexico, much about the individual decisions and mechanisms remains poorly understood. In this article, I study how wage differentials and U.S. border enforcement affect an individual's immigration decisions.
Given the common pattern of repeat and return migration in the data, changes in policy affect both current and future decisions. For example, increased enforcement not only reduces initial migration rates, but also increases the duration of stay in the U.S. by making it more costly for people to come back to the U.S. after returning home. To capture such intertemporal effects, I analyse this problem in a dynamic setting where people choose from multiple locations each period, following Kennan and Walker (2011).2 The model extends Kennan and Walker's (2011) framework in two dimensions. First, I allow for moves across an international border, where people choose from a set of locations which includes states in both the U.S. and Mexico, necessitating different treatment of illegal and legal immigration. By observing individual legal status, where illegal immigrants crossed the border, and U.S. border enforcement, which varies across locations and time, I can capture various trade-offs of immigration decisions. Secondly, I allow for interactions within the decisions of husbands and wives. The data show that this is important, in that 5.7% of women with a husband in the U.S. move each year, compared to an overall female migration rate of 0.6%, suggesting a positive utility of living in the same place.3 Therefore, a married man living in the U.S. alone will consider the likelihood that his wife will join him, which is endogenous given that she also makes active decisions. This affects reactions to the policy environment. For example, as enforcement increases, a married man living in the U.S. alone knows that his wife is less likely to join him, giving him an extra incentive to return to Mexico. To capture these types of mechanisms, we need a model that allows for interactions within married couples. The most similar paper on Mexico–U.S.
immigration is Thom (2010), which estimates a dynamic migration model where men choose which country to live in, focusing on savings decisions as an incentive for repeat and return migration.4 In comparison, in my model, people choose from multiple locations in both countries, allowing for both internal and international migration. I also allow for a relationship between the decisions of married couples, enabling me to study how family interactions affect the counterfactual outcomes. Gemici (2011) studies family migration by estimating a dynamic model of migration decisions with intra-household bargaining using U.S. data. In her model, married couples make a joint decision on where to live together, whereas the data from Mexico show that couples often live in different locations. In this article, I estimate a discrete-choice dynamic programming model where individuals choose from a set of locations in Mexico and the U.S. in each period. Individuals' choices depend on the location of their spouse. To make this computationally feasible, I model household decisions in a sequential process: first, the household head picks a location, and then the spouse decides where to live. The model differentiates between legal and illegal immigrants, who face different moving costs and a different wage distribution in the U.S.5 Border enforcement, measured as the number of person-hours spent patrolling the border, affects the moving cost only for illegal immigrants. To evaluate the effectiveness of border enforcement, I use a new identification strategy, which accounts for the variation in the allocation of enforcement resources along the border and over time. In the model, individuals who move to the U.S. illegally also choose where to cross the border. 
The data show that as enforcement at the main crossing point increased, migrants shifted their behaviour and crossed at alternate points.6 Past work, which for the most part uses aggregate enforcement levels, misses this component of the effect of increased border patrol on immigration decisions. I estimate the model using data on individual immigration decisions from the Mexican Migration Project (MMP). I use the estimated model to perform several counterfactuals, finding that increases in Mexican wages decrease both immigration rates and the duration of stays in the U.S. A 10% increase in Mexican wages reduces the average number of years that a person lives in the U.S. by about 5%. Estimation of a dynamic model captures mechanisms that could not be studied in a static model. As enforcement increases, fewer people move, but those that do are more reluctant to return home, knowing that it will be harder to re-enter the U.S. in the future. This increases the duration of stays in the U.S. Policy changes also have differential effects with marital status. As enforcement increases, it becomes harder for women to join their husbands in the U.S., giving married men an extra incentive to return home, and thereby pushing their migration durations downwards. I hold female migration rates constant in the counterfactual to isolate this effect, and then see an even larger increase in men's durations of stay in the U.S. Overall, simulations show that a 50% increase in enforcement, distributed uniformly along the border, reduces the average amount of time that an individual in the sample spends in the U.S. over a lifetime by approximately 3%. If total enforcement increased by 50%, not uniformly but instead concentrated at the points along the border where it would have the largest effect, the number of years spent in the U.S. per person would decrease by about 7%. Following U.S. 
policy changes in the 1990s, most new resources were allocated to certain points along the border, and this research suggests that this is the optimal policy from the perspective of reducing illegal immigration rates. The remainder of the article is organized as follows. Section 2 reviews the literature, and Section 3 explains the model. Section 4 details the data, and Section 5 provides descriptive statistics. The estimation is explained in Section 6, and the results are in Section 7. The counterfactuals are in Section 8, and Section 9 concludes the article. 2. Related Literature Wages are understood to be the main driving force behind immigration from Mexico to the U.S. Hanson and Spilimbergo (1999) find that an increase in U.S. wages relative to Mexican wages positively affects apprehensions at the border, implying that more people attempted to move illegally. Rendón and Cuecuecha (2010) estimate a model of job search, savings, and migration, finding that migration and return migration depend not only on wage differentials, but also on job turnover and job-to-job transitions. In my model, the value of a location depends on expected earnings there, allowing for wage differentials to affect migration decisions. I can quantify how responsive migration decisions are to changes in the wage distribution. To estimate the effect of border enforcement on immigration decisions, some research uses the structural break caused by the 1986 Immigration Reform and Control Act (IRCA), one of the first policies aimed at decreasing illegal immigration. This law increased border enforcement and legalized many illegal immigrants living in the U.S. Espenshade (1990, 1994) finds that there was a decline in apprehensions at the U.S. border in the year after IRCA was implemented, but no lasting effect. Using survey data from communities in Mexico, Cornelius (1989) and Donato et al. (1992) find that IRCA had little or no effect on illegal immigration. 
After the implementation of IRCA, there was a steady increase in border enforcement over time. Hanson and Spilimbergo (1999) find that increased enforcement led to a greater number of apprehensions at the border. This provides one mechanism for increased enforcement to affect moving costs, as immigrants may have to make a greater number of attempts to successfully cross the border. Changes in enforcement can affect not only initial but also return migration decisions, and some of the past literature has looked at this. Angelucci (2012), using the MMP data, finds that border enforcement affects initial and return migration rates. Her reduced-form framework permits separate analysis of initial and return migration decisions. By estimating a structural model, I can perform counterfactual analyses to calculate the net effect of changes in enforcement on illegal immigration. The model in this article allows for an individual's characteristics to affect migration decisions. Past literature has studied this, mostly in a static setting, to understand what factors are important. I build on this work by including the relevant characteristics found to impact migration decisions in my dynamic setting. There is a large literature on the selection of migrants, starting with the theoretical model in Borjas (1987), which predicts that migrants will be negatively selected. This is empirically supported in Ibarraran and Lubotsky (2005). However, Chiquiar and Hanson (2005) find that Mexican immigrants in the U.S. are more educated than non-migrants in Mexico. They find evidence of intermediate selection of immigrants, as do Lacuesta (2006) and Orrenius and Zavodny (2005). Past work also looks at the determinants of the duration of stays in the U.S.; for example, see Reyes and Mameesh (2002), Massey and Espinosa (1997), and Lindstrom (1996). 3.
Model The basic structure of the model follows Kennan and Walker (2011): each person chooses where to live each period. The value of living in a location depends on the expected wages there, as well as the cost of moving. Since the model is dynamic, individuals also consider the value of being in each location in future periods. At the start of a period, each person sees a set of payoff shocks to living in each location, and then chooses the location with the highest total valuation. The shocks are random, independent and identically distributed (i.i.d.) across locations and time, and unobserved by the econometrician. I assume that the payoff shocks follow a type I extreme value distribution, and solve the model following McFadden (1973) and Rust (1987). I assume a finite horizon, so the model can be solved using backward induction. The model extends Kennan and Walker's (2011) framework in two dimensions: (1) by allowing for moves across an international border, which necessitates different treatment of illegal and legal immigration, and (2) by modelling the interactions within married couples. The model includes elements to account for the fact that people are moving across an international border, which differs from domestic migration in a couple of important ways. When deciding where to live, people choose from a set of locations, defined as states, in both the U.S. and Mexico. Migration decisions are substantially affected by whether or not people can move to the U.S. legally, and to account for this, the model differentiates between legal and illegal migrants. Legal immigration status is assumed to be exogenous to the model, and people can transition to legal status in future periods. Legal immigration status affects wage offers in the U.S., since we expect that legal immigrants will have access to better job opportunities in the U.S. labour market. In addition, U.S. border enforcement only affects the moving costs for illegal immigrants.
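As an aside, the solution method described above — i.i.d. type I extreme value payoff shocks over a finite horizon, solved by backward induction following McFadden (1973) and Rust (1987) — can be illustrated with a minimal sketch. The payoffs, discount factor, and horizon below are hypothetical placeholders, not the paper's estimated objects:

```python
import math

def solve_backward(u, beta, T):
    """Finite-horizon discrete location choice with i.i.d. type I
    extreme value shocks. u[l][j] is the flow payoff of choosing
    location j given previous location l (moving cost netted out).
    Returns expected values V[t][l] (log-sum-exp of choice values)
    and conditional choice probabilities P[t][l][j]."""
    J = len(u)
    V = [[0.0] * J for _ in range(T + 1)]  # terminal values normalized to 0
    P = [[[0.0] * J for _ in range(J)] for _ in range(T)]
    for t in range(T - 1, -1, -1):         # backward induction
        for l in range(J):
            vals = [u[l][j] + beta * V[t + 1][j] for j in range(J)]
            m = max(vals)                  # stabilize the log-sum-exp
            V[t][l] = m + math.log(sum(math.exp(v - m) for v in vals))
            for j in range(J):             # multinomial logit probabilities
                P[t][l][j] = math.exp(vals[j] - V[t][l])
    return V, P
```

With two symmetric locations and a "home premium", e.g. `solve_backward([[1.0, 0.0], [0.0, 1.0]], 0.95, 40)`, staying put is always the more likely choice, which is the logit analogue of a positive moving cost.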
I assume that all people who choose to move to the U.S. illegally are successful, so the effects of increased enforcement just come through the increased moving cost.7 This is due to an increased cost of hiring a smuggler (Gathmann, 2008) or an increase in the expected number of attempts before successfully crossing. Illegal immigrants moving to the U.S. choose both a location and a border crossing point, where the cost of moving varies at each crossing point due to differences in the fixed costs and enforcement levels at each point.8 In this article, I also extend Kennan and Walker's (2011)'s framework by allowing for the decisions of married individuals to depend on where their spouse is living. Decisions are made individually, but utility depends on whether a person is in the same location as his spouse. Since individuals' decisions are related, this is a game between the husband and wife. I solve for a Markov perfect equilibrium (Maskin and Tirole, 1988). I make some assumptions on the timing of decisions to ensure that there is only one equilibrium. For each household, I define a primary and a secondary mover, which empirically is the husband and wife, respectively. In each period, the primary mover picks a location first, so he does not know his spouse's location when he makes this choice. After the primary mover makes a decision, the secondary mover learns her payoff shocks and decides where to live.9 This setup allows for people to make migration decisions that are affected by the location of their spouse. Single people's decisions are not affected by a spouse, but they can transition over marital status in future periods, and therefore know that at some point they could have utility differentials based on their spouse's location. In the remainder of this section, I describe a model without any unobserved heterogeneity. 
In the estimation, there will be three sources of unobserved heterogeneity, over (1) moving costs, (2) wages in the U.S., and (3) whether or not women choose to participate in the labour market. This is explained in more detail when I discuss the estimation in Section 6.

3.1. Model setup

3.1.1. Primary and secondary movers

I solve separate value functions for primary and secondary movers, denoted with superscripts 1 and 2, respectively. In the empirical implementation, men are the primary movers, and women are the secondary movers. A married person's decisions depend on the location of his spouse, whose characteristics I denote with the superscript $$s$$. Single men and women make decisions as individuals, but know that they could become married in future periods. I account for these differences by keeping track of marital status $$m_t$$, where $$m_t=1$$ is a married person and $$m_t=2$$ is a single person.

3.1.2. State variables

People learn their legal status at the start of each period. I assume that once a person is able to immigrate legally, this option remains with that person forever. I use $$z_t$$ to indicate whether or not a person can move to the U.S. legally, where $$z_t=1$$ means a person can move to the U.S. legally and $$z_t=2$$ means that he cannot. State variables also include a person's location in the previous period ($$\ell_{t-1}$$), their characteristics $$X_t$$, and their marital status $$m_t$$. When a married secondary mover picks a location, the primary mover has already chosen where to live in that period, so the location of the spouse ($$\ell_t^s$$) is known and is part of the state space. For the primary mover, who makes the first decision, the location of the spouse in the previous period ($$\ell_{t-1}^s$$) is part of the state space. The characteristics and legal status of one's spouse ($$X_t^s$$ and $$z_t^s$$) are also part of the state space.
To simplify notation, denote $$\Delta_t$$ as the characteristics and legal status of an individual and his spouse, so $$\Delta_t=\{X_t,z_t,X^s_t,z^s_t\}$$.

3.1.3. Choice set

Denote the set of locations in the U.S. as $$J_{U}$$, those in Mexico as $$J_{M}$$, and the set of border crossing points as $$C$$. If moving to the U.S. illegally, a person has to pick both a location and a border crossing point. Denote the choice set as $$J(\ell_{t-1},z_t)$$, where \begin{eqnarray} J(\ell_{t-1},z_t)=\left\{ \begin{array}{ll} J_M\cup (J_U\times C) & \text{if } \ell_{t-1}\in J_M \text{ and } z_t=2\\[3pt] J_M\cup J_U & \text{otherwise.} \end{array}\right. \end{eqnarray} (1)

3.1.4. Payoff shocks

I denote the set of payoff shocks at time $$t$$ as $$\eta_t=\{\eta_{jt}\}$$, where $$j$$ indexes locations. I assume that these follow an extreme value type I distribution.

3.1.5. Utility

The utility flow depends on a person's location $$j$$, characteristics $$X_t$$, legal status $$z_t$$, marital status $$m_t$$, and spouse's location $$\ell_t^s$$, and it is written as $$u(j,X_t,z_t,m_t,\ell_t^s)$$. This allows for utility to depend on wages, which are a function of a person's characteristics and location. Utility also depends on whether or not a person is at his home location, and increases for married couples who are living in the same place.

3.1.6. Moving costs

The moving cost depends on which locations a person is moving between, and that person's characteristics and legal status. I denote the cost of moving from location $$\ell_{t-1}$$ to location $$j$$ as $$c_t(\ell_{t-1},j,X_t,z_t)$$. The moving cost is normalized to zero if staying at the same location.

3.1.7. Transition probabilities

There are transitions over legal status, spouse's location for married couples, and marital status for people who are single.10 The primary mover is uncertain of his spouse's location in the current period. For example, if he moves to the U.S., he is not sure whether or not his wife will follow.
The secondary mover knows her spouse's location in the current period, but is unsure of her spouse's location in the next period. For example, she may move to the U.S. to join her husband, but does not know whether or not he will remain there in the next period. Single people can get married in future periods. Furthermore, if someone gets married, he does not know where his new spouse will be living. Marrying someone who is living in the U.S. will affect decisions differently than marrying someone who is in Mexico. For the primary mover, denote the probability of being in the state with legal status $$z_{t+1}$$, marital status $$m_{t+1}$$, and having a spouse in location $$\ell^s_{t}$$ in this period as $$\rho^1_{t}(z_{t+1},m_{t+1},\ell^s_{t}| j,\Delta_t,m_t,\ell_{t-1}^s)$$. This depends on his location $$j$$, his characteristics, as well as his marital status and his spouse's previous-period location (if married). For the secondary mover, the transition probability is written as $$\rho^2_{t}(z_{t+1},m_{t+1},\ell^s_{t+1}| j,\Delta_t,m_{t},\ell_{t}^s)$$.

3.2. Value function

In this section, I derive the value functions for primary and secondary movers. Because the problem is solved by backward induction and the secondary mover makes the last decision, it is logical to start with the secondary mover's problem.

3.2.1. Secondary movers

The secondary mover's state space includes her previous-period location, her characteristics and those of her spouse, her marital status, and the location of her spouse. After seeing her payoff shocks, she chooses the location with the highest value: \begin{equation} V^2_t(\ell_{t-1},\Delta_t,m_t,\ell_t^s, \eta_t)= \max_{j\in J(\ell_{t-1},z_t)} v^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s) +\eta_{jt}. \label{eqn:VF2} \end{equation} (2) The value of living in each location has a deterministic and a random component ($$v_t^2(\cdot)$$ and $$\eta_t$$, respectively).
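The extreme value assumption is what keeps this maximization tractable: the expected maximum of the deterministic values plus i.i.d. type I extreme value (Gumbel) shocks has the closed form $$\log\sum_j \exp(v_j)+\gamma$$, which is used below to integrate out future shocks. A quick Monte Carlo check of this identity, with arbitrary illustrative values of $$v$$:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.5, 1.0, -0.3])          # illustrative deterministic values

# simulate E[max_j (v_j + eta_j)] with standard Gumbel (type I extreme value) shocks
draws = v + rng.gumbel(size=(2_000_000, v.size))
simulated = draws.max(axis=1).mean()

# closed form: log-sum-exp plus Euler's constant (gamma ~ 0.5772)
closed_form = np.log(np.exp(v).sum()) + 0.5772156649

assert abs(simulated - closed_form) < 1e-2
```

With two million draws the simulated mean matches the closed form to two decimal places, which is why the model never needs to simulate the shocks when computing continuation values.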
The deterministic component of living in a location consists of the flow payoff plus the discounted expected value of living there at the start of the next period: \begin{eqnarray} v^2_{t}(\cdot)&=& \tilde{v}^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)+ \beta \sum_{z_{t+1},m_{t+1},\ell^s_{t+1}}\Big( \rho^2_{t}( z_{t+1},m_{t+1},\ell^s_{t+1}|j,\Delta_t,m_t,\ell_t^s)\notag\\ &\times& E_{\eta}\left[V^2_{t+1}(j,\Delta_{t+1},m_{t+1}, \ell_{t+1}^s, \eta_{t+1})\right]\Big). \label{eqn:deterministic} \end{eqnarray} (3) The flow payoff of living in location $$j$$, denoted as $$\tilde{v}^2_t(\cdot)$$, consists of utility net of moving costs, and is defined as \begin{equation} \tilde{v}^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)= u(j,X_t,z_t,m_t,\ell_t^s)-c_t(\ell_{t-1},j,X_t,z_t). \label{eqn:flow} \end{equation} (4) The second part of the deterministic component in equation (3) is the expected future value of living in a location. The transition probabilities, written as $$\rho^2(\cdot)$$, are over legal status, marital status, and the location of the primary mover. I integrate out the future payoff shocks using the properties of the extreme value distribution, following McFadden (1973) and Rust (1987). For a given legal status, marital status, and location of the primary mover, the expected continuation value is given by \begin{eqnarray} &&E_{\eta}\left[V^2_{t+1} (j,\Delta_{t+1},m_{t+1},\ell^s_{t+1}, \eta_{t+1}) \right]\notag\\ &&=E_{\eta}\left[\max_{k\in J(j,z_{t+1})}v^2_{t+1} (k,j,\Delta_{t+1},m_{t+1},\ell_{t+1}^s) +\eta_{k,t+1}\right]\notag\\ &&=\log\left( \sum_{k\in J(j,z_{t+1})} \exp \Big(v_{t+1}^2(k,j,\Delta_{t+1},m_{t+1},\ell^s_{t+1} )\Big) \right)+\gamma \text{ ,} \label{eqn:expect2} \end{eqnarray} (5) where $$\gamma$$ is Euler's constant ($$\gamma\approx 0.577$$). I calculate the probability that a person will choose location $$j$$ at time $$t$$, which will be used for two purposes. First, this is the choice probability, necessary to calculate the likelihood function.
Secondly, the choice probability is used to calculate the transition probabilities for the primary mover, who is concerned with the probability that his spouse lives in a given location in this period. I assume that he has all of the same information as the secondary mover, but since the primary mover makes the first decision, the secondary mover's payoff shocks have not yet been realized, so I can only calculate the probability that the secondary mover will make a given decision. Since I assume that the payoff shocks are distributed with an extreme value distribution, the choice probabilities take a logit form, again following McFadden (1973) and Rust (1987). The probability that a person picks location $$j$$ is given by the following formula: \begin{equation} P_t^2(j|\ell_{t-1},\Delta_t,m_t,\ell_t^s)= \frac{\exp\left(v^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)\right)} {\sum_{k\in J(\ell_{t-1},z_{t})} \exp\Big(v^2_t(k,\ell_{t-1},\Delta_t,m_t,\ell_t^s)\Big)}\text{ .}\label{eqn:secondary_prob} \end{equation} (6)

3.2.2. Primary movers

I define the value function for the primary mover as follows: \begin{equation} V^1_t(\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s, \eta_t)= \max_{j\in J(\ell_{t-1},z_t)} v^1_t(j,\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s) +\eta_{jt} \text{ .} \label{eqn:VF1} \end{equation} (7) In comparison with the secondary mover, the primary mover does not know where his spouse is living in this period, and only knows her previous-period location $$\ell_{t-1}^s$$. As before, the deterministic component of living in a location includes the flow utility and the expected continuation value. However, in this case, I do not know the exact flow utility, since the secondary mover's location has not been determined.
I instead calculate the expected flow utility: \begin{eqnarray} &&E_{\ell_t^s}\left[\tilde{v}_t(j, \ell_{t-1},\Delta_t,m_t,\ell_t^s )|\ell_{t-1}^s\right]=\notag\\&& \sum_{k\in J(\ell_{t-1},z_t)} P_t^2(k|\ell_{t-1}^s,\Delta_t^s,m_t^s,j) u(j,X_t,z_t,m_t,k) -c(\ell_{t-1},j,X_t,z_t) \text{ .} \label{eqn:EU} \end{eqnarray} (8) This is calculated using the probability $$P^2_t(\cdot)$$ that the secondary mover will pick a given location, defined in equation (6). Denoting the transition probabilities as $$\rho^1(\cdot)$$, I can write the deterministic component of living in a location as: \begin{eqnarray} v^1_{t}(\cdot)&=& E_{\ell_t^s}\left[\tilde{v}_t( j,\ell_{t-1},\Delta_t,m_t,\ell_t^s )|\ell_{t-1}^s\right] + \beta \sum_{z_{t+1},m_{t+1},\ell^s_{t}}\Big(\rho^1_{t}( z_{t+1},m_{t+1},\ell^s_{t}|j,\Delta_t, m_t, \ell_{t-1}^s)\notag \\ &\times& E_{\eta}\left[V^1_{t+1}(j,\Delta_{t+1} ,m_{t+1},\ell_{t}^s, \eta_{t+1})\right]\Big). \end{eqnarray} (9) For a given state, the continuation value is calculated by integrating over the distribution of future payoff shocks: \begin{eqnarray} &&E_{\eta}\left[V^1_{t+1} (j,\Delta_{t+1},m_{t+1},\ell_{t}^s,\eta_{t+1})\right]\notag \\ &=&E_{\eta}\left[\max_{k\in J(j,z_{t+1})}v^1_{t+1} (k,j,\Delta_{t+1},m_{t+1},\ell_{t}^s) +\eta_{k,t+1}\right]\notag\\ &=&\log\left(\sum_{k\in J(j,z_{t+1})} \exp\Big(v_{t+1}^1(k,j,\Delta_{t+1},m_{t+1},\ell_{t}^s )\Big) \right)+\gamma. \label{eqn:exp1} \end{eqnarray} (10) I calculate the probabilities that the primary mover picks each location in a period, which are used to calculate the likelihood function. They also are a part of the transition probabilities for the secondary mover.
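The expectation in equation (8) is a weighted average of the utilities the primary mover would receive under each possible location choice of his spouse, with weights given by her choice probabilities from equation (6). A minimal sketch, with hypothetical utilities and probabilities rather than estimates from the paper:

```python
import numpy as np


def expected_flow_utility(u_given_spouse, P2, move_cost):
    """Primary mover's expected flow payoff from a location choice, as in eq. (8):
    sum over the spouse's possible locations k of
    Pr(spouse picks k) * u(., k), minus the moving cost.

    u_given_spouse[k] : utility of the chosen location when the spouse is in k
    P2[k]             : secondary mover's choice probabilities (sum to 1)
    """
    return P2 @ u_given_spouse - move_cost


# two locations: utility is higher when the couple ends up in the same place
u = np.array([0.9, 1.5])        # spouse in location 0 vs. location 1 (together)
P2 = np.array([0.3, 0.7])       # spouse picks location 1 with probability 0.7
ev = expected_flow_utility(u, P2, move_cost=0.4)   # 0.3*0.9 + 0.7*1.5 - 0.4 = 0.92
```

The moving cost enters outside the weighted sum, exactly as in equation (8), since it depends only on the primary mover's own move.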
Using the properties of the extreme value distribution, the probability that a primary mover picks location $$j$$ is given by \begin{equation} P_t^1(j|\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s)= \frac{\exp\Big(v^1_t(j,\ell_{t-1},\Delta_t,m_t, \ell_{t-1}^s)\Big)} {\sum_{k\in J(\ell_{t-1},z_{t})} \exp\Big(v^1_t(k,\ell_{t-1},\Delta_t,m_t, \ell_{t-1}^s)\Big)}\text{ .}\label{eqn:primary_prob} \end{equation} (11)

3.2.3. Transition probabilities

In this section, I calculate the transition probabilities. There is uncertainty over future legal status, future marital status (if single), and the location of one's spouse (if married). I assume that the probability that a person has a given legal status in the next period depends on his characteristics and his current legal status.11 For people who are married, the transition probabilities are also over a spouse's future decisions. I assume that the agent has the same information as the spouse about the spouse's future decisions. This means that the probability that a person's spouse lives in a given location is given by the spouse's choice probabilities. A single person can become married in future periods with some probability. If he gets married, there is also uncertainty over where his new spouse is living. Recall that $$\rho^1(\cdot)$$ and $$\rho^2(\cdot)$$ are the transition probabilities for primary and secondary movers. These give the probability that a person has a given legal status and marital status and, if married, that his spouse lives in a given location (the current period's location for the primary mover, and the next period's location for the secondary mover). \begin{eqnarray} \rho^1_t(z_{t+1},m_{t+1},\ell^s_t| \ell_t,\Delta_t,m_t, \ell_{t-1}^s)=\left\{ \begin{array}{ll} \delta(z_{t+1}|z_t,X_t)P^2_t(\ell^s_t|\ell_{t-1}^s,\Delta_t^s,m_t^s, \ell_t) & \text{if } m_t=1 \\[3pt] \delta(z_{t+1}|z_t,X_t)\psi^1(m_{t+1},\ell_t^s|X_t,\ell_t) & \text{if } m_t=2 \text{ .}\label{eqn:lambda1} \end{array} \right.
\end{eqnarray} (12) \begin{eqnarray} \rho^2_t(z_{t+1},m_{t+1},\ell^s_{t+1}| \ell_t, \Delta_t,m_t,\ell_{t}^s)=\left\{ \begin{array}{ll} \delta(z_{t+1}|z_t,X_t)P^1_{t+1}(\ell^s_{t+1}|\ell_{t}^s,\Delta_{t+1}^s,m_{t+1}^s, \ell_{t}) & \text{if } m_t=1 \\[3pt] \delta(z_{t+1}|z_t,X_t)\psi^2(m_{t+1},\ell_{t+1}^s|X_t,\ell_t) & \text{if } m_t=2\text{ .}\label{eqn:lambda2} \end{array} \right. \end{eqnarray} (13) The function $$\delta(\cdot)$$ gives the probability that a person has a given legal status in the next period. For primary movers, there is uncertainty over where the secondary mover will live in the current period. This is represented by the function $$P^2_t(\cdot)$$, which comes from the secondary mover's choice probabilities defined in equation (6). Likewise, for secondary movers, there is uncertainty over the primary mover's location in the next period. This is represented by the function $$P^1_{t+1}(\cdot)$$, which comes from the primary mover's choice probabilities defined in equation (11). Single people could become married in future periods, and the probability of this happening is written as $$\psi^k(\cdot)$$, with $$k=1,2$$ for primary and secondary movers, respectively. If a single person gets married, there is a probability that his new spouse lives in each location; if he does not get married, he continues to make decisions as a single person.12

4. Data

I estimate the model using data from the MMP, a joint project of Princeton University and the University of Guadalajara.13 The MMP is a repeated cross-sectional data set that started in 1982 and is still ongoing. The project aims to understand the decisions and outcomes relating to immigration for Mexican individuals. To my knowledge, this is the most detailed source of information on immigration decisions between the U.S. and Mexico, most importantly on illegal immigrants, who are underrepresented in most U.S.-based surveys.
The survey asks questions on when and where people lived in the U.S., how they got across the border, and what the wage outcomes in the U.S. were, which is the set of information necessary to estimate the model detailed in the previous section. For household heads and spouses, the MMP collects a lifetime migration history, asking people which country and state they lived in each year. This information is used to construct a panel data set that contains each person's location at each point in time. I also know if and when each person is allowed to move to the U.S. legally. For people who move to the U.S. illegally, the MMP records when and where they cross the border. The MMP also collects information on the remaining members of the household. The inclusion of these respondents allows me to cover a wider age range than if I were to just use the household head and spouse data. Although the MMP does not ask for the lifetime migration histories for this group, it asks many questions related to migration. The survey asks for the migrants' wages, location, and legal status for their first and last trip to the U.S., as well as their total number of U.S. trips. For people who have moved to the U.S. two or fewer times, I know their full history of U.S. migration, although when they are in Mexico I may not know their precise location. For people who have moved more than two times, there are gaps in the sample for years when a migration is not reported. I will have to integrate over the missing information to compensate for the lack of full histories for each person.14 In addition, in this group, I do not know these people's marital status at each point in time, and they are also not matched to a spouse in the data, so I cannot include the marriage interactions component of the model for this group. I call this sample the "partial history" sample, whereas I call the group of household heads and spouses the "full history" sample. 
One question in this article is how changes in border enforcement affect immigration decisions. Border patrol levels were fairly low and constant until the 1986 Immigration Reform and Control Act (IRCA). Because the data contain lifetime histories, the sample spans many years. Computing the value function for each year is costly, so I limit the sample time frame to years in which there are changes in enforcement levels. For this reason, I study behaviour starting in 1980. To avoid an initial conditions problem, I only include individuals who turned 17 in 1980 or later. This leaves me with a sample size of 6,457 for the full history sample, where I observe each person's location from age 17 until the year surveyed.15 The partial history sample is larger, consisting of 41,069 individuals. One downside of the data is that the MMP sample is not representative of Mexico, as the surveyed communities are mostly those in rural areas with high migration propensities. Western-central Mexico, the region with the highest migration rates historically, is oversampled.16 Over time, the MMP sampling frame has shifted to other areas in Mexico, thus covering areas with lower migration rates. Because the MMP collects retrospective data, I have information on migration decisions in earlier years in these communities that are surveyed later, which mitigates this problem somewhat. Another restriction of the data is that the sample misses permanent migrants, because the survey is administered in Mexico.17 Therefore, the results of this article apply to this specific section of the Mexican population. In Online Appendix A, I compare the MMP sample to the Current Population Survey (CPS) (restricting the sample to Mexicans living in the U.S.) and to Mexican census data, to get an understanding of the limitations of the data. Table A1 in Online Appendix A shows that the MMP sample has substantially more men than the CPS, which is unsurprising given the prevalence of temporary migrants in the MMP.
The CPS sample also has higher levels of education. Table A2 compares the MMP sample to the Mexican census data. The MMP sample is younger, most likely because of my sample selection criteria explained in the previous paragraph. The MMP sample also has higher education levels. Unlike other data sources, the MMP has wage data for people who are in the U.S. illegally, allowing me to estimate the wage distribution for illegal immigrants living in the U.S. In comparison, other datasets report country of birth but not legal status, and I expect that datasets such as the CPS will be biased towards legal immigrants, since illegal immigrants are likely to avoid government surveys. Because legal immigration is relatively rare in the MMP data, I combine MMP wages with CPS data on Mexicans living in the U.S. to get a larger sample size to study the legal wage distribution. The MMP also records wages in Mexico; however, there are limited wage observations per person and the data give imprecise estimates. Therefore, for Mexican wages, I use data from Mexican labour force surveys: the Encuesta Nacional de Ingresos y Gastos de los Hogares (ENIGH) in 1989, 1992, and 1994, and the Encuesta Nacional de Empleo (ENE) from 1995 to 2004. To measure border enforcement, I use data from U.S. Customs and Border Protection (CBP), which divides the U.S.–Mexico border into nine sectors and reports the number of person-hours spent patrolling each sector.18

5. Descriptive Statistics

Tables 1 and 2 show the characteristics of the sample, divided into five groups: people who move internally, people who move to the U.S., people who move both internally and to the U.S., non-migrants, and people who can immigrate legally. Table 1 shows this information for the full history sample, and Table 2 for the partial history sample.
For the partial history sample, there is no information on internal movers, since the MMP does not collect enough information to isolate this group. These tables show that most U.S. migrants are male. Each education row shows the percentage of a group (e.g. internal movers) with a given level of education. People who move to the U.S. have the least education. The literature finds that returns to education are higher in Mexico than in the U.S., possibly explaining why educated people are less likely to immigrate. In addition, illegal immigrants do not have access to the full U.S. labour market, and therefore may not be able to find jobs that require higher levels of education. People who can immigrate legally make up close to 3% of the full history sample and about 2.4% of the partial history sample.

Table 1. Characteristics of full history sample

                        Internal    Moves to    Moves internally    Non-         Legal          Whole
                        movers (%)  U.S. (%)    and to U.S. (%)     migrant (%)  immigrant (%)  sample (%)
Percent male            60.53       91.51       89.51               50.82        90.63          60.66
Percent married         67.59       81.01       78.40               75.74        92.19          76.24
Average age             29.95       30.13       30.74               29.73        30.86          29.88
Years of education
  0–4                   16.07       18.03       14.81               17.72        11.46          17.33
  5–8                   39.47       43.61       43.83               40.48        53.13          41.33
  9–11                  28.67       30.34       26.54               30.83        22.92          30.17
  12                    9.42        5.92        8.64                7.66         8.85           7.64
  13+                   6.37        2.10        6.27                3.30         3.65           3.53
Observations            722         1,048       162                 4,333        192            6,457

Notes: Calculated using data from the full history sample in the MMP. For education, the table gives the percentage of each group (e.g. internal movers) that has a given level of education.

Table 2. Characteristics of partial history sample

                        Moves to    Non-         Legal          Whole
                        U.S. (%)    migrant (%)  immigrant (%)  sample (%)
Percent male            71.85       43.80        65.79          48.94
Percent married         58.72       53.40        70.32          54.68
Average age             26.02       24.92        28.21          25.18
0–4 years education     8.96        9.59         6.64           9.42
5–8 years education     40.05       29.99        36.42          31.80
9–11 years education    34.07       31.48        32.90          31.94
12 years education      11.84       14.39        15.29          13.99
13+ years education     5.09        14.55        8.75           12.85
Observations            6,742       33,333       994            41,069

Notes: Calculated using data from the partial history sample in the MMP. For education, the table gives the percentage of each group (e.g. people that move to the U.S.) that has a given level of education.

5.1. Migration decisions

Between 1980 and 2004, an average of 2.5% of the people in the sample living in Mexico moved to the U.S. in each year.
Table 3 looks at the effects of family interactions on migration rates.19 The migration behaviour of married men is very similar to that of single men. However, there are stark differences in the migration decisions of married and single women. I compare married women whose husband is in the U.S. to single women, and show that these married women have substantially higher migration rates.20 This suggests that husbands' decisions have an important effect on female migration decisions.

Table 3. Family and migration rates

                        Married    Single     Married women         Single
                        men (%)    men (%)    (spouse in U.S.) (%)  women (%)
0–4 years education     3.44       4.10       1.74                  0.81
5–8 years education     4.92       4.55       3.27                  1.43
9–11 years education    3.82       3.26       3.45                  1.30
12 years education      2.36       2.60       6.25                  1.21
13+ years education     1.14       1.00       10.00                 0.58
Total                   4.04       3.74       3.22                  1.17

Notes: This table calculates average annual Mexico to U.S. migration rates in the full history sample. For married women, I only include those whose husband is living in the U.S.

To further analyse the determinants of migration decisions, I estimate the probability that a person who lives in Mexico moves to the U.S. in a given year using probit regressions. The marginal effects are reported in Table 4. The first two columns include both genders, and the third and fourth columns allow for separate effects for men and women, respectively.21 In all regressions but column (4), the effect of age on migration is negative and statistically significant, supporting the human capital model, which predicts that younger people are more likely to move because they have more time to earn higher wages. Using family members as a measure of networks, I find that having a family member in the U.S. makes a person more likely to immigrate. Legal immigrants are more likely to move, as are people who have moved to the U.S. before. Columns (2)–(4) include controls for marital status. Column (2), which includes both men and women, indicates that single men, married men, and married women are more likely to move than single women. Column (3) only includes men, and shows no difference between married and single men. Column (4), which only includes women, again shows that married women whose spouse is in the U.S. are more likely to immigrate than single women. Since married women only move to the U.S. when their husband is in the U.S., it is important to include these sorts of interactions in a model.22

Table 4. Migration probit regression
Dependent variable = 1 if moves to the U.S.
                          Whole sample   Full history sample   Men          Women
                          (1)            (2)                   (3)          (4)
5–8 years education       0.00680***     0.00302               0.00242      0.00674**
                          (0.000959)     (0.00194)             (0.00260)    (0.00256)
9–11 years education      0.00511***     –0.000856             –0.00347     0.00713**
                          (0.00102)      (0.00217)             (0.00289)    (0.00259)
12 years education        –0.00130       –0.00393              –0.00754     0.00714*
                          (0.00120)      (0.00326)             (0.00440)    (0.00326)
13+ years education       –0.0191***     –0.0144**             –0.0203***   0.00194
                          (0.00147)      (0.00462)             (0.00604)    (0.00538)
Age                       –0.00365***    –0.00266*             –0.00319*    –0.0000882
                          (0.000521)     (0.00120)             (0.00160)    (0.00122)
Age squared               0.0000408***   0.0000164             0.0000193    –0.0000144
                          (0.0000105)    (0.0000231)           (0.0000307)  (0.0000248)
Family in U.S.            0.0104***      0.0161***             0.0206***    0.00427**
                          (0.000722)     (0.00149)             (0.00201)    (0.00140)
Legal immigrant           0.0771***      0.0503***             0.0627***    0.0185***
                          (0.00356)      (0.00703)             (0.00979)    (0.00393)
Has moved to U.S. before  0.0465***      0.0476***             0.0599***    0.0141***
                          (0.00144)      (0.00245)             (0.00321)    (0.00288)
Single man                               0.0471***
                                         (0.00306)
Married man                              0.0466***             –0.00119
                                         (0.00317)             (0.00219)
Married woman                            0.0366***                          0.0119***
                                         (0.00480)                          (0.00179)
State fixed effects       Yes            Yes                   Yes          Yes
Time fixed effects        Yes            Yes                   Yes          Yes
Observations              421,638        69,344                50,610       16,288

Notes: Standard errors, clustered at the household level, in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. The table reports marginal effects from a probit regression. The sample includes individuals who were living in Mexico at the start of the period. Column (1) uses the whole sample, and columns (2)–(4) only include the full history sample.
For education, the excluded group is people with four or fewer years of education. Married women whose spouse is in Mexico are not included in the regression.

The data on return migration rates show that 9% of all migrants living in the U.S. move to Mexico each year. Raw statistics show that men have higher return migration rates than women. Suspecting that return migration rates for married men are affected by the location of their wives, in Table 5 I split the men in the full history sample by marital status and wife's location. Married men whose wife is in Mexico are much more likely to return home, whereas those whose wife is living in the U.S. have a much lower return migration rate.

Table 5. Family and male return migration rates

                      Wife in Mexico (%)   Wife in U.S. (%)   Single (%)
0–4 years education   40.55                15.38              33.39
5–8 years education   33.59                22.22              31.70
9–11 years education  39.83                16.22              29.43
12 years education    48.84                 9.09              26.19
13+ years education   29.41                 0.00              35.09
Total                 36.61                17.88              30.96

Notes: This table reports the average annual return migration rates, using the full history sample.

Using a probit regression, I estimate the probability that a person currently living in the U.S. returns to Mexico in a given year. The marginal effects are shown in Table 6. Columns (1) and (2) use data for both genders, and columns (3) and (4) use data for men and women, respectively.23 All specifications except for column (4) show that legal immigrants are less likely to return home. Columns (2)–(4) control for marital status, and additionally split the sample for married men based on whether their spouse is living in Mexico or the U.S. Married men with a wife in Mexico are more likely to return migrate than single men, whereas married men whose wife is in the U.S. are less likely to return migrate than single men. This suggests that moving home to be with one's spouse is a strong incentive for return migration.

Table 6. Return migration probit regression
Dependent variable = 1 if moves from U.S. to Mexico

                            Whole sample   Full history sample   Men         Women
                            (1)            (2)                   (3)         (4)
5–8 years education         –0.0263***     0.0109                0.00962     0.120
                            (0.00699)      (0.0279)              (0.0288)    (0.0983)
9–11 years education        –0.0320***     –0.00796              –0.00648    0.113
                            (0.00734)      (0.0308)              (0.0321)    (0.101)
12 years education          –0.0429***     –0.0125               –0.00367    0.00428
                            (0.00898)      (0.0434)              (0.0473)    (0.116)
13+ years education         –0.0194        0.0134                0.0515      –0.242
                            (0.0109)       (0.0542)              (0.0608)    (0.150)
Age                         –0.00495       0.0181                0.0218      0.0410
                            (0.00321)      (0.0133)              (0.0138)    (0.0455)
Age squared                 0.000104       –0.000237             –0.000293   –0.000844
                            (0.0000617)    (0.000248)            (0.000257)  (0.000901)
Family in U.S.
                            0.0313***      –0.0304               –0.0349     0.0480
                            (0.00482)      (0.0208)              (0.0218)    (0.0519)
Legal immigrant             –0.0725***     –0.284***             –0.295***   –0.0167
                            (0.00794)      (0.0299)              (0.0311)    (0.0838)
Single man                                 0.0794*
                                           (0.0395)
Married man, wife in U.S.                  –0.0709               –0.149**
                                           (0.0631)              (0.0530)
Married man, wife in Mexico                0.121**               0.0442*
                                           (0.0430)              (0.0223)
Married woman                              0.0590                            0.0711
                                           (0.0587)                          (0.0552)
State fixed effects         Yes            Yes                   Yes         Yes
Time fixed effects          Yes            Yes                   Yes         Yes
Observations                40,268         5,624                 5,185       425

Notes: Standard errors, clustered at the household level, in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. The table reports marginal effects from a probit regression. The sample includes individuals who were living in the U.S. at the start of the period. Column (1) uses the whole sample, and columns (2)–(4) only use the full history sample. The excluded group for education is people with four or fewer years of education.

One of the motivations for the dynamic model estimated in this article is that repeat migration is common. In the sample, the average number of moves to the U.S. per migrant is 1.64 for men and 1.14 for women, showing that many migrants move more than once.24 Women move less and are less likely to return migrate, implying that when women move, their decision is more likely to be permanent. The average durations illustrate this more clearly. Overall, the average migration duration is 4.4 years. It is slightly higher for legal than illegal movers (4.83 versus 4.35 years, respectively). The average duration for men is 4.15 years, and the average duration for women is 5.20 years, again indicating that when women move, their decision is more likely to be permanent. This section shows that it is crucial to allow for a relationship between spouses' decisions.
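The repeat-move and duration statistics above can be computed directly from migration spells. A minimal sketch on made-up records follows; the tuple layout and identifiers are illustrative, not the MMP's actual format:

```python
# Hypothetical U.S. migration spells: (person_id, start_year, end_year).
# Illustrative data only; not drawn from the MMP sample.
spells = [
    ("m1", 1985, 1988), ("m1", 1992, 1995),   # a man with two moves
    ("m2", 1990, 1999),                        # one long move
    ("w1", 1994, 1999),                        # a woman with one move
]

def moves_per_migrant(spells):
    """Average number of U.S. spells per person who ever migrated."""
    counts = {}
    for pid, _, _ in spells:
        counts[pid] = counts.get(pid, 0) + 1
    return sum(counts.values()) / len(counts)

def average_duration(spells):
    """Mean length of a U.S. spell, in years."""
    durations = [end - start for _, start, end in spells]
    return sum(durations) / len(durations)
```

With the records above, `moves_per_migrant` gives 4/3 moves per migrant and `average_duration` gives 5.0 years; splitting the spell list by gender or legal status yields the gender- and status-specific averages quoted in the text.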
The model in this article accounts for the following trends observed in the data: (1) women are more likely to move if their husband is in the U.S., and (2) men are less likely to return migrate if their spouse is living with them in the U.S. By including both male and female decisions in the model, I can study how their interactions affect the counterfactual outcomes. A key component of the model is that individuals choose from a set of locations in both the U.S. and Mexico, instead of just picking between the two countries. This is an important contribution of this article, in that most of the past work on Mexico to U.S. migration does not allow for internal migration. Internal migration is fairly common: close to 30% of the people in the full history sample move internally, making it important to allow for people to choose from locations in both countries.25 Because of these high rates, changes in wages in Mexico, even outside of one's home location, could affect the decision on whether or not to move to the U.S. The model accounts for this by letting people choose from a set of locations in both countries.

5.2. Border enforcement

To measure border enforcement, I use data from U.S. CBP on the number of person-hours spent patrolling the border. CBP divides the U.S.–Mexico border into nine sectors, as shown in Figure 1, each of which gets a different allocation of resources each year.26 Figure 2 shows the number of person-hours spent patrolling each region of the border over time.27 Relative to the levels observed today, border patrol was fairly low in the early 1980s. Enforcement was initially highest at San Diego and grew the fastest there. Enforcement also grew substantially at Tucson and the Rio Grande Valley, although the growth started later than at San Diego. In most of the other sectors, there was a small amount of growth in enforcement, mostly starting in the late 1990s.

Figure 1. Border patrol sectors. Notes: Map downloaded from U.S. CBP website.

Figure 2. Hours patrolling the border. Notes: Data on enforcement from U.S. CBP.

Much of the variation in Figure 2 can be explained by changes in U.S. policy. The 1986 IRCA called for increased enforcement along the U.S.–Mexico border. However, changes in enforcement were small until the early 1990s, when new policies further increased border patrol.28 Illegal immigrants surveyed in the MMP reported the closest city in Mexico to where they crossed the border. I use this information to match each individual to a border patrol sector. Figure 3 shows the percentage of illegal immigrants who cross the border at each crossing point in each year. Initially, the largest share of people crossed the border near San Diego. However, as enforcement there increased, fewer people crossed at San Diego. Before 1995, about 50% of illegal immigrants crossed the border at San Diego. This decreased to 27% post-1995. At the same time, the share of people crossing at Tucson increased. I use this variation in behaviour, combined with the changes in enforcement at each sector over time, to identify the effect of border enforcement on immigration decisions.29

Figure 3. Border crossing locations (MMP). Notes: In this figure, I use data from the MMP to calculate the share of illegal migrants that cross at each border patrol sector in each year.

6. Estimation

I estimate the model using maximum likelihood.
I assume that a person has 28 location choices, which include 24 locations in Mexico and four in the U.S. The Mexico locations are loosely defined as states; however, some states are grouped when they border each other and have smaller sample sizes.30 The locations in the U.S. are California, Texas, Illinois, and the remaining states, which are grouped into one location choice.31 I restrict decisions so that a married woman cannot move to the U.S. unless her husband is living there. This simplifies computation, and is empirically grounded since it is very rare in the data for the wife to live in the U.S. while the husband is in Mexico. Illegal immigrants moving to the U.S. also choose where to cross the border. The U.S. government divides the border into nine regions. However, very few people in the data cross at some of these points, making identification of the fixed cost of crossing difficult. I reduce the number of crossing points to seven to avoid this problem.32 Therefore, an illegal immigrant has twenty-eight choices in the U.S.: the four locations combined with the seven crossing points. I define a time period as one year, and use a one-year discount factor of 0.95. I assume that people solve the model starting at the age of 17 and work until the age of 65. There are three sources of unobserved heterogeneity in the model. The first is over moving cost type, which is at the household level. In particular, I assume that there are two types, where one group (the stayers) has infinitely high moving costs and will never move to the U.S. The second source of unobserved heterogeneity is over wage outcomes when living in the U.S., and I assume that this is at the individual level. These values are known by the individual but unobserved by the econometrician. The data show that many women do not work, and therefore would not be affected by wage differentials.
To account for this, a third source of unobserved heterogeneity allows women to be a worker or a non-worker type, where decisions of non-worker types are not affected by wages. I integrate over the probability that a woman is a worker type, which is taken from aggregate statistics on female labour force participation from the World Bank's World Development Indicators.33 Identification of the wage parameters and the fixed cost of moving follows the arguments in Kennan and Walker (2011). My model also has parameters related to illegal immigration, where identification of the border enforcement term comes from comparing the rates at which people cross at each border patrol sector over time as enforcement hours are reallocated. The intuition for how these parameters are identified is discussed in Online Appendix B.

6.1. Wages

I estimate three sets of wage functions: for people in Mexico, in the U.S. illegally, and in the U.S. legally. For all three situations, wages have a deterministic and a random component, where the latter is realized each period after a person decides where to live. This means that when making migration decisions, people only consider their expected wage in each location. Wages in Mexico are estimated in a first-stage regression. The MMP data do not have sufficient information on individual wages in Mexico, so I cannot learn how individual variation in wage draws affects migration decisions.34 Instead, I use data from Mexican labour force surveys, which have more accurate information on Mexican wages in each year, to estimate this wage distribution.
Using data from the ENIGH in 1989, 1992, and 1994 and the ENE from 1995 to 2004, I estimate wage regressions in each year: \begin{eqnarray} w^{M}_{ijt} = \beta^{Mt} X_{it} + \gamma^{Mt}_j + \epsilon_{ijt}\text{ .}\label{eqn:wagesmex} \end{eqnarray} (14) In equation (14), $$X_{it}$$ are individual characteristics, $$\beta^{Mt}$$ are the returns to these characteristics when in Mexico at time $$t$$, and $$\gamma^{Mt}$$ are state fixed effects, which also vary over time. The first two columns of Table 7 show the results of the wage regression for Mexican wages in 1989 and 2004, the first and last years where I have these data. The regressions for all years are in Online Appendix A. There are strong returns to education and experience in these data, which have fluctuated significantly over the time period analysed. Note that in equation (14), there is no unobserved heterogeneity in wages, so unobserved types are independent of Mexican wages. I make this assumption due to the lack of reliable wage information in the MMP when individuals live in Mexico. Unfortunately, the lack of individual-level heterogeneity over Mexican wages is a limitation of this analysis. Table 7. 
Wage regressions in Mexico
Dependent variable: Wage in Mexico

                              1989        2004        1989–2004
                              (1)         (2)         (3)
Age                           2.89***     1.28***     1.38***
                              (0.23)      (0.01)      (0.005)
Age-squared                   –0.28***    –0.13***    –0.14***
                              (0.03)      (0.002)     (0.001)
Male                          0.78        0.15***     0.18***
                              (0.10)      (0.005)     (0.002)
5–8 years education           1.12***     0.46***     0.72***
                              (0.13)      (0.008)     (0.02)
9–11 years education          1.74***     0.95***     1.36***
                              (0.14)      (0.008)     (0.01)
12 years education            2.96***     1.26***     2.49***
                              (0.15)      (0.01)      (0.02)
13+ years education           5.60***     2.87***     3.99***
                              (0.15)      (0.009)     (0.02)
0–4 years education × time                            –0.02***
                                                      (0.0004)
5–8 years education × time                            –0.02***
                                                      (0.001)
9–11 years education × time                           –0.03***
                                                      (0.001)
12 years education × time                             –0.09***
                                                      (0.002)
13+ years education × time                            –0.78***
                                                      (0.001)
State fixed effects           Yes         Yes         Yes
R²                            0.19        0.28        0.29

Notes: Standard errors in parentheses. *p < 0.05, **p < 0.01, ***p < 0.001. Age is divided by 10. For education, the excluded group is people with less than five years of education. The dependent variable is hourly wages, in 2000 dollars using PPP exchange rates. Column (3) has data from 1989, 1992, and 1994–2004.
Time is (year − 1989). Quadratic and cubic terms for time are also included in column (3).

I use the results of the year-by-year regressions to calculate an expected wage for each person in each location in Mexico and year. Because I do not have wage data for every year in the estimation, I need to compute expected wages in the missing years. To do this, I run a wage regression using all of the available data, including time trends in the returns to education, which allows for (1) changes in wage levels over time and (2) changes in the returns to education. The results of this regression are in the third column of Table 7. This allows me to calculate expected wages in Mexico in all years and states, using the year-by-year regressions when possible and the regression with all years of data when I do not have data for that year. To estimate the model, I also need to make assumptions about people's beliefs on future wages. It is unlikely that people had perfect foresight over what would happen to Mexican wages over this period, especially given the severe fluctuations in Mexico's economy. To specify wage expectations, I use the results from the wage regression to impute an expected wage for each person in each location and time, denoted as $$\hat{w}^M_{ijt}$$. I assume that people expect there is some chance (denoted as $$p_{\rm loss}$$) of a large wage drop (at rate $$\alpha$$) in each period that causes them to earn less than this expected wage.35 Then I can write each person's wage expectations as \begin{equation} E w^M_{ijt}=\left\{ \begin{array}{ll} \hat{w}^M_{ijt} & \text{with probability } 1-p_{\rm loss}\\ (1-\alpha) \hat{w}^M_{ijt} & \text{with probability } p_{\rm loss}\text{ .} \end{array} \right. \end{equation} (15) The probability $$p_{\rm loss}$$ of this wage drop is given by the fraction of years Mexico experienced negative wage growth.
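The expectation in equation (15), with $$p_{\rm loss}$$ and $$\alpha$$ read off a history of aggregate wage growth, can be sketched as follows; the growth series here is invented for illustration, not Mexican data:

```python
def downside_params(growth_rates):
    """p_loss = share of years with negative aggregate wage growth;
    alpha = average size of the drop in those bad years."""
    bad = [g for g in growth_rates if g < 0]
    p_loss = len(bad) / len(growth_rates)
    alpha = -sum(bad) / len(bad) if bad else 0.0
    return p_loss, alpha

def expected_wage(w_hat, p_loss, alpha):
    # Equation (15): E[w] = (1 - p_loss) * w_hat + p_loss * (1 - alpha) * w_hat
    return (1.0 - p_loss) * w_hat + p_loss * (1.0 - alpha) * w_hat

growth = [0.03, -0.10, 0.02, 0.04, -0.20]   # illustrative growth history
p_loss, alpha = downside_params(growth)      # p_loss = 0.4, alpha = 0.15
```

With these illustrative numbers, an imputed wage of 100 carries an expected value of 94: the downside risk shades the deterministic prediction by `p_loss * alpha`.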
The expected wage drop ($$\alpha$$) is equal to the average wage drop in these bad years. For wages in the U.S., the parameters are estimated jointly with the moving cost and utility parameters. There is a separate wage process for legal and illegal immigrants, written as \begin{eqnarray} w^{ill}_{ijt} &=& \beta^{ill} X_{it} + \gamma^{ill}_j + \kappa^{ill}_i + \epsilon^{ill}_{ijt}\label{eqn:wage_illegal} \end{eqnarray} (16) \begin{eqnarray} w^{leg}_{ijt}&=&\beta^{leg}X_{it}+\gamma^{leg}_j+\kappa^{leg}_i+\epsilon^{leg}_{ijt}.\label{eqn:wage_legal} \end{eqnarray} (17) Wages depend on demographic characteristics $$X_{it}$$, which include education, gender, age, and whether or not a person has family living in the U.S.36 I include time trends to allow for changes over time, as well as location fixed effects $$\gamma_j$$.37 The match component, which is the source of unobserved heterogeneity over wages, is written as $$\kappa_i=\{\kappa_i^{ill},\kappa^{leg}_i\}$$. When estimating these terms, I assume the legal and illegal fixed effects are each drawn from a symmetric three-point distribution where each value is equally likely. There is a correlation between the unobserved types of husbands and wives. Each individual knows the value of his fixed effect if he were to move to the U.S. For legal immigrants, the MMP only has a small number of observations with wage information, making it difficult to estimate the wage parameters precisely. I therefore use data on Mexican-born individuals in the CPS, jointly with the MMP wage observations for legal immigrants, to estimate this set of wage parameters.38 For the CPS data, I do not have information on migration decisions, so these individuals contribute to the likelihood through just their wages.

6.2. Moving costs

Here I explain the determinants of moving costs for the mover types in the model.
The full parameterization of the moving cost function is explained in Online Appendix C. The cost of moving includes a fixed cost, and also depends on the distance between locations, calculated as the driving distance between the most populous cities in each state.39 The cost of moving also depends on age, which captures other effects of age on immigration that are not accounted for in the model or the wage distribution. The population size of the destination also affects moving costs, to account for the empirical fact that people are more likely to move to larger locations.40 For people moving to the U.S., I allow the moving cost to depend on education. Networks, defined as the people an individual knows who are already living in the U.S., can affect that person's cost of moving to the U.S.41 Empirical evidence shows that migration rates vary across states, suggesting that people from high-migration states have larger networks. I exploit differences in state-level immigration patterns, which have been well documented empirically, to measure a person's network. I use the distance to the railroad as a proxy for regional network effects.42 When immigration from Mexico to the U.S. began in the early 1900s, U.S. employers used railroads to transport labourers across the border, meaning that the first migrants came from communities located near the railroad (Durand et al., 2001). These communities still have the highest immigration rates today. U.S. border enforcement affects the border crossing costs for illegal immigrants. However, there is potential endogeneity in that enforcement at each sector could be affected by the number of migrants crossing there. To account for this, I follow Bohn and Pugatch (2015) and use enforcement levels, lagged by two periods, to predict future enforcement.
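As a sketch of this predictor, predicted enforcement at a sector is simply the observed hours series shifted back two years; the sector name and hour values below are illustrative, not CBP figures:

```python
# Person-hours patrolling, keyed by (sector, year). Illustrative numbers only.
hours = {
    ("San Diego", 1994): 1.00, ("San Diego", 1995): 1.20,
    ("San Diego", 1996): 1.50, ("San Diego", 1997): 1.80,
}

def predicted_enforcement(hours, sector, year):
    """Two-year-lagged hours: the value entering the crossing-cost term.
    Returns None when no observation exists two years earlier."""
    return hours.get((sector, year - 2))
```

For example, predicted enforcement at San Diego in 1996 is the 1994 value, 1.00; the first two years of the sample have no lagged value available.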
Budget allocations for border enforcement are typically determined two years ahead of time, although extra resources can be allocated when needed due to unexpected shocks. The two-year-lagged values of border enforcement levels therefore represent the best predictor of future enforcement needs before these shocks hit. This controls for the endogeneity of enforcement and migration flows at each sector. The cost of moving through a specific border patrol sector depends on the predicted enforcement levels there, as well as a fixed cost of crossing through that point. Some of the border crossing points consistently have low enforcement, yet few people choose to cross there. I assume that there are other reasons, constant across time, that account for this trend, such as being in a desert where it is dangerous to cross. The estimated fixed costs account for these factors. Since the model is dynamic, I need to make assumptions about people's beliefs on future levels of border enforcement. I assume that people have perfect foresight on border enforcement, which is a strong assumption; I have also estimated the model assuming myopic expectations, and the results were similar.43

6.3. Transition rates

The transition probabilities defined in Section 3.2.3 are over spouse locations, legal status, and marriage rates. The transitions over spouse's location come from the choice probabilities in the model. The legal status and marriage transition rates come from the data. Using the MMP data, I estimate the probability that a person switches from illegal to legal status with a probit regression that controls for education, family networks, and gender. I assume the amnesty due to IRCA in 1986 was unanticipated. People could only be legalized under IRCA if they had lived in the U.S. continuously since 1982. Therefore, this policy would only affect immigration decisions if it had been anticipated four to five years prior to implementation, making this assumption reasonable.
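One way to read an imputed annual legalization probability is through the implied chance of being legalized within a horizon of $$k$$ years, $$1-(1-p)^k$$, treating the annual probability as constant. A small illustrative helper (not part of the estimation code; the probability values are made up):

```python
def prob_legal_within(p_annual, k_years):
    """Probability of switching from illegal to legal status at least once
    within k years, given a constant annual transition probability."""
    return 1.0 - (1.0 - p_annual) ** k_years
```

For instance, an annual probability of 0.1 implies a 19% chance of legalization within two years; someone with no chance in any year (p = 0) never transitions.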
The results of this regression, shown in column (1) of Table A7 in Online Appendix A, indicate that having family in the U.S. and being male strongly increase the probability of being granted legal status. I use the results of this regression to impute a probability that each person is granted legal status, which enters the model estimation as an exogenously given transition rate. In the model, single people know that there is some probability that they will get married in future periods. I estimate marriage rates using a probit regression; column (2) of Table A7 in Online Appendix A shows how different factors affect the probability of becoming married. I use these results for the transition probabilities in the model estimation.

6.4. Utility function

Utility depends on a person's expected wage, which is a function of his location and characteristics. A person's utility increases if he is living in his home location, defined as the state in which he was born. I allow for utility to increase if a person is in the same country as his spouse. Alternatively, I could have assumed that this depends on being in the same location as one's spouse, but that would significantly increase computation. My methodology only requires tracking the country, rather than the exact location, of the spouse, yet it still captures the empirical pattern that people make migration decisions to be near their spouse.44 I also allow for higher utility in the U.S. if a person has family members living there. The full parameterization of the utility function is explained in Online Appendix C.

6.5. Likelihood function

In this section, I explain the derivation of the likelihood function; the full details are in Online Appendix D. I estimate the model using maximum likelihood.
I calculate the likelihood function at the household level, integrating over the probability that each household is of a specific moving cost type, the probability that each person has a specific wage fixed effect, and the probability that the woman is a worker type.45 For each household, I observe a history of location choices for the primary and secondary mover. These choices depend on the moving cost type $$\tau$$; I assume there are mover and stayer types, where the stayer types have infinitely high costs of moving to the U.S. Women can be worker or non-worker types, where utility for the non-worker types is not affected by wages. For each person, I observe wage draws when in the U.S. There is unobserved heterogeneity in the wage draws: these are individual-specific terms, known by every member of the household but unobserved by the econometrician. I allow for correlation between the unobserved types of husbands and wives. First, I explain how I calculate the likelihood function conditional on moving cost type and wage type. The migration probabilities for each period come from the choice probabilities defined in equations (11) and (6). For secondary movers, I differentiate between the choice probabilities for worker and non-worker types. I calculate the probability of the observed history for a household when the woman is a worker type and when she is a non-worker type, and then integrate over the probability that the woman is a worker type. The previous explanation was for the likelihood conditional on moving cost and wage type. To calculate the full likelihood, I incorporate the probability that a household has moving cost type $$\tau$$ and that each individual has a given wage type. I estimate the probability that a household has moving cost type $$\tau$$.
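The type-mixture structure of the household likelihood can be sketched as follows. This is a minimal illustration, not the paper's estimator: the function name, the toy choice probabilities, and the two-type structure over women are assumptions for exposition (the mover-type probability 0.68 is taken from Table 8, the worker-type probability is made up).

```python
# Sketch of a household-level mixture likelihood: the conditional
# likelihood (a product of per-period choice probabilities) is
# integrated over unobserved mover/stayer and worker/non-worker types.

def household_likelihood(choice_probs_by_type, p_mover, p_worker):
    """choice_probs_by_type maps (cost_type, worker_type) -> list of
    per-period choice probabilities for the observed history."""
    total = 0.0
    for cost_type, p_cost in (("mover", p_mover), ("stayer", 1.0 - p_mover)):
        for worker_type, p_w in (("worker", p_worker),
                                 ("non-worker", 1.0 - p_worker)):
            lik = 1.0
            for p in choice_probs_by_type[(cost_type, worker_type)]:
                lik *= p  # product of per-period choice probabilities
            total += p_cost * p_w * lik
    return total

# Toy two-period history for a household observed moving to the U.S.:
# stayer types never move, so their conditional likelihood is zero.
probs = {
    ("mover", "worker"): [0.9, 0.8],
    ("mover", "non-worker"): [0.7, 0.6],
    ("stayer", "worker"): [0.0, 0.0],
    ("stayer", "non-worker"): [0.0, 0.0],
}
L = household_likelihood(probs, p_mover=0.68, p_worker=0.5)
```

The stayer rows contribute nothing here, which is exactly how an observed move to the U.S. identifies the mover-type probability in estimation.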
I allow for correlation between the types of husbands and wives by estimating the probability that a woman with a given wage type is married to a man with a given type. This allows for assortative matching in the labour market, if the estimates reveal that a high-wage-type man is most likely to be married to a high-wage-type woman.46

7. Results

Table 8 reports the utility parameter estimates.47 The results show that people prefer to live in their home location, and that men living in the same location as their spouse have higher utility. There is no statistically significant effect for women, which is explained by the assumptions of the model: because women rarely move to the U.S. without their husband, I assumed that a married woman cannot live in the U.S. unless her husband is there. Without this assumption, I would estimate a much larger preference among women for living in the same location as one's spouse, since women do not move to the U.S. without their husbands. In addition, people with family in the U.S. have higher utility when living in the U.S. than those who do not. There are mover and stayer types in the model; the estimation is set up so that the fixed cost of moving to the U.S. is infinite for stayer types, so they never choose to make that move. I find that the probability that a household is a mover type is close to 70%.

Table 8. Utility parameter estimates

  Wage term                  0.056        (0.0022)
  Home bias                  0.20         (0.0040)
  With spouse (men)          0.36         (0.053)
  With spouse (women)        0.032        (0.042)
  Family in U.S.             0.029        (0.012)
  Probability (mover type)   0.68         (0.022)
  Log-likelihood             –232,643.05

Notes: Standard errors in parentheses.
In a separate exercise, I estimated a simpler version of this model, removing the utility preference for living in the same location as one's spouse. This leads to a significant change in the likelihood at the optimal point: equality of the likelihoods of the original and simpler models was rejected by a likelihood ratio test.48 This shows that this part of the model substantially improves its ability to fit observed decisions.

Table 9. Immigrant wage estimates

                                Illegal         Legal
  Age                           2.63 (1.14)     6.17 (0.13)
  Age-squared                   –0.44 (0.21)    –0.64 (0.016)
  5–8 years education           1.23 (0.20)     1.49 (0.16)
  9–11 years education          1.93 (0.21)     2.80 (0.16)
  12 years education            2.24 (0.24)     4.83 (0.15)
  13+ years education           2.05 (0.37)     6.87 (0.16)
  Family in U.S.                –0.51 (0.22)
  Male                          1.31 (0.28)     2.76 (0.047)
  Match component               2.29 (0.25)     0.98 (0.60)
  Constant                      0.96 (1.44)     –6.96 (0.27)
  Standard deviation of wages   2.52 (0.082)    5.08 (0.078)

  Match probabilities
  Low–low         0.32 (2.17)
  Low–medium      0.01 (2.01)
  Medium–low      0.01 (2.05)
  Medium–medium   0.28 (1.96)

Notes: Standard errors in parentheses. The excluded category is people with fewer than five years of education. Age is divided by 10. The match components are drawn from a three-point symmetric distribution around zero. The first component in the match probability is for the husband, and the second is for the wife. The wage equations include time trends in education and location fixed effects from the CPS.

Table 9 shows the parameters of the immigrant wage distribution, for both legal and illegal immigrants. There are stronger returns to education for legal immigrants than for illegal immigrants, reflecting that high-skilled legal immigrants can access jobs that reward these skills.49 The age profile has the standard concave shape for legal immigrants. For illegal immigrants, wages increase slightly at young ages, but then decrease. For the age range that comprises most of the sample, the wage profile is essentially flat, since the steeper drop-off in wages does not begin until older ages.

Table 10. Moving cost estimates

                         Mexico to U.S.     Return migration    Internal migration
  Fixed cost for men     3.43 (0.47)        3.59 (0.37)         3.53 (0.12)
  Fixed cost for women   2.22 (0.45)        6.40 (0.38)         3.55 (0.12)
  Distance (legal)       0.60 (0.18)        –0.91 (0.086)       0.0000027 (0.046)
  Age                    0.0047 (0.013)     0.062 (0.014)       0.13 (0.0050)
  Population size        0.0053 (0.00034)   –0.00016 (0.0013)   –0.014 (0.00091)
  Distance to railroad   0.30 (0.027)
  5–8 years education    –0.047 (0.089)
  9–11 years education   –0.21 (0.084)
  12 years education     0.67 (0.12)
  13+ years education    0.98 (0.18)

Notes: Standard errors in parentheses. Distance measured in thousands of miles. Population divided by 100,000.

Table 10 shows the moving cost parameters (excluding the parts related to illegal immigration). There are three moving cost functions: Mexico to U.S. migration, return migration, and internal migration. The first component of the moving cost is the fixed cost of moving, which I allow to vary with gender. The moving cost also depends on the distance between locations. For Mexico to U.S. (legal) migration, the cost increases with distance, as expected, and there is no statistically significant effect of distance on internal migration decisions. For return migration, the moving cost decreases with distance. The location in Illinois has both the highest return migration rates and the greatest distance from the border,50 and this behaviour is most likely driving the parameter estimate.
Moving costs also depend on population size, in that I would expect people to be more likely to move to larger locations.51 For internal migration, the moving cost decreases with population size, indicating that people are indeed more likely to move to larger locations. For Mexico to U.S. migration, the effect is positive but small. Population size is perhaps not an accurate proxy in this case, since migrants may care more about the number of people from their community in a location than about total population size.

Table 11. Illegal immigration parameter estimates

  Distance      1.23 (0.056)
  Enforcement   0.04 (0.0069)
  Fixed cost    1.17 (0.39)

  Crossing point fixed costs
  El Paso, TX             –1.07 (0.26)
  San Diego, CA           –4.01 (0.23)
  Laredo, TX              –0.37 (0.28)
  Rio Grande Valley, TX   0.065 (0.30)
  Tucson, AZ              –2.05 (0.24)
  El Centro, CA           –2.36 (0.24)

Notes: Standard errors in parentheses. Enforcement measured in 10,000 person-hours. Distance measured in thousands of miles.

Table 11 shows the parameter estimates relating to illegal immigration.
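The crossing-point cost implied by these estimates can be sketched as follows. The coefficients are the Table 11 point estimates; everything else (the distances, enforcement levels, and the cost-minimizing choice rule) is a made-up illustration, since in the full model the crossing point enters a dynamic discrete choice rather than a simple minimization.

```python
# Illustrative crossing-point cost, using Table 11 coefficients:
# cost = fixed_cost + 1.23 * (total miles / 1000) + 0.04 * (person-hours / 10,000)

BETA_DIST = 1.23   # per thousand miles
BETA_ENF = 0.04    # per 10,000 person-hours of enforcement
FIXED = {"San Diego, CA": -4.01, "Tucson, AZ": -2.05, "El Paso, TX": -1.07}

def crossing_cost(point, miles_to_point, miles_to_dest, enforcement_hours):
    """Cost of crossing at `point`, given hypothetical distances from the
    origin state to the point and from the point to the U.S. destination."""
    return (FIXED[point]
            + BETA_DIST * (miles_to_point + miles_to_dest) / 1000.0
            + BETA_ENF * enforcement_hours / 10000.0)

# Hypothetical route options for one origin-destination pair:
options = {
    "San Diego, CA": crossing_cost("San Diego, CA", 1500, 100, 300000),
    "Tucson, AZ": crossing_cost("Tucson, AZ", 1200, 400, 150000),
    "El Paso, TX": crossing_cost("El Paso, TX", 900, 700, 100000),
}
best = min(options, key=options.get)
```

Even with the highest enforcement in this toy example, San Diego's strongly negative fixed cost makes it the cheapest crossing, which mirrors how the estimation rationalizes the observed concentration of crossings there.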
Distance increases the cost of moving, where distance is calculated as the distance from the Mexican state to the crossing point plus the distance from the crossing point to the U.S. destination. This allows location choices and crossing point decisions to be related. I find that moving costs increase with border enforcement. I estimate a separate fixed cost for each border crossing point. Crossing points with low levels of enforcement where few people nonetheless cross have high estimated fixed costs. Conversely, San Diego is where the greatest share of people cross even though it has the highest enforcement, so the estimation finds that this point has the lowest fixed cost.

7.1. Model fit

To assess model fit, I first show statistics on annual Mexico to U.S. and return migration rates, comparing the values in the data to the model's predictions. The first row of Table 12 shows the whole sample, and the next two rows split the sample by legal status. The model fits migration rates for illegal immigrants well, but it is unable to match the high migration rates of legal migrants and overestimates their return migration rates. Legal immigrants are a small part of the sample. The model allows for different moving costs and wages for legal immigrants, but since most of the other parameters are shared, the model cannot fit the data for legal immigrants well.52 The last four rows split the sample by marital status, first for the full history sample and then for the partial history sample. The full history sample is split into married primary movers, married secondary movers, and single people. The model underpredicts the migration rates of both married men and married women, although it does capture that primary movers are much more likely to move than secondary movers. Table 13 splits the sample by education, reporting the same summary statistics, and again shows that the model fits annual migration rates relatively well.
Looking at this along another dimension, Figures 4 and 5 show the annual migration rates over time, and Figures 6 and 7 split the sample by age. The model fits the general trends relatively well, although it overestimates return migration rates in the later years.

Figure 4. Model fit: Mexico to U.S. migration rates by year. Notes: For each year, I calculate the average Mexico to U.S. migration rate, in the data and as predicted by the model.

Figure 5. Model fit: return migration rates by year. Notes: For each year, I calculate the average return migration rate, in the data and as predicted by the model.

Figure 6. Model fit: Mexico to U.S. migration rates by age. Notes: For each age, I calculate the average Mexico to U.S. migration rate, in the data and as predicted by the model.

Figure 7. Model fit: return migration rates by age. Notes: For each age, I calculate the average return migration rate, in the data and as predicted by the model.

Table 12. Model fit: annual migration rates

                            Mexico to U.S. migration rate    Return migration rate
                            Model (%)    Data (%)            Model (%)    Data (%)
  Whole sample              2.60         2.37                10.1         8.50
  Illegal immigrants        2.53         2.19                10.1         9.31
  Legal immigrants          16.55        40.83               9.78         4.52
  Full history sample
    Primary movers          1.45         3.30                22.26        29.03
    Secondary movers        0.00073      0.0027              33.87        25.48
    Single people           3.46         2.15                10.89        24.93
  Partial history sample    2.63         2.46                9.33         5.64

Notes: I calculate the model-predicted Mexico to U.S. and return migration rates for all individuals in the sample, and compare them to the rates in the data. For Mexico to U.S. migration, I use all people in Mexico at the start of the period. For return migration, I use all people in the U.S. at the start of the period.

Table 13. Model fit: annual migration rates by education

                        Mexico to U.S. migration rate    Return migration rate
  Years of education    Model (%)    Data (%)            Model (%)    Data (%)
  0–4                   2.38         2.07                11.55        11.87
  5–8                   2.99         2.97                10.42        8.93
  9–11                  3.41         2.69                10.65        8.10
  12                    1.81         1.92                7.23         6.0
  13+                   0.81         0.79                7.96         7.47

Notes: I calculate the model-predicted Mexico to U.S. and return migration rates for all individuals in the sample, and compare them to the rates in the data. For Mexico to U.S. migration, I use all people in Mexico at the start of the period. For return migration, I use all people in the U.S. at the start of the period.

Next I look at the fit of the dynamic aspects of the model. In Table 14, I calculate three statistics in the data: the percentage of the sample that moves to the U.S., the number of moves to the U.S. per migrant, and the average duration of each move to the U.S. I then simulate the model and calculate its predicted values for each of these variables. The model has too few people moving, and those who move stay longer than in the data. The number of moves per migrant matches the data very well.

Table 14. Model fit: lifetime behaviour

                                Model    Data
  Percent that move (%)         17.23    19.2
  Years per move                5.51     4.39
  Number of moves per migrant   1.20     1.18

Notes: These numbers are based on simulations of the model using the data in the sample.

Figure 8 shows the model fit for wages of illegal immigrants, splitting the sample by age. For younger ages, the model fits the data well, although it tends to overestimate wages. For older ages, the model underestimates wages.
The model also estimates wages for legal immigrants. Using the model estimates, I find that the average illegal immigrant would earn 18% more as a legal immigrant. In comparison, Kossoudji and Cobb-Clark (2002) estimate a wage penalty from being an undocumented immigrant of 14–24%, so my result falls within their range.

Figure 8. Model fit: wages for illegal immigrants. Notes: I calculate the average wage of people living in the U.S. illegally, in the data and as predicted by the model.

8. Counterfactuals

In the counterfactuals, I study how changes in relative wages and U.S. border enforcement affect immigration decisions. I find that increased Mexican wages reduce migration rates and the duration of stays in the U.S. Increased border enforcement reduces migration rates and increases return migration rates. However, for married men living in the U.S. alone, there is a secondary effect on return migration: it becomes harder for their wives to move to the U.S., giving the men an extra incentive to return home. I isolate this effect in a counterfactual. In all of these counterfactuals, I include only the population of illegal immigrants, to focus on the group most affected by policy changes. In each counterfactual, I simulate the model in the baseline and in the alternate policy environments. I then calculate the percentage of the sample that moves to the U.S., the average number of moves to the U.S. per migrant, the average number of years spent living in the U.S. per move, and the average number of years a person lives in the U.S. over a lifetime. These summary statistics indicate how immigration behaviour changes in these alternate environments.
8.1. Changes in wages

In the first counterfactual, I look at the effect of a 10% increase in Mexican wages, holding U.S. wages constant.53 Over time, as Mexico's economy grows, the wage gap between the two countries will narrow; this counterfactual analyses how that will affect illegal immigration. The first row of Table 15 shows the baseline simulation, and the second row shows the results after a 10% increase in Mexican wages. After this change, fewer people move to the U.S., and for those who move, the duration of each trip decreases. This reflects a higher value of living in Mexico in the counterfactual due to the higher wages. These effects combine to decrease the average number of years that a person lives in the U.S. by around 5%.

Table 15. Counterfactuals

                                               Percent that    Years per    Moves per    Years in U.S.
                                               move (%)        move         mover        per person
  Baseline                                     17.23           5.51         1.20         1.14
  10% increase in Mexican wages                16.53           5.43         1.20         1.08
    in all locations but home                  17.51           5.44         1.20         1.14
  10% decrease in U.S. wages                   15.55           5.29         1.20         0.99
  50% increase in enforcement                  16.35           5.67         1.18         1.10
  50% increase in enforcement (equal costs)    15.49           5.81         1.17         1.05

Notes: These are the results from simulations of the model, including only the sample of individuals who cannot migrate legally.

Alternatively, I can use the model to study how migration changes in response to variations in U.S. wages. Lessem and Nakajima (2015) show that downturns in the U.S. economy more adversely affect illegal immigrant wages than native wages, due to the frequent renegotiation of labour contracts in the former population. The fourth row of Table 15 shows the counterfactual outcomes after a 10% decrease in U.S. wages. This decrease substantially discourages immigration, reducing the number of people who move and the duration of each move. Overall, it decreases the number of years spent in the U.S. by around 13%, a much larger effect than that of the 10% increase in Mexican wages. The difference is mostly driven by the fact that a 10% decrease in U.S. wages is larger in absolute terms than a 10% increase in Mexican wages, because wage levels are higher in the U.S. To put these results into perspective, I compare them to the findings of Hanson and Spilimbergo (1999), who estimate a wage elasticity of migration with respect to Mexican wages of between $$-0.64$$ and $$-0.86$$. My results are not directly comparable, since Hanson and Spilimbergo (1999) look at changes in apprehensions, a proxy for static migration rates. On the other hand, I calculate how the total number of years a person spends in the U.S.
responds to wage changes. Nonetheless, I find an elasticity of $$-0.54$$, which is quite close to their range. I can also compare wage elasticities with respect to U.S. wages: Hanson and Spilimbergo (1999) find an elasticity ranging between 0.9 and 1.64, and I find an elasticity of 1.17, which falls within that range. My model allows for internal migration as well as Mexico to U.S. migration, which enables me to study how non-uniform changes in the Mexican economy affect migration patterns. For example, wages could rise or fall in certain locations in Mexico without affecting everyone directly. However, since people can move internally, changes in wages in alternate Mexican locations can still affect U.S. migration patterns. To put a bound on this effect, I simulate a version of the model in which all wages except those in a person's home location increase by 10%. This change increases the value of living in all Mexican locations except one's home location, which will increase internal migration. The results are in the third row of Table 15. There is a slight increase in the percentage of the sample that moves to the U.S., which is surprising given that the value of living in Mexico has increased. However, consider the mechanisms in the model. Because of the increased wages in alternate locations in Mexico, internal migration goes up, so people are more likely to be living in a non-home location. People who start from a location other than their home location are more likely to move to the U.S., since they, unlike those leaving their home location, forgo no home premium by moving. Another reason is that increased internal migration can bring people to locations with lower costs of moving to the U.S. The duration of each trip remains the same as in the case of the 10% increase in wages in all Mexican locations.
This is an interesting set of results that could not have been obtained without a model that allows for internal as well as international migration. The results in Table 15 look at the sample as a whole, but I can also use the model to isolate the role that family decisions play. Consider a married man living in the U.S. without his spouse. As Mexican wages increase, his wife will be less likely to join him in the U.S., providing an extra incentive for him to return home. To isolate this effect, I run a counterfactual where I increase Mexican wages but hold female migration rates at the baseline level. These results (looking at only married men in the full history sample) are in Table 16. The first row shows the baseline case, and the second row shows a counterfactual with a 10% increase in Mexican wages. In the third row, I increase Mexican wages but keep female migration rates at the baseline level. In this case, I see an increase in the number of years a married man spends in the U.S. as compared to the original counterfactual. In the original counterfactual, the increased Mexican wages cause a decrease in migration durations of about 2.74%, as compared to a 1.76% decrease when female migration rates do not adjust.

Table 16. Counterfactuals: married men only

                                             Percent that  Years     Moves      Years in U.S.
                                             move (%)      per move  per mover  per person
Baseline                                     23.03         4.85      1.28       1.43
10% increase in Mexican wages                22.11         4.72      1.28       1.33
    Transition probability constant          22.48         4.77      1.27       1.36
50% equal-costs increase in enforcement      20.34         5.17      1.23       1.30
    Transition probability constant          20.67         5.18      1.23       1.32

Notes: These are the results from simulations of the model, only including the sample of married men who cannot migrate legally.

8.2. Increased border enforcement

Next, I calculate how increased enforcement affects immigration, assuming that the number of person-hours allocated to enforcement at each crossing point increases by 50%. This provides insight as to how immigration would respond to further increases in border enforcement. The results of this counterfactual are in the fifth row of Table 15. The percentage of the sample that moves decreases, the number of moves per migrant slightly decreases, and the duration of each move increases, with this last effect reflecting dynamic considerations. Overall, this increase in enforcement reduces the average amount of time a person lives in the U.S. by about 3%.
In the model, individuals not only choose where to live, but also choose where to cross the border when moving to the U.S. illegally. Each crossing point has a different estimated fixed cost and enforcement level. My model can be used to "optimally" allocate border enforcement in the counterfactual. I again assume a 50% total increase in enforcement, where the extra resources are now allocated to minimize illegal immigration rates, assuming that this is the government's objective. The solution to the government's problem in my model indicates that the cost of crossing at each sector of the border should be equal. Due to the wide variation in the estimated fixed costs across border patrol sectors, it is not possible to reach this point with a 50% increase in enforcement. To get closest to this point, the extra resources should be allocated to the sectors of the border with the lowest fixed costs of crossing. These points also have the highest enforcement levels, but even after accounting for the effects of enforcement, the costs of crossing there are still lowest. The last row of Table 15 shows the overall effects of this policy change. As with the uniform increase in enforcement, fewer people move, and the duration of each move increases. When the extra enforcement is allocated following this equal-costs strategy, the average number of years spent in the U.S. decreases by 7%, whereas it decreased by around 3% with the uniform increase in enforcement. This shows that the effect of increased enforcement depends on the allocation of the extra resources. As with wages, there is a secondary effect on return migration rates for married men. As enforcement increases, durations of stay increase, as discussed above. However, for married men living in the U.S. alone, the increase in migration costs makes it less likely that their wives will join them in the U.S. This gives an extra incentive for men to return to Mexico, pushing the duration of stays in the U.S. downward. 
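The equal-costs allocation described above can be illustrated with a stylized greedy rule: each extra unit of enforcement goes to whichever sector is currently cheapest to cross, which pushes crossing costs toward equality. The sector costs, the per-hour cost slope, and the budget below are hypothetical round numbers, not the paper's estimates.

```python
import heapq

def allocate_enforcement(fixed_costs, slope, extra_hours):
    """Greedy sketch of an 'equal-costs' allocation: each extra enforcement
    hour goes to the sector where crossing is currently cheapest, raising
    that sector's crossing cost by `slope`.  All numbers are hypothetical."""
    heap = [(cost, s) for s, cost in enumerate(fixed_costs)]
    heapq.heapify(heap)
    hours = [0] * len(fixed_costs)
    for _ in range(extra_hours):
        cost, s = heapq.heappop(heap)   # currently cheapest sector
        hours[s] += 1                   # assign one hour there
        heapq.heappush(heap, (cost + slope, s))
    return hours

# Three stylized sectors with crossing costs 100, 200, 400: the cheapest
# sectors absorb the extra hours first, equalizing costs where feasible.
hours = allocate_enforcement([100, 200, 400], slope=1, extra_hours=300)
print(hours)  # [200, 100, 0] -> crossing costs become [300, 300, 400]
```

As in the text, a limited budget may be too small to equalize costs everywhere: here the most expensive sector receives nothing, exactly because the cheapest-to-cross sectors are where the marginal hour deters the most crossings.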
The same spousal mechanism makes married men less likely to move to the U.S. in the first place, since the value of living there is now lower because their wives are less likely to move. The composition of the migrant workforce changes in this alternate policy environment. In the baseline case, looking at the full history sample, 17.2% of the person-years spent in the U.S. are by married individuals. After the equal-costs increase in enforcement, 16.9% of those person-years are by married people. To isolate these mechanisms, in the next counterfactual I increase border enforcement while holding female migration rates constant. Table 16 shows the results of this counterfactual for the sample of married men. The first row shows the baseline case, and the fourth row shows the results for a 50% equal-costs increase in enforcement. The fifth row runs the same counterfactual, holding female migration rates at the baseline level. When the migration rates are held constant, there is an even larger increase in durations for married men as enforcement increases, as compared to the original counterfactual scenario.

8.3. Wages and enforcement

It is interesting to compare the effects of increased enforcement to those of increased Mexican wages. The 50% increase in enforcement, allocated following the equal-costs strategy, decreases immigration by about 7%. I compare that to the approximate increase in Mexican wages required to reach the same goal. This would occur with close to a 14% increase in Mexican wages, which is a relatively small narrowing of the Mexico–U.S. wage gap. In comparison, a 50% increase in border enforcement is an expensive policy. Expenditures on border enforcement were estimated to equal $2.2 billion in 2002, meaning that this policy could cost over $1 billion (Hanson, 2005). Furthermore, changes in enforcement levels can affect the wage elasticity of migration, which is an issue that has been of interest to policymakers. I compare reactions to a 10% increase in Mexican wages.
In the baseline case, this results in a 5.4% decrease in years spent in the U.S. When enforcement is increased by 50% following the equal-costs strategy, a 10% increase in Mexican wages has a larger effect on immigration behaviour, reducing the years spent in the U.S. by 6%. This effect operates almost entirely through the number of people who choose to move to the U.S.: after enforcement increases, an increase in Mexican wages has a larger effect on the number of people who move. In both the normal and increased enforcement cases, as Mexican wages increase, durations of each trip to the U.S. decrease, but by similar amounts.

9. Conclusion

In this article, I estimate a discrete-choice dynamic programming model where people pick from a set of locations in the U.S. and Mexico in each period. I allow for a person's decisions to depend on the location of his spouse, where individuals in a household make decisions sequentially. I use this model to understand how wage differentials and U.S. border enforcement affect an individual's immigration decisions. I allow for differences in the model according to whether a person can immigrate to the U.S. legally. For illegal immigrants, the moving cost depends on U.S. border enforcement. Border enforcement is measured using data from U.S. Customs and Border Protection (CBP) on the number of person-hours spent patrolling different regions of the border at each point in time. I use this cross-sectional and time series variation in enforcement, combined with individual decisions on where to cross the border, to identify the effects of enforcement on immigration decisions. After estimating the model, I find that increases in Mexican wages reduce immigration from Mexico to the U.S. and increase return migration rates. Simulations show that a 10% increase in Mexican wages reduces the average number of years that a person lives in the U.S. over a lifetime by around 5%.
Increases in border enforcement would decrease both immigration and return migration, with the latter effect occurring because, as enforcement increases, individuals living in the U.S. expect that it will be harder to re-enter the country in the future. Married men's durations of stay also adjust to changes in their wives' behaviour. Because moving to the U.S. is now more costly, women are less likely to join their husbands in the U.S., providing an extra incentive for the husbands to return home. Overall, a uniform 50% increase in enforcement would reduce the amount of time that individuals in the sample spend in the U.S. by approximately 3%. If instead the same increase in enforcement were allocated along the border in a way that minimizes immigration rates, the number of years that the average person in the sample lived in the U.S. would drop by about 7%. These results indicate that the effects of enforcement are dependent on the allocation of the extra resources. These results have important implications. The U.S. government is considering increasing border enforcement in the future. Hanson (2005) reports that expenditures on border enforcement equalled approximately $2.2 billion in 2002. I find that about an extra $1 billion in expenditures would decrease immigration by 7%. Furthermore, I find that the effects of increased enforcement strongly depend on the allocation of resources along the border. Over the past 20 years, enforcement levels have increased substantially, and the growth in enforcement has been concentrated at certain sectors of the border. If the goal of the U.S. government is to reduce illegal immigration rates, then my model suggests that this has been the correct strategy. Furthermore, if the U.S. increases enforcement in the future, my results indicate that the government should continue to follow this pattern. My results imply that increases in Mexican wages reduce illegal immigration.
In the paper, I simulate the effects of a 10% growth in Mexican wages, finding that it significantly reduces the amount of immigration, even though there is still a large U.S.–Mexico wage gap. Because of the large moving costs and a strong preference for living at one's home location, illegal immigration will decrease substantially as the wage differential is reduced. Furthermore, wage growth does not have to be uniformly distributed in Mexico to affect immigration. Empirical evidence shows that wage growth has not been uniform and that regional wage disparities within Mexico have grown, particularly since the North American Free Trade Agreement. The areas with the most growth are the ones with access to foreign trade and investment. In this article, I study immigration in a partial equilibrium framework, not allowing for general equilibrium effects. Increases in immigration could drive down wages in the U.S. or cause higher wages in Mexico. However, there is no clear conclusion with regard to these general equilibrium effects. Kennan (2013) develops a model that predicts that migration will change wage levels but not the wage ratios between countries. The empirical evidence is mixed, as some research finds a small effect of immigration on U.S. wages while other authors find larger effects.54 In my model, I also assume that legal immigration status is exogenously determined. In reality, legal immigration rates are determined by how many people have applied for visas, which is likely affected by the current number of illegal immigrants (since many people apply for visas after moving to the U.S.). Both of these equilibrium effects pose important questions that could be addressed in future work. This article is a first step in that direction and helps to provide the foundation for such an analysis. The article is also limited in that it does not allow for a relationship between savings and migration decisions, as in Thom (2010) and Adda et al. (2015).
This is an additional area for future research on this topic. The editor in charge of this paper was Stephane Bonhomme.

Acknowledgements

I thank the referees and editor for their suggestions on this paper. I also thank Limor Golan, John Kennan, Brian Kovak, Sang Yoon Lee, Salvador Navarro, Chris Taber, Yuya Takahashi, Jim Walker, and participants at seminars at UW-Madison, Carnegie Mellon, Ohio State, Penn State, Kent State, and American University for helpful comments and advice. Maria Cellar provided excellent research assistance. All errors are my own.

Footnotes

1. In 2004, remittances comprised 2.2% of Mexico's GDP, contributing more foreign exchange to Mexico than tourism or foreign direct investment (Hanson, 2006).
2. Hong (2010) applies a similar framework to Mexico–U.S. immigration, focusing on the legalization process.
3. Cerrutti and Massey (2001) find that women usually move to the U.S. following a family member, whereas men are much more likely to move on their own. Massey and Espinosa (1997) find that illegal immigrants are more likely to return to Mexico if they are married.
4. Another paper that looks at savings decisions is Adda et al. (2015), who develop a lifecycle model where migrants decide optimal migration lengths, along with savings and investment in human capital. They estimate this model using panel data on immigrants to Germany, and study the relationship between return migration intentions and human capital investments. In comparison to my work, this paper studies the decisions of migrants after they enter the host country.
5. Kossoudji and Cobb-Clark (2000) and Kossoudji and Cobb-Clark (2002) find that illegal immigrants receive lower wages than legal immigrants and are less likely to work in high-skill occupations when in the U.S.
6. Gathmann (2008) studies the behaviour of repeat migrants and finds that they switch their crossing point in response to an increase in enforcement at the initial crossing point.
7. Blejer et al.
(1978), Crane et al. (1990), Passel et al. (1990), Donato et al. (1992), and Kossoudji (1992) find that migrants who are caught at the border attempt to enter the U.S. again.
8. I assume that once an illegal immigrant enters the U.S., there is no chance that he will be deported. Espenshade (1994) finds that only 1–2% of illegal immigrants living in the U.S. are caught and deported in each year.
9. An alternative approach would be to model the household problem, where the household jointly decides where the husband and wife will live in each period. However, this is computationally difficult, as the state space would have to contain the location of the husband and wife. Technically, the state space in my model also contains the locations of both individuals, but using my framework I am able to make certain assumptions that substantially reduce the state space and make the problem computationally feasible. These assumptions are explained in Section 6.4.
10. I do not allow for any expectations of divorce in the model.
11. I assume that legal status is an absorbing state: once a person is a legal immigrant, he cannot lose the ability to move legally.
12. In equations (12) and (13), I assume that people who are married will remain so, since there is no chance of their marital status changing.
13. The data and a discussion of the survey methodology are posted on the MMP website: mmp.opr.princeton.edu.
14. In most cases, I at least know the country a person is living in, if not his exact location. In 99% of the person-year observations, I know the country the surveyed people are living in.
15. The enforcement data end at 2004. Therefore, I only include location decisions up to 2004.
16. The MMP website shows a map of included communities: http://mmp.opr.princeton.edu/research/maps-en.aspx.
17. The MMP attempts to track individuals in the U.S., but has had limited success, so I do not include these observations.
18. I thank Gordon Hanson for providing these data.
19.
This table only uses the full history sample because I do not have information on marital status at each point in time for the partial history sample.
20. I do not include married women with a spouse in Mexico in the sample, since their migration rates are close to zero.
21. Columns (2)–(4) control for marital status, and therefore only include data from the full history sample, since I do not know marital status at each point in time in the partial history data.
22. This regression does not include married women whose spouse is living in Mexico, since I dropped the rare cases where the woman was in the U.S. while the man was in Mexico. This term would not be identified in the regression because this group has zero migration rates.
23. Column (1) uses the full and partial history samples, whereas the other columns only use the full history sample.
24. When I use all of the MMP data, this number is even higher. This is because the estimation sample is quite young, since I only consider people who are aged 17 or younger in 1980, so I am dropping the older respondents who were likely to have moved more times.
25. The empirical trends follow what is normally found in the internal migration literature. For example, see Greenwood (1997).
26. The sectors are San Diego and El Centro in California; Yuma and Tucson in Arizona; El Paso in New Mexico; and Marfa, Del Rio, Laredo, and the Rio Grande Valley in Texas.
27. The data report the levels of patrol on a monthly basis. This graph shows the average for each year. This graph shows seven lines, instead of one line for each of the nine sectors, because in two cases, I combined two sectors that have low activity.
28. In 1993, Operation Hold the Line increased enforcement at El Paso. There was a large growth in enforcement in 1994 in San Diego due to Operation Gatekeeper. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 also allocated more resources to border enforcement.
29.
One concern could be that the border patrol hours are not adequately controlling for the levels of enforcement, as there are other mechanisms that the U.S. government uses to monitor the border. Technology such as stadium lighting, infrared cameras, and ground sensors is used to aid border patrol agents. However, border patrol hours are highly correlated with total expenditures on border patrol.
30. There are 32 Mexican states. Grouping nearby states into 24 locations allows me to speed up computation substantially. Table A3 in Online Appendix A shows sample sizes and which states were combined in the estimation.
31. I assume that once a person moves to the U.S., he cannot move to a new location in the U.S., and can only choose between his current location and all locations in Mexico. This assumption is made because the data show very little movement across U.S. locations.
32. Del Rio was combined with Marfa, and Yuma was combined with El Centro.
33. Ideally, I would model the labour supply decisions of women. However, the MMP does not provide yearly labour force decisions, so this is not possible. The MMP provides some wage information. Therefore, if I observe a wage for a woman in the sample, I know she is a worker type. For the others, I have to integrate over these probabilities.
34. There is limited wage information when people are in Mexico. This covers the "last domestic wage" as well as wages for internal migrations in Mexico. However, these wages are often hard to match to specific points in time, and due to severe fluctuations in the Mexican economy, this often leads to imprecise estimates.
35. These are one-period shocks that do not persist.
36. Munshi (2003) finds that a Mexican immigrant living in the U.S. is more likely to be employed and to hold a higher paying non-agricultural job if his network is larger.
This variable is only available for illegal immigrants, because it is not in the CPS data, which are partially used in the estimation of the wage process for legal immigrants.
37. These are estimated in a first stage using CPS data due to the small sample sizes in the MMP. For illegal wages, there is just an overall time trend, and for legal wages, the time trends are in the returns to education.
38. This assumes that all individuals from Mexico in the CPS are legal immigrants. I do not know legal status in the CPS, but assume that all respondents are legal since illegal immigrants should be hesitant to participate in government surveys.
39. When a person is moving to the U.S. illegally, I calculate the distance from a state in Mexico to a border crossing point plus the distance from the border crossing point to the location in the U.S.
40. An alternative specification is to scale the number of payoff shocks according to the population size at the destination.
41. Massey and Espinosa (1997), Curran and Rivero-Fuentes (2003), and Colussi (2006) find evidence that networks affect immigration decisions.
42. I thank Craig McIntosh for providing the railroad data.
43. For years past 2004, I assume that people expect enforcement to remain constant in future periods.
44. Keeping the location of the spouse in the state space would mean that the state space has $$28^2$$ elements regarding location, for a person's location as well as his spouse's, which would be quite slow to compute. Using country instead of location allows me to capture the empirical trends of interest.
45. I only include married couples as part of the same household, for in this case the likelihood is at the household level due to the joint nature of migration decisions in the model. Many households start as unmarried but become married in a future period.
In the estimation, I calculate the likelihood at the household level, where each person makes decisions as a single agent before getting married, and then the couple's decisions relate to one another once married.
46. The parameters are fixed so that the total probability that a man or a woman is each type is set at 1/3. This gives a system of equations for the match probabilities. Even though there are nine parameters, I only have to estimate four, and then the remainder are pinned down.
47. Online Appendix E discusses computational issues.
48. This was rejected using a significance level of 0.01.
49. The estimated parameters are for a static distribution, but the wages do change over time. The time trends in wages are estimated in a first stage and inputted into the model. For illegal wages, there is a constant time trend. For legal wages, the time trend depends on education. The state fixed effects are also estimated in a first stage.
50. This could be explained by climate, in that the weather in Illinois is much colder than in Texas or California.
51. An alternative way to control for this is to scale the number of payoff shocks by the population size at the destination.
52. One possible solution would be to allow for a completely different set of parameters for legal immigrants. I chose not to do this due to the computation time required to estimate even more parameters. In addition, the counterfactuals focus on illegal immigrants, so it is less important to get a good fit of the data for legal migrants.
53. This counterfactual is limited because there is no unobserved heterogeneity over Mexican wages in the model.
54. LaLonde and Topel (1997), Smith and Edmonston (1997), and Borjas (1999) find a weak correlation between immigration inflows and wage changes for low-skilled U.S. workers. Borjas et al. (1997) find larger effects.

REFERENCES

ADDA J., DUSTMANN C. and GORLACH J.-S.
(2015), "The Dynamics of Return Migration, Human Capital Accumulation, and Wage Assimilation" (Working Paper).
ANGELUCCI M. (2012), "U.S. Border Enforcement and the Net Flow of Mexican Illegal Migration", Economic Development and Cultural Change, 60, 311–357.
BLEJER M., JOHNSON H. and PORZECANSKI A. (1978), "An Analysis of the Economic Determinants of Legal and Illegal Mexican Migration to the United States", in Simon J. (ed.), Research in Population Economics: An Annual Compilation of Research (Greenwich, CT: JAI Press) 217–231.
BOHN S. and PUGATCH T. (2015), "U.S. Border Enforcement and Mexican Immigrant Location Choice", Demography, 52, 1543–1570.
BORJAS G. (1987), "Self-Selection and the Earnings of Immigrants", The American Economic Review, 77, 531–553.
BORJAS G. (1999), "The Economic Analysis of Immigration", in Ashenfelter O. C. and Card D. (eds), Handbook of Labor Economics (North Holland: Elsevier) 1697–1760.
BORJAS G. J., FREEMAN R. B. and KATZ L. F. (1997), "How Much do Immigration and Trade Affect Labor Market Outcomes?", Brookings Papers on Economic Activity, 1, 1–90.
CERRUTTI M. and MASSEY D. S. (2001), "On the Auspices of Female Migration from Mexico to the United States", Demography, 38, 187–200.
CHIQUIAR D. and HANSON G. (2005), "International Migration, Self-Selection, and the Distribution of Wages: Evidence from Mexico and the United States", Journal of Political Economy, 113, 239–281.
COLUSSI A. (2006), "Migrants' Networks: An Estimable Model of Illegal Mexican Migration" (Working Paper).
CORNELIUS W. (1989), "Impacts of the 1986 U.S. Immigration Law on Emigration from Rural Mexican Sending Communities", Population and Development Review, 15, 689–705.
CRANE K., ASCH B., HEILBRUNN J. Z. et al. (1990), "The Effect of Employer Sanctions on the Flow of Undocumented Immigrants to the United States" (Discussion paper, Program for Research on Immigration Policy, the RAND Corporation (Report JRI-03) and the Urban Institute (UR Report 90-8)).
CURRAN S. R. and RIVERO-FUENTES E. (2003), "Engendering Migrant Networks: The Case of Mexican Migration", Demography, 40, 289–307.
DONATO K., DURAND J. and MASSEY D. (1992), "Stemming the Tide? Assessing the Deterrent Effect of the Immigration Reform and Control Act", Demography, 29, 139–157.
DURAND J., MASSEY D. and ZENTENO R. (2001), "Mexican Immigration to the United States: Continuities and Changes", Latin American Research Review, 36, 107–127.
ESPENSHADE T. (1990), "Undocumented Migration to the United States: Evidence from a Repeated Trials Model", in Bean F., Edmonston B. and Passel J. (eds), Undocumented Migration to the United States: IRCA and the Experience of the 1980s (Washington, DC: Urban Institute) 159–182.
ESPENSHADE T. (1994), "Does the Threat of Border Apprehension Deter Undocumented U.S. Immigration?", Population and Development Review, 20, 871–892.
GATHMANN C. (2008), "Effects of Enforcement on Illegal Markets: Evidence from Migrant Smuggling Across the Southwestern Border", Journal of Public Economics, 92, 1926–1941.
GEMICI A. (2011), "Family Migration and Labor Market Outcomes" (Working Paper).
GREENWOOD M. J. (1997), "Internal Migration in Developed Countries", in Handbook of Population and Family Economics, 647–720.
HANSON G. (2005), Why Does Immigration Divide America: Public Finance and Political Opposition to Open Borders (Washington, DC: Institute for International Economics).
HANSON G. (2006), "Illegal Migration from Mexico to the United States", Journal of Economic Literature, 44, 869–924.
HANSON G. and SPILIMBERGO A. (1999), "Illegal Immigration, Border Enforcement, and Relative Wages: Evidence from Apprehensions at the U.S.-Mexico Border", American Economic Review, 89, 1337–1357.
HONG G. (2010), "U.S. and Domestic Migration Decisions of Mexican Workers" (Working Paper).
IBARRARAN P. and LUBOTSKY D. (2005), "Mexican Immigration and Self-Selection: New Evidence from the 2000 Mexican Census" (NBER Working Paper No. 11456).
KENNAN J. (2013), "Open Borders", Review of Economic Dynamics, 16, L1–L13.
KENNAN J. and WALKER J. (2011), "The Effect of Expected Income on Individual Migration Decisions", Econometrica, 79, 211–251.
KOSSOUDJI S. (1992), "Playing Cat and Mouse at the U.S.-Mexican Border", Demography, 29, 159–180.
KOSSOUDJI S. and COBB-CLARK D. (2000), "IRCA's Impact on the Occupational Concentration and Mobility of Newly-Legalized Mexican Men", Journal of Population Economics, 13, 81–98.
KOSSOUDJI S. and COBB-CLARK D. (2002), "Coming out of the Shadows: Learning about Legal Status and Wages from the Legalized Population", Journal of Labor Economics, 20(3), 598–628.
KROGSTAD J. M., PASSEL J. and COHEN D. (2017), "5 Facts about Illegal Immigration in the U.S." (Pew Research Center).
LACUESTA A. (2006), "Emigration and Human Capital: Who Leaves, Who Comes Back, and What Difference Does it Make?", Documentos de Trabajo No. 0620, Banco de España.
LALONDE R. and TOPEL R. (1997), "Economic Impact of International Migration and Migrants", in Rosenzweig M. R. and Stark O.
(eds), Handbook of Population and Family Economics (Elsevier Science) 799–850.
LESSEM R. and NAKAJIMA K. (2015), "Immigrant Wages and Recessions: Evidence from Undocumented Mexicans" (Working Paper).
LINDSTROM D. P. (1996), "Economic Opportunity in Mexico and Return Migration from the United States", Demography, 33(3), 357–374.
MASKIN E. and TIROLE J. (1988), "A Theory of Dynamic Oligopoly, I: Overview and Quantity Competition with Large Fixed Costs", Econometrica, 56, 549–569.
MASSEY D. S. (2007), "Understanding America's Immigration 'Crisis'", Proceedings of the American Philosophical Society, 151(3), 309–327.
MASSEY D. S. and ESPINOSA K. E. (1997), "What's Driving Mexico-U.S. Migration? A Theoretical, Empirical, and Policy Analysis", The American Journal of Sociology, 102, 939–999.
MCFADDEN D. (1973), "Conditional Logit Analysis of Qualitative Choice Behavior", in Zarembka P. (ed.), Frontiers in Econometrics (Academic Press).
MEXICAN MIGRATION PROJECT (2011), "MMP128", mmp.opr.princeton.edu.
MUNSHI K. (2003), "Networks in the Modern Economy: Mexican Migrants in the U.S. Labor Market", The Quarterly Journal of Economics, 118, 549–599.
ORRENIUS P. and ZAVODNY M. (2005), "Self-Selection among Undocumented Immigrants from Mexico", Journal of Development Economics, 78, 215–240.
PASSEL J., BEAN F. and EDMONSTON B. (1990), "Undocumented Migration since IRCA: An Overall Assessment", in Bean F., Edmonston B. and Passel J. (eds), Undocumented Migration to the United States: IRCA and the Experience of the 1980s (Washington, DC: Urban Institute) 251–265.
RENDÓN S. and CUECUECHA A. (2010), "International Job Search: Mexicans in and out of the U.S.", Review of Economics of the Household, 8, 53–82.
Google Scholar Crossref Search ADS REYES B. I. and MAMEESH L. ( 2002 ), "Why Does Immigrant Trip Duration Vary Across U.S. Destinations?" , Social Science Quarterly , 83 , 580 – 593 . Google Scholar Crossref Search ADS RUGGLES S. , ALEXANDER J. T. , GENADEK K. ( 2010 ), "Intergrated Public Use Microdata Series: Version 5.0" ( Machinereadable database, Minneapolis : University of Minnesota ). RUST J. ( 1987 ), "Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher" , Econometrica , 55 , 999 – 1033 . Google Scholar Crossref Search ADS SMITH J. and EDMONSTON B. ( 1997 ), The New Americans: Economic, Demographic and Fiscal Effects of Immigration ( Washington, DC : National Academy Press ). THOM K. ( 2010 ), "Repeated Circular Migration: Theory and Evidence from Undocumented Migrants" (Working Paper) . © The Author(s) 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png The Review of Economic Studies Oxford University Press http://www.deepdyve.com/lp/oxford-university-press/mexico-u-s-immigration-effects-of-wages-and-border-enforcement-3QJZkMW0Ru Rebecca, Lessem, The Review of Economic Studies , Volume 85 (4) – Oct 1, 2018 /lp/ou_press/mexico-u-s-immigration-effects-of-wages-and-border-enforcement-3QJZkMW0Ru The Review of Economic Studies / Economics, Econometrics and Finance / Economics and Econometrics © The Author(s) 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. 10.1093/restud/rdx078 Abstract In this article, I study how relative wages and border enforcement affect immigration from Mexico to the U.S. 
To do this, I develop a discrete choice dynamic programming model where people choose from a set of locations in both the U.S. and Mexico, while accounting for the location of one's spouse when making decisions. I estimate the model using data on individual immigration decisions from the Mexican Migration Project. Counterfactuals show that a 10% increase in Mexican wages reduces migration rates and durations, overall decreasing the number of years spent in the U.S. by about 5%. A 50% increase in enforcement reduces migration rates and increases durations of stay in the U.S., and the overall effect is a 7% decrease in the number of years spent in the U.S. 1. Introduction Approximately 11 million Mexican immigrants were living illegally in the U.S. in 2015 (Krogstad et al., 2017). This large migrant community affects the economies of both countries. For example, migrants send remittances back home, which support development in Mexico.1 In the U.S., concern about illegal immigration affects political debate and policy. Border enforcement has been increasing since the mid-1980s, and it grew by a factor of 13 between 1986 and 2002 (Massey, 2007). This was a major issue in the 2016 presidential election, where President Trump campaigned on the promise of a wall between the two countries to cut down on illegal immigration. Despite these large concerns about illegal immigration from Mexico, much about the individual decisions and mechanisms remains poorly understood. In this article, I study how wage differentials and U.S. border enforcement affect an individual's immigration decisions. Given the common pattern of repeat and return migration in the data, changes in policy affect both current and future decisions. For example, increased enforcement not only reduces initial migration rates, but also increases the duration of stay in the U.S. by making it more costly for people to come back to the U.S. after returning home. 
To capture such intertemporal effects, I analyse this problem in a dynamic setting where people choose from multiple locations each period, following Kennan and Walker (2011).2 The model extends Kennan and Walker's (2011) framework in two dimensions. First, I allow for moves across an international border, where people choose from a set of locations which includes states in both the U.S. and Mexico, necessitating different treatment of illegal and legal immigration. By observing individual legal status, where illegal immigrants crossed the border, and U.S. border enforcement, which varies across locations and time, I can capture various trade-offs of immigration decisions. Secondly, I allow for interactions within the decisions of husbands and wives. The data show that this is important, in that 5.7% of women with a husband in the U.S. move each year, compared to an overall female migration rate of 0.6%, suggesting a positive utility of living in the same place.3 Therefore, a married man living in the U.S. alone will consider the likelihood that his wife will join him, which is endogenous given that she also makes active decisions. This affects reactions to the policy environment. For example, as enforcement increases, a married man living in the U.S. alone knows that his wife is less likely to join him, giving him an extra incentive to return to Mexico. To capture these types of mechanisms, we need a model that allows for interactions within married couples. The most similar paper on Mexico–U.S. immigration is Thom (2010), which estimates a dynamic migration model where men choose which country to live in, focusing on savings decisions as an incentive for repeat and return migration.4 In comparison, in my model, people choose from multiple locations in both countries, allowing for both internal and international migration.
I also allow for a relationship between the decisions of married couples, enabling me to study how family interactions affect the counterfactual outcomes. Gemici (2011) studies family migration by estimating a dynamic model of migration decisions with intra-household bargaining using U.S. data. In her model, married couples make a joint decision on where to live together, whereas the data from Mexico show that couples often live in different locations. In this article, I estimate a discrete-choice dynamic programming model where individuals choose from a set of locations in Mexico and the U.S. in each period. Individuals' choices depend on the location of their spouse. To make this computationally feasible, I model household decisions in a sequential process: first, the household head picks a location, and then the spouse decides where to live. The model differentiates between legal and illegal immigrants, who face different moving costs and a different wage distribution in the U.S.5 Border enforcement, measured as the number of person-hours spent patrolling the border, affects the moving cost only for illegal immigrants. To evaluate the effectiveness of border enforcement, I use a new identification strategy, which accounts for the variation in the allocation of enforcement resources along the border and over time. In the model, individuals who move to the U.S. illegally also choose where to cross the border. The data show that as enforcement at the main crossing point increased, migrants shifted their behaviour and crossed at alternate points.6 Past work, which for the most part uses aggregate enforcement levels, misses this component of the effect of increased border patrol on immigration decisions. I estimate the model using data on individual immigration decisions from the Mexican Migration Project (MMP). 
I use the estimated model to perform several counterfactuals, finding that increases in Mexican wages decrease both immigration rates and the duration of stays in the U.S. A 10% increase in Mexican wages reduces the average number of years that a person lives in the U.S. by about 5%. Estimation of a dynamic model captures mechanisms that could not be studied in a static model. As enforcement increases, fewer people move, but those that do are more reluctant to return home, knowing that it will be harder to re-enter the U.S. in the future. This increases the duration of stays in the U.S. Policy changes also have differential effects with marital status. As enforcement increases, it becomes harder for women to join their husbands in the U.S., giving married men an extra incentive to return home, and thereby pushing their migration durations downwards. I hold female migration rates constant in the counterfactual to isolate this effect, and then see an even larger increase in men's durations of stay in the U.S. Overall, simulations show that a 50% increase in enforcement, distributed uniformly along the border, reduces the average amount of time that an individual in the sample spends in the U.S. over a lifetime by approximately 3%. If total enforcement increased by 50%, not uniformly but instead concentrated at the points along the border where it would have the largest effect, the number of years spent in the U.S. per person would decrease by about 7%. Following U.S. policy changes in the 1990s, most new resources were allocated to certain points along the border, and this research suggests that this is the optimal policy from the perspective of reducing illegal immigration rates. The remainder of the article is organized as follows. Section 2 reviews the literature, and Section 3 explains the model. Section 4 details the data, and Section 5 provides descriptive statistics. The estimation is explained in Section 6, and the results are in Section 7. 
The counterfactuals are in Section 8, and Section 9 concludes the article. 2. Related Literature Wages are understood to be the main driving force behind immigration from Mexico to the U.S. Hanson and Spilimbergo (1999) find that an increase in U.S. wages relative to Mexican wages positively affects apprehensions at the border, implying that more people attempted to move illegally. Rendón and Cuecuecha (2010) estimate a model of job search, savings, and migration, finding that migration and return migration depend not only on wage differentials, but also on job turnover and job-to-job transitions. In my model, the value of a location depends on expected earnings there, allowing for wage differentials to affect migration decisions. I can quantify how responsive migration decisions are to changes in the wage distribution. To estimate the effect of border enforcement on immigration decisions, some research uses the structural break caused by the 1986 Immigration Reform and Control Act (IRCA), one of the first policies aimed at decreasing illegal immigration. This law increased border enforcement and legalized many illegal immigrants living in the U.S. Espenshade (1990, 1994) finds that there was a decline in apprehensions at the U.S. border in the year after IRCA was implemented, but no lasting effect. Using survey data from communities in Mexico, Cornelius (1989) and Donato et al. (1992) find that IRCA had little or no effect on illegal immigration. After the implementation of IRCA, there was a steady increase in border enforcement over time. Hanson and Spilimbergo (1999) find that increased enforcement led to a greater number of apprehensions at the border. This provides one mechanism for increased enforcement to affect moving costs, as immigrants may have to make a greater number of attempts to successfully cross the border. Changes in enforcement can affect not only initial but also return migration decisions, and some of the past literature has looked at this. 
Angelucci (2012), using the MMP data, finds that border enforcement affects initial and return migration rates. Her reduced-form framework permits analysis of initial and return migration decisions separately. By estimating a structural model, I can perform counterfactual analyses to calculate the net effect of changes in enforcement on illegal immigration. The model in this article allows for an individual's characteristics to affect migration decisions. Past literature has studied this, mostly in a static setting, to understand what factors are important. I build on this work by including the relevant characteristics found to impact migration decisions in my dynamic setting. There is a large literature on the selection of migrants, starting with the theoretical model in Borjas (1987), which predicts that migrants will be negatively selected. This is empirically supported in Ibarraran and Lubotsky (2005). However, Chiquiar and Hanson (2005) find that Mexican immigrants in the U.S. are more educated than non-migrants in Mexico. They find evidence of intermediate selection of immigrants, as do Lacuesta (2006) and Orrenius and Zavodny (2005). Past work also looks at the determinants of the duration of stays in the U.S.; for example, see Reyes and Mameesh (2002), Massey and Espinosa (1997), and Lindstrom (1996). 3. Model The basic structure of the model follows Kennan and Walker (2011), where each person chooses where to live each period. The value of living in a location depends on the expected wages there, as well as the cost of moving. Since the model is dynamic, individuals also consider the value of being in each location in future periods. At the start of a period, each person sees a set of payoff shocks to living in each location, and then chooses the location with the highest total valuation. The shocks are random, independent and identically distributed (i.i.d.) across locations and time, and unobserved by the econometrician.
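This choice rule can be sketched in a few lines of code: draw one payoff shock per location, add it to that location's deterministic value, and take the argmax. Under the type I extreme value (Gumbel) assumption the model adopts, the simulated choice frequencies converge to logit probabilities. The location labels and payoff values below are purely illustrative, not estimates from the paper.

```python
import math
import random

random.seed(0)

def gumbel():
    # Standard type I extreme value (Gumbel) draw via the inverse CDF.
    return -math.log(-math.log(random.random()))

# Hypothetical deterministic payoffs v_j for three locations.
v = {"home_state": 1.0, "other_mx_state": 0.2, "us_state": 0.7}

# Each period the agent picks argmax_j v_j + eta_j.
counts = {j: 0 for j in v}
n = 200_000
for _ in range(n):
    chosen = max(v, key=lambda j: v[j] + gumbel())
    counts[chosen] += 1

# Logit prediction implied by the Gumbel assumption:
# P(j) = exp(v_j) / sum_k exp(v_k).
denom = sum(math.exp(x) for x in v.values())
for j in v:
    print(f"{j}: simulated {counts[j] / n:.3f}, "
          f"logit {math.exp(v[j]) / denom:.3f}")
```

With 200,000 simulated periods, the empirical frequencies match the logit formula to about two decimal places.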
I assume that the payoff shocks follow a type I extreme value distribution, and solve the model following McFadden (1973) and Rust (1987). I assume a finite horizon, so the model can be solved using backward induction. The model extends Kennan and Walker's (2011) framework in two dimensions: (1) allowing for moves across an international border, which necessitates different treatment of illegal and legal immigration, and (2) modelling the interactions within married couples. The model includes elements to account for the fact that people are moving across an international border, which differs from domestic migration in a couple of important ways. When deciding where to live, people choose from a set of locations, defined as states, in both the U.S. and Mexico. Migration decisions are substantially affected by whether or not people can move to the U.S. legally, and to account for this, the model differentiates between legal and illegal migrants. Legal immigration status is assumed to be exogenous to the model, and people can transition to legal status in future periods. Legal immigration status affects wage offers in the U.S., since we expect that legal immigrants will have access to better job opportunities in the U.S. labour market. In addition, U.S. border enforcement only affects the moving costs for illegal immigrants. I assume that all people who choose to move to the U.S. illegally are successful, so the effects of increased enforcement come only through the increased moving cost.7 This cost reflects an increased cost of hiring a smuggler (Gathmann, 2008) or an increase in the expected number of attempts before successfully crossing. Illegal immigrants moving to the U.S.
choose both a location and a border crossing point, where the cost of moving varies at each crossing point due to differences in the fixed costs and enforcement levels at each point.8 In this article, I also extend Kennan and Walker's (2011) framework by allowing the decisions of married individuals to depend on where their spouse is living. Decisions are made individually, but utility depends on whether a person is in the same location as his spouse. Since individuals' decisions are related, this is a game between the husband and wife. I solve for a Markov perfect equilibrium (Maskin and Tirole, 1988). I make some assumptions on the timing of decisions to ensure that there is only one equilibrium. For each household, I define a primary and a secondary mover, which empirically is the husband and wife, respectively. In each period, the primary mover picks a location first, so he does not know his spouse's location when he makes this choice. After the primary mover makes a decision, the secondary mover learns her payoff shocks and decides where to live.9 This setup allows people to make migration decisions that are affected by the location of their spouse. Single people's decisions are not affected by a spouse, but they can transition over marital status in future periods, and therefore know that at some point they could have utility differentials based on their spouse's location. In the remainder of this section, I describe a model without any unobserved heterogeneity. In the estimation, there will be three sources of unobserved heterogeneity, over (1) moving costs, (2) wages in the U.S., and (3) whether or not women choose to participate in the labour market. This is explained in more detail when I discuss the estimation in Section 6. 3.1. Model setup 3.1.1. Primary and secondary movers I solve separate value functions for primary and secondary movers, denoted with superscripts 1 and 2, respectively.
In the empirical implementation, men are the primary movers, and women are the secondary movers. A married person's decisions depend on the location of his spouse, whose characteristics I denote with the superscript $$s$$. Single men and women make decisions as individuals, but know that they could become married in future periods. I account for these differences by keeping track of marital status $$m_t$$, where $$m_t=1$$ is a married person and $$m_t=2$$ is a single person. 3.1.2. State variables People learn their legal status at the start of each period. I assume that once a person is able to immigrate legally, this option remains with that person forever. I use $$z_t$$ to indicate whether or not a person can move to the U.S. legally, where $$z_t=1$$ means a person can move to the U.S. legally and $$z_t=2$$ means that he cannot. State variables also include a person's location in the previous period ($$\ell_{t-1}$$), their characteristics $$X_t$$, and their marital status $$m_t$$. When a married secondary mover picks a location, the primary mover has already chosen where to live in that period, so the location of the spouse ($$\ell_t^s$$) is known and is part of the state space. For the primary mover, who makes the first decision, the location of the spouse in the previous period ($$\ell_{t-1}^s$$) is part of the state space. The characteristics and legal status of one's spouse ($$X_t^s$$ and $$z_t^s$$) are also part of the state space. To simplify notation, denote $$\Delta_t$$ as the characteristics and legal status of an individual and his spouse, so $$\Delta_t=\{X_t,z_t,X^s_t,z^s_t\}$$. 3.1.3. Choice set Denote the set of locations in the U.S. as $$J_{U}$$, those in Mexico as $$J_{M}$$, and the set of border crossing points as $$C$$. If moving to the U.S. illegally, a person has to pick both a location and a border crossing point. 
Denote the choice set as $$J(\ell_{t-1},z_t)$$, where \begin{eqnarray} J(\ell_{t-1},z_t)=\left\{ \begin{array}{ll} J_M\cup (J_U\times C) & \text{if } \ell_{t-1}\in J_M \text{ and } z_t=2\\[3pt] J_M\cup J_U & \text{otherwise.} \end{array}\right. \end{eqnarray} (1) 3.1.4. Payoff shocks I denote the set of payoff shocks at time $$t$$ as $$\eta_t=\{\eta_{jt}\}$$, where $$j$$ indexes locations. I assume that these follow an extreme value type I distribution. 3.1.5. Utility The utility flow depends on a person's location $$j$$, characteristics $$X_t$$, legal status $$z_t$$, marital status $$m_t$$, and spouse's location $$\ell_t^s$$, and it is written as $$u(j,X_t,z_t,m_t,\ell_t^s)$$. This allows for utility to depend on wages, which are a function of a person's characteristics and location. Utility also depends on whether or not a person is at his home location, and increases for married couples who are living in the same place. 3.1.6. Moving costs The moving cost depends on which locations a person is moving between, and that person's characteristics and legal status. I denote the cost of moving from location $$\ell_{t-1}$$ to location $$j$$ as $$c_t(\ell_{t-1},j,X_t,z_t)$$. The moving cost is normalized to zero if staying at the same location. 3.1.7. Transition probabilities There are transitions over legal status, spouse's location for married couples, and marital status for people who are single.10 The primary mover is uncertain of his spouse's location in the current period. For example, if he moves to the U.S., he is not sure whether or not his wife will follow. The secondary mover knows her spouse's location in the current period, but is unsure of her spouse's location in the next period. For example, she may move to the U.S. to join her husband, but does not know whether or not he will remain there in the next period. Single people can get married in future periods. Furthermore, if someone gets married, he does not know where his new spouse will be living. 
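The choice set in equation (1) translates directly into code. A minimal sketch, using illustrative location and crossing-point labels (the paper's actual sets are Mexican states, U.S. states, and observed border crossings):

```python
# Illustrative sets; in the paper these are Mexican states, U.S.
# states, and the observed border crossing points.
J_M = ["jalisco", "guanajuato"]      # locations in Mexico
J_U = ["california", "texas"]        # locations in the U.S.
C = ["tijuana", "ciudad_juarez"]     # border crossing points

def choice_set(prev_loc, legal):
    """Equation (1): a migrant in Mexico who cannot move legally
    (z_t = 2) must choose a U.S. location jointly with a crossing
    point; everyone else chooses a location directly."""
    if prev_loc in J_M and not legal:
        return J_M + [(j, c) for j in J_U for c in C]
    return J_M + J_U

# An illegal migrant in Mexico faces |J_M| + |J_U| x |C| options;
# anyone already in the U.S., or with legal status, faces J_M + J_U.
print(choice_set("jalisco", legal=False))
print(choice_set("california", legal=False))
```

Note that once in the U.S., even an illegal immigrant chooses from plain locations, since no border crossing is required for internal moves or a return to Mexico.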
Marrying someone who is living in the U.S. will affect decisions differently than marrying someone who is in Mexico. For the primary mover, denote the probability of being in the state with legal status $$z_{t+1}$$, marital status $$m_{t+1}$$, and having a spouse in location $$\ell^s_{t}$$ in this period as $$\rho^1_{t}(z_{t+1},m_{t+1},\ell^s_{t}| j,\Delta_t,m_t,\ell_{t-1}^s)$$. This depends on his location $$j$$, his characteristics, as well as his marital status and his spouse's previous-period location (if married). For the secondary mover, the transition probability is written as $$\rho^2_{t}(z_{t+1},m_{t+1},\ell^s_{t+1}| j,\Delta_t,m_{t},\ell_{t}^s)$$. 3.2. Value function In this section, I derive the value functions for primary and secondary movers. Because the problem is solved by backward induction and the secondary mover makes the last decision, it is logical to start with the secondary mover's problem. 3.2.1. Secondary movers The secondary mover's state space includes her previous-period location, her characteristics and those of her spouse, her marital status, and the location of her spouse. After seeing her payoff shocks, she chooses the location with the highest value: \begin{equation} V^2_t(\ell_{t-1},\Delta_t,m_t,\ell_t^s, \eta_t)= \max_{j\in J(\ell_{t-1},z_t)} v^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s) +\eta_{jt}. \label{eqn:VF2} \end{equation} (2) The value of living in each location has a deterministic and a random component ($$v_t^2(\cdot)$$ and $$\eta_t$$, respectively). The deterministic component of living in a location consists of the flow payoff plus the discounted expected value of living there at the start of the next period: \begin{eqnarray} v^2_{t}(\cdot)&=& \tilde{v}^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)+ \beta \sum_{z_{t+1},m_{t+1},\ell^s_{t+1}}\Big( \rho^2_{t}( z_{t+1},m_{t+1},\ell^s_{t+1}|j,\Delta_t,m_t,\ell_t^s)\notag\\ &\times& E_{\eta}\left[V^2_{t+1}(j,\Delta_{t+1},m_{t+1}, \ell_{t+1}^s, \eta_{t+1})\right]\Big). 
\label{eqn:deterministic} \end{eqnarray} (3) The flow payoff of living in location $$j$$, denoted as $$\tilde{v}_t(\cdot)$$, consists of utility net of moving costs, and is defined as \begin{equation} \tilde{v}^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)= u(j,X_t,z_t,m_t,\ell_t^s)-c_t(\ell_{t-1},j,X_t,z_t). \label{eqn:flow} \end{equation} (4) The second part of the deterministic component in equation (3) is the expected future value of living in a location. The transition probabilities, written as $$\rho^2(\cdot)$$, are over legal status, marital status, and location of primary mover. I integrate out the future payoff shocks using the properties of the extreme value distribution, following McFadden (1973) and Rust (1987). For a given legal status, marital status, and location of primary mover, the expected continuation value is given by \begin{eqnarray} &&E_{\eta}\left[V^2_{t+1} (j,\Delta_{t+1},m_{t+1},\ell^s_{t+1}, \eta_{t+1}) \right]\notag\\ &&=E_{\eta}\left[\max_{k\in J(j,z_{t+1})}v^2_{t+1} (k,j,\Delta_{t+1},m_{t+1},\ell_{t+1}^s) +\eta_{k,t+1}\right]\notag\\ &&=\log\left( \sum_{k\in J(j,z_{t+1})} \exp \Big(v_{t+1}^2(k,j,\Delta_{t+1},m_{t+1},\ell^s_{t+1} )\Big) \right)+\gamma \text{ ,} \label{eqn:expect2} \end{eqnarray} (5) where $$\gamma$$ is Euler's constant ($$\gamma\approx 0.58$$). I calculate the probability that a person will choose location $$j$$ at time $$t$$, which will be used for two purposes. First, this is the choice probability, necessary to calculate the likelihood function. Secondly, the choice probability is used to calculate the transition probabilities for the primary mover, who is concerned with the probability that his spouse lives in a given location in this period. I assume that he has all of the same information as the secondary mover, but since the primary mover makes the first decision, the secondary mover's payoff shocks have not yet been realized, so I can only calculate the probability that the secondary mover will make a given decision. 
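The closed form in equation (5) can be checked numerically: for i.i.d. standard Gumbel shocks, the expected maximum of $$v_k+\eta_k$$ equals $$\log\sum_k e^{v_k}+\gamma$$. A small Monte Carlo sketch with arbitrary values $$v_k$$:

```python
import math
import random

random.seed(1)
GAMMA = 0.5772156649  # Euler's constant

def gumbel():
    # Standard type I extreme value (Gumbel) draw via the inverse CDF.
    return -math.log(-math.log(random.random()))

v = [0.5, -0.3, 1.2, 0.0]  # arbitrary deterministic values v_k

# Closed form from equation (5): E[max_k v_k + eta_k]
# = log(sum_k exp(v_k)) + gamma.
closed = math.log(sum(math.exp(x) for x in v)) + GAMMA

# Monte Carlo estimate of the same expectation.
n = 200_000
mc = sum(max(x + gumbel() for x in v) for _ in range(n)) / n

print(f"closed form {closed:.3f}, Monte Carlo {mc:.3f}")
```

The two agree to roughly two decimal places at this sample size, which is why the continuation values can be computed exactly with a log-sum-exp rather than by simulation.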
Since I assume that the payoff shocks are distributed with an extreme value distribution, the choice probabilities take a logit form, again following McFadden (1973) and Rust (1987). The probability that a person picks location $$j$$ is given by the following formula: \begin{equation} P_t^2(j|\ell_{t-1},\Delta_t,m_t,\ell_t^s)= \frac{\exp\left(v^2_t(j,\ell_{t-1},\Delta_t,m_t,\ell_t^s)\right)} {\sum_{k\in J(\ell_{t-1},z_{t})} \exp\Big(v^2_t(k,\ell_{t-1},\Delta_t,m_t,\ell_t^s)\Big)}\text{ .}\label{eqn:secondary_prob} \end{equation} (6) 3.2.2. Primary movers I define the value function for the primary mover as follows: \begin{equation} V^1_t(\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s, \eta_t)= \max_{j\in J(\ell_{t-1},z_t)} v^1_t(j,\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s) +\eta_{jt} \text{ .} \label{eqn:VF1} \end{equation} (7) In comparison with the secondary mover, the primary mover does not know where his spouse is living in this period, and only knows her previous-period location $$\ell_{t-1}^s$$. As before, the deterministic component of living in a location includes the flow utility and the expected continuation value. However, in this case, I do not know the exact flow utility, since the secondary mover's location has not been determined. I instead calculate the expected flow utility: \begin{eqnarray} &&E_{\ell_t^s}\left[\tilde{v}_t(j, \ell_{t-1},\Delta_t,m_t,\ell_t^s )|\ell_{t-1}^s\right]=\notag\\&& \sum_{k\in J(\ell_{t-1},z_t)} P_t^2(k|\ell_{t-1}^s,\Delta_t^s,m_t^s,j) u(j,X_t,z_t,m_t,k) -c(\ell_{t-1},j,X_t,z_t) \text{ .} \label{eqn:EU} \end{eqnarray} (8) This is calculated using the probability $$P^2_t(\cdot)$$ that the secondary mover will pick a given location, defined in equation (6). 
Denoting the transition probabilities as $$\rho^1(\cdot)$$, I can write the deterministic component of living in a location as: \begin{eqnarray} v^1_{t}(\cdot)&=& E_{\ell_t^s}\left[\tilde{v}_t( j,\ell_{t-1},\Delta_t,m_t,\ell_t^s )|\ell_{t-1}^s\right] + \beta \sum_{z_{t+1},m_{t+1},\ell^s_{t}}\Big(\rho^1_{t}( z_{t+1},m_{t+1},\ell^s_{t}|j,\Delta_t, m_t, \ell_{t-1}^s)\notag \\ &\times& E_{\eta}\left[V^1_{t+1}(j,\Delta_{t+1} ,m_{t+1},\ell_{t}^s, \eta_{t+1})\right]\Big). \end{eqnarray} (9) For a given state, the continuation value is calculated by integrating over the distribution of future payoff shocks: \begin{eqnarray} &&E_{\eta}\left[V^1_{t+1} (j,\Delta_{t+1},m_{t+1},\ell_{t}^s,\eta_{t+1})\right]\notag \\ &=&E_{\eta}\left[\max_{k\in J(j,z_{t+1})}v^1_{t+1} (k,j,\Delta_{t+1},m_{t+1},\ell_{t}^s) +\eta_{k,t+1}\right]\notag\\ &=&\log\left(\sum_{k\in J(j,z_{t+1})} \exp \Big(v_{t+1}^1(k,j,\Delta_{t+1},m_{t+1},\ell_{t}^s )\Big) \right)+\gamma. \label{eqn:exp1} \end{eqnarray} (10) I calculate the probabilities that the primary mover picks each location in a period, which are used to calculate the likelihood function. They also form part of the transition probabilities for the secondary mover. Using the properties of the extreme value distribution, the probability that a primary mover picks location $$j$$ is given by \begin{equation} P_t^1(j|\ell_{t-1},\Delta_t,m_t,\ell_{t-1}^s)= \frac{\exp\Big(v^1_t(j,\ell_{t-1},\Delta_t,m_t, \ell_{t-1}^s)\Big)} {\sum_{k\in J(\ell_{t-1},z_{t})} \exp\Big(v^1_t(k,\ell_{t-1},\Delta_t,m_t, \ell_{t-1}^s)\Big)}\text{ .}\label{eqn:primary_prob} \end{equation} (11) 3.2.3. Transition probabilities In this section, I calculate the transition probabilities. There is uncertainty over future legal status, future marital status (if single), and the location of one's spouse (if married).
I assume that the probability that a person has a given legal status in the next period depends on his characteristics and his current legal status.11 For people who are married, the transition probabilities are also over a spouse's future decisions. I assume that the agent has the same information as the spouse about the spouse's future decisions. This means that the probability that a person's spouse lives in a given location is given by his choice probabilities. A single person can become married in future periods with some probability. If he gets married, there is also uncertainty over where his new spouse is living. Recall that $$\rho^1(\cdot)$$ and $$\rho^2(\cdot)$$ are the transition probabilities for primary and secondary movers. These give the probability that a person has a given legal status, marital status, and if married, has a spouse living in a certain location in the next period. \begin{eqnarray} \rho^1_t(z_{t+1},m_{t+1},\ell^s_t| \ell_t,\Delta_t,m_t, \ell_{t-1}^s)=\left\{ \begin{array}{ll} \delta(z_{t+1}|z_t,X_t)P^2_t(\ell^s_t|\ell_{t-1}^s,\Delta_t^s,m_t^s, \ell_t) & \text{if } m_t=1 \\[3pt] \delta(z_{t+1}|z_t,X_t)\psi^1(m_{t+1},\ell_t^s|X_t,\ell_t) & \text{if } m_t=2 \text{ .}\label{eqn:lambda1} \end{array} \right. \end{eqnarray} (12) \begin{eqnarray} \rho^2_t(z_{t+1},m_{t+1},\ell^s_{t+1}| \ell_t, \Delta_t,m_t,\ell_{t}^s)=\left\{ \begin{array}{ll} \delta(z_{t+1}|z_t,X_t)P^1_{t+1}(\ell^s_{t+1}|\ell_{t}^s,\Delta_{t+1}^s,m_{t+1}^s, \ell_{t}) & \text{if } m_t=1 \\[3pt] \delta(z_{t+1}|z_t,X_t)\psi^2(m_{t+1},\ell_{t+1}^s|X_t,\ell_t) & \text{if } m_t=2\text{ .}\label{eqn:lambda2} \end{array} \right. \end{eqnarray} (13) The function $$\delta(\cdot)$$ gives the probability that a person has a given legal status in the next period. For primary movers, there is uncertainty over where the secondary mover will live in the current period. 
This is represented by the function $$P^2_t(\cdot)$$, which comes from the secondary mover's choice probabilities defined in equation (6). Likewise, for secondary movers, there is uncertainty over the primary mover's location in the next period. This is represented by the function $$P^1_{t+1}(\cdot)$$, which comes from the primary mover's choice probabilities defined in equation (11). Single people could become married in future periods, and the probability of this happening is written as $$\psi^k(\cdot)$$, with $$k=1,2$$ for primary and secondary movers, respectively. If he gets married, there is a probability his new spouse lives in each location. If he does not get married, then he continues to make decisions as a single person.12 4. Data I estimate the model using data from the MMP, a joint project of Princeton University and the University of Guadalajara.13 The MMP is a repeated cross-sectional data set that started in 1982 and is still ongoing. The project aims to understand the decisions and outcomes relating to immigration for Mexican individuals. To my knowledge, this is the most detailed source of information on immigration decisions between the U.S. and Mexico, most importantly on illegal immigrants, who are underrepresented in most U.S.-based surveys. The survey asks questions on when and where people lived in the U.S., how they got across the border, and what the wage outcomes in the U.S. were, which is the set of information necessary to estimate the model detailed in the previous section. For household heads and spouses, the MMP collects a lifetime migration history, asking people which country and state they lived in each year. This information is used to construct a panel data set that contains each person's location at each point in time. I also know if and when each person is allowed to move to the U.S. legally. For people who move to the U.S. illegally, the MMP records when and where they cross the border.
The MMP also collects information on the remaining members of the household. The inclusion of these respondents allows me to cover a wider age range than if I were to use only the household head and spouse data. Although the MMP does not ask for lifetime migration histories for this group, it asks many questions related to migration. The survey asks for migrants' wages, location, and legal status on their first and last trips to the U.S., as well as their total number of U.S. trips. For people who have moved to the U.S. two or fewer times, I therefore know their full history of U.S. migration, although when they are in Mexico I may not know their precise location. For people who have moved more than two times, there are gaps in the sample for years when a migration is not reported. I have to integrate over the missing information to compensate for the lack of full histories for each person.14 In addition, for this group I do not know marital status at each point in time, and these respondents are not matched to a spouse in the data, so I cannot include the marriage interactions component of the model for them. I call this sample the "partial history" sample, whereas I call the group of household heads and spouses the "full history" sample. One question in this article is how changes in border enforcement affect immigration decisions. Border patrol was fairly low and constant up to the 1986 Immigration Reform and Control Act (IRCA). Because the data contain lifetime histories, the sample spans many years. Computing the value function for each year is costly, so I limit the sample time frame to years in which there are changes in enforcement levels. For this reason, I study behaviour starting in 1980. To avoid an initial conditions problem, I only include individuals who turned 17 in 1980 or later.
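The integration over missing years in the partial history sample can be sketched as follows. This is a minimal illustration, not the paper's estimator: the two-location setting, the transition matrix, and the observed sequence are all invented for the example.

```python
from itertools import product

def marginal_likelihood(observed, trans, locations):
    """Sum path probabilities over all fillings of unobserved (None) years.

    observed: list with a location index, or None, for each year
    trans: trans[i][j] = probability of moving from location i to j in a year
    """
    missing = [t for t, loc in enumerate(observed) if loc is None]
    total = 0.0
    # Enumerate every way of filling in the unobserved years.
    for fill in product(range(len(locations)), repeat=len(missing)):
        path = list(observed)
        for t, loc in zip(missing, fill):
            path[t] = loc
        p = 1.0
        for a, b in zip(path, path[1:]):
            p *= trans[a][b]
        total += p
    return total

# Toy example: two locations (0 = Mexico, 1 = U.S.), year 2 unobserved.
trans = [[0.9, 0.1],
         [0.3, 0.7]]
lik = marginal_likelihood([0, None, 0], trans, locations=[0, 1])
# Sums over the two fillings: 0.9*0.9 + 0.1*0.3 = 0.84
```

In the actual model the summation runs over the model-implied choice probabilities rather than a fixed transition matrix, but the principle is the same: an unobserved year contributes a sum over every location consistent with the observed data.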
This leaves me with a sample size of 6,457 for the full history sample, where I observe each person's location from age 17 until the year surveyed.15 The partial history sample is larger, consisting of 41,069 individuals. One downside of the data is that the MMP sample is not representative of Mexico, as the surveyed communities are mostly in rural areas with high migration propensities. Western-central Mexico, the region with the highest migration rates historically, is oversampled.16 Over time, the MMP sampling frame has shifted to other areas of Mexico, thus covering areas with lower migration rates. Because the MMP collects retrospective data, I have information on migration decisions in earlier years in the communities that are surveyed later, which mitigates this problem somewhat. Another restriction of the data is that the sample misses permanent migrants, because the survey is administered in Mexico.17 Therefore, the results of this article apply to this specific section of the Mexican population. In Online Appendix A, I compare the MMP sample to the Current Population Survey (CPS) (restricting the sample to Mexicans living in the U.S.) and to Mexican census data, to get an understanding of the limitations of the data. Table A1 in Online Appendix A shows that the MMP sample has substantially more men than the CPS, which is unsurprising given the prevalence of temporary migrants in the MMP. The CPS sample also has higher levels of education. Table A2 compares the MMP sample to the Mexican census data. The MMP sample is younger, most likely because of the sample selection criteria explained in the previous paragraph. The MMP sample also has higher education levels. Unlike other data sources, the MMP has wage data for people who are in the U.S. illegally, allowing me to estimate the wage distribution for illegal immigrants living in the U.S.
In comparison, other datasets report country of birth but not legal status, and I expect that datasets such as the CPS are biased towards legal immigrants, since illegal immigrants are likely to avoid government surveys. Because legal immigration is relatively rare in the MMP data, I combine MMP wages with CPS data on Mexicans living in the U.S. to get a larger sample for studying the legal wage distribution. The MMP also records wages in Mexico; however, there are limited wage observations per person, and the data give imprecise estimates. Therefore, for Mexican wages, I use data from Mexican labour force surveys: the Encuesta Nacional de Ingresos y Gastos de los Hogares (ENIGH) in 1989, 1992, and 1994, and the Encuesta Nacional de Empleo (ENE) from 1995 to 2004. To measure border enforcement, I use data from U.S. Customs and Border Protection (CBP) on the number of person-hours spent patrolling each sector of the border.18 CBP divides the U.S.–Mexico border into nine sectors and reports the person-hours spent patrolling each one.

5. Descriptive Statistics

Tables 1 and 2 show the characteristics of the sample, divided into five groups: people who move internally, people who move to the U.S., people who move both internally and to the U.S., non-migrants, and people who can immigrate legally. Table 1 shows this information for the full history sample, and Table 2 for the partial history sample. For the partial history sample, there is no information on internal movers, since the MMP has insufficient information to isolate this group. These tables show that most U.S. migrants are male. Each education row shows the percentage of a group (e.g. internal movers) with a given level of education. People who move to the U.S. have the least education. The literature finds that returns to education are higher in Mexico than in the U.S., possibly explaining why educated people are less likely to immigrate.
In addition, illegal immigrants do not have access to the full U.S. labour market, and therefore may not be able to find jobs that require higher levels of education. People who can immigrate legally make up close to 3% of the full history sample and about 2.4% of the partial history sample.

Table 1. Characteristics of full history sample

                        Internal   Moves      Moves internally   Non-      Legal       Whole
                        movers     to U.S.    and to U.S.        migrant   immigrant   sample
Percent male            60.53      91.51      89.51              50.82     90.63       60.66
Percent married         67.59      81.01      78.40              75.74     92.19       76.24
Average age             29.95      30.13      30.74              29.73     30.86       29.88
Years of education
  0–4                   16.07      18.03      14.81              17.72     11.46       17.33
  5–8                   39.47      43.61      43.83              40.48     53.13       41.33
  9–11                  28.67      30.34      26.54              30.83     22.92       30.17
  12                     9.42       5.92       8.64               7.66      8.85        7.64
  13+                    6.37       2.10       6.27               3.30      3.65        3.53
Observations              722      1,048        162              4,333       192       6,457

Notes: Calculated using data from the full history sample in the MMP. All entries other than average age and observations are percentages. For education, the table gives the percentage of each group (e.g. internal movers) that has a given level of education.

Table 2. Characteristics of partial history sample

                        Moves to U.S.   Non-migrant   Legal immigrant   Whole sample
Percent male            71.85           43.80         65.79             48.94
Percent married         58.72           53.40         70.32             54.68
Average age             26.02           24.92         28.21             25.18
Years of education
  0–4                    8.96            9.59          6.64              9.42
  5–8                   40.05           29.99         36.42             31.80
  9–11                  34.07           31.48         32.90             31.94
  12                    11.84           14.39         15.29             13.99
  13+                    5.09           14.55          8.75             12.85
Observations            6,742          33,333           994            41,069

Notes: Calculated using data from the partial history sample in the MMP. All entries other than average age and observations are percentages. For education, the table gives the percentage of each group (e.g. people who move to the U.S.) that has a given level of education.

5.1. Migration decisions

Between 1980 and 2004, an average of 2.5% of the people in the sample living in Mexico moved to the U.S. in each year.
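The 2.5% figure is an annual hazard: among person-years that begin in Mexico, the share that end with a move to the U.S. A minimal sketch of that calculation, with invented records standing in for the MMP panel:

```python
def annual_migration_rate(person_years):
    """Annual Mexico-to-U.S. migration rate.

    person_years: list of (location_this_year, location_next_year) pairs,
    using 'MX' / 'US' country codes. The denominator is person-years
    that start in Mexico; the numerator is those that end in the U.S.
    """
    at_risk = [(a, b) for a, b in person_years if a == "MX"]
    if not at_risk:
        return 0.0
    moves = sum(1 for a, b in at_risk if b == "US")
    return moves / len(at_risk)

# Invented records: 7 person-years start in Mexico, 2 end with a move.
data = [("MX", "MX"), ("MX", "US"), ("MX", "MX"), ("US", "MX"),
        ("MX", "MX"), ("MX", "MX"), ("MX", "MX"), ("MX", "US")]
rate = annual_migration_rate(data)  # 2 / 7
```

Grouping the same person-years by gender, marital status, or spouse location yields the cell-level rates reported in Table 3.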
Table 3 looks at the effects of family interactions on migration rates.19 The migration behaviour of married men is very similar to that of single men. However, there are stark differences between the migration decisions of married and single women. Comparing married women whose husband is in the U.S. to single women shows that these married women have substantially higher migration rates.20 This suggests that husbands' decisions have an important effect on female migration decisions.

Table 3. Family and migration rates

                        Married men   Single men   Married women        Single women
                                                   (spouse in U.S.)
0–4 years education     3.44          4.10          1.74                0.81
5–8 years education     4.92          4.55          3.27                1.43
9–11 years education    3.82          3.26          3.45                1.30
12 years education      2.36          2.60          6.25                1.21
13+ years education     1.14          1.00         10.00                0.58
Total                   4.04          3.74          3.22                1.17

Notes: This table reports average annual Mexico to U.S. migration rates (in percent) in the full history sample. For married women, I only include those whose husband is living in the U.S.

To further analyse the determinants of migration decisions, I estimate the probability that a person who lives in Mexico moves to the U.S. in a given year using probit regressions. The marginal effects are reported in Table 4. The first two columns include both genders, and the third and fourth columns allow for separate effects for men and women, respectively.21 In all regressions but column (4), the effect of age on migration is negative and statistically significant, supporting the human capital model, which predicts that younger people are more likely to move because they have more time to earn higher wages. Using family members as a measure of networks, I find that having a family member in the U.S. makes a person more likely to immigrate. Legal immigrants are more likely to move, as are people who have moved to the U.S. before. Columns (2)–(4) include controls for marital status. Column (2), which includes both men and women, indicates that single men, married men, and married women are more likely to move than single women. Column (3) only includes men, and shows no difference between married and single men. Column (4), which only includes women, again shows that married women whose spouse is in the U.S. are more likely to immigrate than single women. Since married women only move to the U.S. when their husband is in the U.S., it is important to include these sorts of interactions in a model.22

Table 4. Migration probit regression
Dependent variable = 1 if moves to the U.S.
                             Whole sample   Full history   Men           Women
                             (1)            sample (2)     (3)           (4)
5–8 years education          0.00680***     0.00302        0.00242       0.00674**
                             (0.000959)     (0.00194)      (0.00260)     (0.00256)
9–11 years education         0.00511***     –0.000856      –0.00347      0.00713**
                             (0.00102)      (0.00217)      (0.00289)     (0.00259)
12 years education           –0.00130       –0.00393       –0.00754      0.00714*
                             (0.00120)      (0.00326)      (0.00440)     (0.00326)
13+ years education          –0.0191***     –0.0144**      –0.0203***    0.00194
                             (0.00147)      (0.00462)      (0.00604)     (0.00538)
Age                          –0.00365***    –0.00266*      –0.00319*     –0.0000882
                             (0.000521)     (0.00120)      (0.00160)     (0.00122)
Age squared                  0.0000408***   0.0000164      0.0000193     –0.0000144
                             (0.0000105)    (0.0000231)    (0.0000307)   (0.0000248)
Family in U.S.               0.0104***      0.0161***      0.0206***     0.00427**
                             (0.000722)     (0.00149)      (0.00201)     (0.00140)
Legal immigrant              0.0771***      0.0503***      0.0627***     0.0185***
                             (0.00356)      (0.00703)      (0.00979)     (0.00393)
Has moved to U.S. before     0.0465***      0.0476***      0.0599***     0.0141***
                             (0.00144)      (0.00245)      (0.00321)     (0.00288)
Single man                                  0.0471***
                                            (0.00306)
Married man                                 0.0466***      –0.00119
                                            (0.00317)      (0.00219)
Married woman                               0.0366***                    0.0119***
                                            (0.00480)                    (0.00179)
State fixed effects          Yes            Yes            Yes           Yes
Time fixed effects           Yes            Yes            Yes           Yes
Observations                 421,638        69,344         50,610        16,288

Notes: Standard errors, clustered at the household level, in parentheses. $$^{*}p<0.05$$, $$^{**}p<0.01$$, $$^{***}p<0.001$$. The table reports marginal effects from a probit regression. The sample includes individuals who were living in Mexico at the start of the period. Column (1) uses the whole sample, and columns (2)–(4) only include the full history sample. For education, the excluded group is people with four or fewer years of education. Married women whose spouse is in Mexico are not included in the regression.

The data on return migration rates show that 9% of all migrants living in the U.S. move to Mexico each year. Raw statistics show that men have higher return migration rates than women. Suspecting that return migration rates for married men are affected by the location of their wives, in Table 5 I take only the men in the full history sample and split them by marital status and wife's location. Married men whose wife is in Mexico are much more likely to return home, whereas those whose wife is living in the U.S. have a much lower return migration rate.

Table 5. Family and male return migration rates

                        Wife in Mexico   Wife in U.S.   Single
0–4 years education     40.55            15.38          33.39
5–8 years education     33.59            22.22          31.70
9–11 years education    39.83            16.22          29.43
12 years education      48.84             9.09          26.19
13+ years education     29.41             0.00          35.09
Total                   36.61            17.88          30.96

Notes: This table reports average annual return migration rates (in percent), using the full history sample.

Using a probit regression, I estimate the probability that a person currently living in the U.S. returns to Mexico in a given year. The marginal effects are shown in Table 6. Columns (1) and (2) use data for both genders, and columns (3) and (4) use data for men and women, respectively.23 All specifications except for column (4) show that legal immigrants are less likely to return home. Columns (2)–(4) control for marital status, and additionally split married men based on whether their wife is living in Mexico or the U.S. Married men with a wife in Mexico are more likely to return migrate than single men, whereas married men whose wife is in the U.S. are less likely to return migrate than single men. This suggests that moving home to be with one's spouse is a strong incentive for return migration.

Table 6. Return migration probit regression
Dependent variable = 1 if moves from U.S. to Mexico

                               Whole sample   Full history   Men          Women
                               (1)            sample (2)     (3)          (4)
5–8 years education            –0.0263***     0.0109         0.00962      0.120
                               (0.00699)      (0.0279)       (0.0288)     (0.0983)
9–11 years education           –0.0320***     –0.00796       –0.00648     0.113
                               (0.00734)      (0.0308)       (0.0321)     (0.101)
12 years education             –0.0429***     –0.0125        –0.00367     0.00428
                               (0.00898)      (0.0434)       (0.0473)     (0.116)
13+ years education            –0.0194        0.0134         0.0515       –0.242
                               (0.0109)       (0.0542)       (0.0608)     (0.150)
Age                            –0.00495       0.0181         0.0218       0.0410
                               (0.00321)      (0.0133)       (0.0138)     (0.0455)
Age squared                    0.000104       –0.000237      –0.000293    –0.000844
                               (0.0000617)    (0.000248)     (0.000257)   (0.000901)
Family in U.S.                 0.0313***      –0.0304        –0.0349      0.0480
                               (0.00482)      (0.0208)       (0.0218)     (0.0519)
Legal immigrant                –0.0725***     –0.284***      –0.295***    –0.0167
                               (0.00794)      (0.0299)       (0.0311)     (0.0838)
Single man                                    0.0794*
                                              (0.0395)
Married man, wife in U.S.                     –0.0709        –0.149**
                                              (0.0631)       (0.0530)
Married man, wife in Mexico                   0.121**        0.0442*
                                              (0.0430)       (0.0223)
Married woman                                 0.0590                      0.0711
                                              (0.0587)                   (0.0552)
State fixed effects            Yes            Yes            Yes          Yes
Time fixed effects             Yes            Yes            Yes          Yes
Observations                   40,268         5,624          5,185        425

Notes: Standard errors, clustered at the household level, in parentheses. $$^{*}p<0.05$$, $$^{**}p<0.01$$, $$^{***}p<0.001$$. The table reports marginal effects from a probit regression. The sample includes individuals who were living in the U.S. at the start of the period. Column (1) uses the whole sample, and columns (2)–(4) only use the full history sample. The excluded group for education is people with four or fewer years of education.

One of the motivations for the dynamic model estimated in this article is that repeat migration is common. In the sample, the average number of moves to the U.S. per migrant is 1.64 for men and 1.14 for women, showing that many migrants move more than once.24 Women move less and are less likely to return migrate, implying that when women move, their decision is more likely to be permanent. The average durations illustrate this more clearly. Overall, the average migration duration is 4.4 years. It is slightly higher for legal than for illegal movers (4.83 versus 4.35 years, respectively). The average duration for men is 4.15 years, and the average duration for women is 5.20 years, again indicating that when women move, their decision is more likely to be permanent. This section shows that it is crucial to allow for a relationship between spouses' decisions.
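Duration statistics of this kind can be computed from yearly location histories by extracting completed U.S. spells. A minimal sketch with an invented history; the rule of dropping an ongoing final spell as censored is my illustrative assumption, not a statement of the paper's exact procedure:

```python
def us_spell_lengths(history):
    """Lengths of completed U.S. spells in a yearly location history.

    history: list of yearly locations ('MX' or 'US'). A spell is a
    maximal run of consecutive 'US' years; it is completed when the
    person is next observed back in Mexico. An ongoing final spell is
    treated as censored and dropped (illustrative assumption).
    """
    spells, run = [], 0
    for loc in history:
        if loc == "US":
            run += 1
        elif run:
            spells.append(run)
            run = 0
    return spells

# Invented history: two completed trips, of 2 and 3 years.
hist = ["MX", "US", "US", "MX", "US", "US", "US", "MX", "MX"]
lengths = us_spell_lengths(hist)   # [2, 3]
avg = sum(lengths) / len(lengths)  # 2.5 years
```

Counting `len(lengths)` per person gives the moves-per-migrant statistic, and averaging spell lengths across people gives the reported durations.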
The model in this article accounts for the following trends observed in the data: (1) women are more likely to move if their husband is in the U.S., and (2) men are less likely to return migrate if their spouse is living with them in the U.S. By including both male and female decisions in the model, I can study how their interactions affect the counterfactual outcomes. A key component of the model is that individuals choose from a set of locations in both the U.S. and Mexico, instead of just picking between the two countries. This is an important contribution of this article, in that most of the past work on Mexico to U.S. migration does not allow for internal migration. Internal migration is fairly common: close to 30% of the people in the full history sample move internally, making it important to allow people to choose from locations in both countries.25 Because of these high rates, changes in wages in Mexico, even outside of one's home location, could affect the decision on whether or not to move to the U.S. The model accounts for this by letting people choose from a set of locations in both countries.

5.2. Border enforcement

To measure border enforcement, I use data from U.S. CBP on the number of person-hours spent patrolling the border. CBP divides the U.S.–Mexico border into nine sectors, as shown in Figure 1, each of which receives a different allocation of resources each year.26 Figure 2 shows the number of person-hours spent patrolling each region of the border over time.27 Relative to the levels observed today, border patrol was fairly low in the early 1980s. Enforcement was initially highest at San Diego and grew the fastest there. Enforcement also grew substantially at Tucson and the Rio Grande Valley, although the growth started later than at San Diego. In most of the other sectors, there was a small amount of growth in enforcement, mostly starting in the late 1990s.

Figure 1. Border patrol sectors.
Notes: Map downloaded from U.S. CBP website.

Figure 2. Hours patrolling the border. Notes: Data on enforcement from U.S. CBP.

Much of the variation in Figure 2 can be explained by changes in U.S. policy. The 1986 IRCA called for increased enforcement along the U.S.–Mexico border. However, changes in enforcement were small until the early 1990s, when new policies further increased border patrol.28 Illegal immigrants surveyed in the MMP reported the closest city in Mexico to where they crossed the border. I use this information to match each individual to a border patrol sector. Figure 3 shows the percentage of illegal immigrants who cross the border at each crossing point in each year. Initially, the largest share of people crossed the border near San Diego. However, as enforcement there increased, fewer people crossed at San Diego. Before 1995, about 50% of illegal immigrants crossed the border at San Diego; this decreased to 27% after 1995. At the same time, the share of people crossing at Tucson increased. I use this variation in behaviour, combined with the changes in enforcement at each sector over time, to identify the effect of border enforcement on immigration decisions.29

Figure 3. Border crossing locations (MMP). Notes: In this figure, I use data from the MMP to calculate the share of illegal migrants who cross at each border patrol sector in each year.

6. Estimation

I estimate the model using maximum likelihood.
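A maximum-likelihood objective of this kind sums, over all observed decisions, the log of the probability the model assigns to each one. A minimal sketch, with toy probabilities standing in for the model's choice probabilities $$P^1$$ and $$P^2$$:

```python
import math

def log_likelihood(choice_probs):
    """Sample log-likelihood: sum of log predicted probabilities of the
    decisions actually observed in the data. The probabilities here are
    toy inputs; in the model they depend on the parameters being estimated.
    """
    return sum(math.log(p) for p in choice_probs)

# Two observed decisions with predicted probabilities 0.5 and 0.25:
ll = log_likelihood([0.5, 0.25])  # log(0.5) + log(0.25) = log(0.125)
```

Estimation then searches for the parameter vector that maximizes this sum, recomputing the choice probabilities (and hence the value functions behind them) at each trial parameter value.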
I assume that a person has 28 location choices, which include 24 locations in Mexico and four in the U.S. The Mexico locations are loosely defined as states; however, some states are grouped when they border each other and have smaller sample sizes.30 The locations in the U.S. are California, Texas, Illinois, and the remaining states, which are grouped into one location choice.31 I restrict decisions so that a married woman cannot move to the U.S. unless her husband is living there. This simplifies computation, and is empirically grounded since it is very rare in the data for the wife to live in the U.S. while the husband is in Mexico. Illegal immigrants moving to the U.S. also choose where to cross the border. The U.S. government divides the border into nine regions. However, very few people in the data cross at some of these points, making identification of the fixed cost of crossing difficult. I reduce the number of crossing points to seven to avoid this problem.32 Therefore, an illegal immigrant has twenty-eight choices in the U.S.: the four locations combined with the seven crossing points. I define a time period as one year, and use an annual discount factor of 0.95. I assume that people solve the model starting at the age of 17 and work until the age of 65. There are three sources of unobserved heterogeneity in the model. The first is over moving cost type, and this is at the household level. In particular, I assume that there are two types, where one group (the stayers) has infinitely high moving costs and will never move to the U.S. The second source of unobserved heterogeneity is over wage outcomes when living in the U.S., and I assume that this is at the individual level. These values are known by the individual but unobserved by the econometrician. The data show that many women do not work, and therefore would not be affected by wage differentials.
To account for this, there is a third source of unobserved heterogeneity that allows a woman to be a worker or a non-worker type, where decisions of non-worker types are not affected by wages. I integrate over the probability that a woman is a worker type, which is taken from aggregate statistics on female labour force participation from the World Bank's World Development Indicators.33 Identification of the wage parameters and the fixed cost of moving follows the arguments in Kennan and Walker (2011). My model also includes parameters related to illegal immigration, where identification of the border enforcement term comes from comparing the rates at which people cross at each border patrol sector over time as enforcement hours are reallocated. The intuition for how these parameters are identified is discussed in Online Appendix B.

6.1. Wages

I estimate three sets of wage functions: when people are in Mexico, in the U.S. illegally, and in the U.S. legally. For all three situations, wages have a deterministic and a random component, where the latter is realized each period after a person decides where to live. This means that when making migration decisions, people only consider their expected wage in each location. Wages in Mexico are estimated in a first-stage regression. The MMP data do not have sufficient information on individual wages in Mexico, so I cannot learn how individual variation in wage draws affects migration decisions.34 Instead, I use data from Mexican labour force surveys, which have more accurate information on Mexican wages in each year, to estimate this wage distribution.
Using data from the ENIGH in 1989, 1992, and 1994 and the ENE from 1995 to 2004, I estimate wage regressions in each year: \begin{eqnarray} w^{M}_{ijt} = \beta^{Mt} X_{it} + \gamma^{Mt}_j + \epsilon_{ijt}\text{ .}\label{eqn:wagesmex} \end{eqnarray} (14) In equation (14), $$X_{it}$$ are individual characteristics, $$\beta^{Mt}$$ are the returns to these characteristics when in Mexico at time $$t$$, and $$\gamma^{Mt}_j$$ are state fixed effects, which also vary over time. The first two columns of Table 7 show the results of the wage regression for Mexican wages in 1989 and 2004, the first and last years for which I have these data. The regressions for all years are in Online Appendix A. There are strong returns to education and experience in these data, and these returns have fluctuated significantly over the time period analysed. Note that in equation (14), there is no unobserved heterogeneity in wages, so unobserved types are independent of Mexican wages. I make this assumption due to the lack of reliable wage information in the MMP when individuals live in Mexico. Unfortunately, the lack of individual-level heterogeneity over Mexican wages is a limitation of this analysis. Table 7.
Wage regressions in Mexico

Dependent variable: Wage in Mexico
                                 1989 (1)          2004 (2)          1989–2004 (3)
Age                              2.89*** (0.23)    1.28*** (0.01)    1.38*** (0.005)
Age-squared                     –0.28*** (0.03)   –0.13*** (0.002)  –0.14*** (0.001)
Male                             0.78 (0.10)       0.15*** (0.005)   0.18*** (0.002)
5–8 years education              1.12*** (0.13)    0.46*** (0.008)   0.72*** (0.02)
9–11 years education             1.74*** (0.14)    0.95*** (0.008)   1.36*** (0.01)
12 years education               2.96*** (0.15)    1.26*** (0.01)    2.49*** (0.02)
13+ years education              5.60*** (0.15)    2.87*** (0.009)   3.99*** (0.02)
0–4 years education × time                                          –0.02*** (0.0004)
5–8 years education × time                                          –0.02*** (0.001)
9–11 years education × time                                         –0.03*** (0.001)
12 years education × time                                           –0.09*** (0.002)
13+ years education × time                                          –0.78*** (0.001)
State fixed effects              Yes               Yes               Yes
R²                               0.19              0.28              0.29

Notes: Standard errors in parentheses. *p<0.05, **p<0.01, ***p<0.001. Age is divided by 10. For education, the excluded group is people with less than five years of education. The dependent variable is hourly wages, in 2000 dollars using PPP exchange rates. Column (3) has data from 1989, 1992, and 1994–2004. Time is (year-1989). Quadratic and cubic terms for time are also included in column (3).
I use the results of the year-by-year regressions to calculate an expected wage for each person in each Mexican location and year. Because I do not have wage data for every year in the estimation, I need to compute expected wages in the missing years. To do this, I run a wage regression using all of the available data, including time trends in the returns to education, which allows for (1) changes in wage levels over time and (2) changes in the returns to education. The results of this regression are in the third column of Table 7. This allows me to calculate expected wages in Mexico in all years and states, using the year-by-year regressions when possible and the regression with all years of data when I do not have data for that year. To estimate the model, I also need to make assumptions about people's beliefs over future wages. It is unlikely that people had perfect foresight over what would happen to Mexican wages over this period, especially given the severe fluctuations in Mexico's economy. To specify wage expectations, I use the results from the wage regression to impute an expected wage for each person in each location and time, denoted as $$\hat{w}^M_{ijt}$$. I assume that people expect there is some chance (denoted $$p_{\rm loss}$$) of a large wage drop (at rate $$\alpha$$) in each period that causes them to earn less than this expected wage.35 Then I can write each person's wage expectations as \begin{equation} E w^M_{ijt}=\left\{ \begin{array}{ll} \hat{w}^M_{ijt} & \text{with probability } 1-p_{\rm loss}\\ (1-\alpha) \hat{w}^M_{ijt} & \text{with probability } p_{\rm loss}\text{ .} \end{array} \right. \end{equation} (15) The probability $$p_{\rm loss}$$ of this wage drop is given by the fraction of years Mexico experienced negative wage growth.
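As a concrete illustration, the wage-expectation rule in equation (15), together with the construction of $$p_{\rm loss}$$ from the frequency of bad years (and of $$\alpha$$ as the average drop in those years), can be sketched in a few lines. All numbers here are illustrative, not estimates from the paper:

```python
# Sketch of equation (15): with probability p_loss a large wage drop of size
# alpha hits, so the expected wage mixes the imputed wage and the dropped wage.
# The values below are made up for illustration.

def expected_wage(w_hat, p_loss, alpha):
    """E[w] = (1 - p_loss) * w_hat + p_loss * (1 - alpha) * w_hat."""
    return (1 - p_loss) * w_hat + p_loss * (1 - alpha) * w_hat

def loss_params(wage_growth):
    """p_loss: share of years with negative wage growth;
    alpha: the average wage drop in those bad years."""
    bad = [g for g in wage_growth if g < 0]
    p_loss = len(bad) / len(wage_growth)
    alpha = -sum(bad) / len(bad) if bad else 0.0
    return p_loss, alpha

# Imputed hourly wage of 2.0, a 30% chance of a bad year with a 20% drop:
print(expected_wage(2.0, 0.30, 0.20))  # 2.0 * (1 - 0.30 * 0.20) = 1.88
```

In the estimation itself, the analogue of `wage_growth` would be the observed series of Mexican wage growth, and `w_hat` the imputed wage $$\hat{w}^M_{ijt}$$ from the regressions above.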
The expected wage drop ($$\alpha$$) is equal to the average wage drop in these bad years. For wages in the U.S., the parameters are estimated jointly with the moving cost and utility parameters. There is a separate wage process for legal and illegal immigrants, written as \begin{eqnarray} w^{ill}_{ijt} &=& \beta^{ill} X_{it} + \gamma^{ill}_j + \kappa^{ill}_i + \epsilon^{ill}_{ijt}\label{eqn:wage_illegal} \end{eqnarray} (16) \begin{eqnarray} w^{leg}_{ijt}&=&\beta^{leg}X_{it}+\gamma^{leg}_j+\kappa^{leg}_i+\epsilon^{leg}_{ijt}.\label{eqn:wage_legal} \end{eqnarray} (17) Wages depend on demographic characteristics $$X_{it}$$, which include education, gender, age, and whether or not a person has family living in the U.S.36 I include time trends to allow for changes over time, as well as location fixed effects $$\gamma_j$$.37 The match component, which is the source of unobserved heterogeneity over wages, is written as $$\kappa_i=\{\kappa_i^{ill},\kappa^{leg}_i\}$$. When estimating these terms, I assume the legal and illegal fixed effects are each drawn from a symmetric three-point distribution in which each value is equally likely, and I allow for a correlation between the unobserved types of husbands and wives. Each individual knows the value of his fixed effect if he were to move to the U.S. For legal immigrants, the MMP has only a small number of observations with wage information, making it difficult to precisely estimate the wage parameters. I therefore use data on Mexican-born individuals in the CPS, jointly with the MMP wage observations for legal immigrants, to estimate this set of wage parameters.38 For the CPS data, I do not have information on migration decisions, so these individuals contribute to the likelihood through their wages alone.

6.2. Moving costs

Here I explain the determinants of moving costs for the mover types in the model.
The full parameterization of the moving cost function is explained in Online Appendix C. The cost of moving includes a fixed cost, and also depends on the distance between locations, calculated as the driving distance between the most populous cities in each state.39 The cost of moving also depends on age, which captures other effects of age on immigration that are not accounted for in the model or the wage distribution. The population size of the destination also affects moving costs, to account for the empirical fact that people are more likely to move to larger locations.40 For people moving to the U.S., I allow the moving cost to depend on education. Networks, defined as the people an individual knows who are already living in the U.S., can affect that person's cost of moving to the U.S.41 Empirical evidence shows that migration rates vary across states, suggesting that people from high-migration states have larger networks. I exploit these well-documented differences in state-level immigration patterns to measure a person's network, using the distance to the railroad as a proxy for regional network effects.42 When immigration from Mexico to the U.S. began in the early 1900s, U.S. employers used railroads to transport labourers across the border, meaning that the first migrants came from communities located near the railroad (Durand et al., 2001). These communities still have the highest immigration rates today. U.S. border enforcement affects the border crossing costs for illegal immigrants. However, there is potential endogeneity in that enforcement at each sector could be affected by the number of migrants crossing there. To account for this, I follow Bohn and Pugatch (2015) and use the enforcement levels, lagged by two periods, to predict future enforcement.
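As a minimal sketch of this lagged-enforcement predictor, the two-year-lagged series can be constructed as follows. The sector name and hour figures are invented for illustration, not CBP data:

```python
# Sketch of the lagged-enforcement predictor: predicted enforcement at a sector
# in year t is the observed level at t - 2, mirroring the two-year budget cycle.

def lag_enforcement(hours_by_year, lag=2):
    """Map {year: hours} to {year: hours in year - lag} where the lag exists."""
    return {y: hours_by_year[y - lag]
            for y in hours_by_year if (y - lag) in hours_by_year}

san_diego = {1992: 90000, 1993: 110000, 1994: 160000, 1995: 240000}
print(lag_enforcement(san_diego))  # {1994: 90000, 1995: 110000}
```

The predicted values, rather than contemporaneous enforcement, would then enter the border-crossing cost for each sector and year.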
Budget allocations for border enforcement are typically determined two years ahead of time, although extra resources can be allocated when needed due to unexpected shocks. The two-year-lagged values of border enforcement therefore represent the best predictor of future enforcement needs before these shocks hit, and using them controls for the endogeneity between enforcement and migration flows at each sector. This setup assumes perfect foresight, which is a strong assumption; I have also estimated the model assuming myopic expectations, and the results were similar. The cost of moving through a specific border patrol sector depends on the predicted enforcement levels there, as well as a fixed cost of crossing through that point. Some of the border crossing points consistently have low enforcement, yet few people choose to cross there. I assume that there are other reasons, constant across time, that account for this pattern, such as being in a desert where crossing is dangerous. The estimated fixed costs account for these factors. Since the model is dynamic, I need to make assumptions about people's beliefs over future levels of border enforcement. I assume that people have perfect foresight on border enforcement.43

6.3. Transition rates

The transition probabilities defined in Section 3.2.3 are over spouse locations, legal status, and marriage rates. The transitions over a spouse's location come from the choice probabilities in the model. The legal status and marriage transition rates come from the data. Using the MMP data, I estimate the probability that a person switches from illegal to legal status with a probit regression that controls for education, family networks, and gender. I assume the amnesty granted under IRCA in 1986 was unanticipated. People could only be legalized under IRCA if they had lived in the U.S. continuously since 1982, so this policy would only have affected immigration decisions if it had been anticipated four to five years before implementation, making this assumption reasonable.
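To illustrate how imputed legalization probabilities of this kind can enter the model as exogenous transition rates, here is a hedged sketch of a probit-style imputation. The coefficients and the `prob_legalized` helper are hypothetical, not the estimates reported in Table A7:

```python
from math import erf, sqrt

# Sketch: a probit index maps characteristics (gender, family network,
# education) to a legalization probability via the standard normal CDF.
# All coefficients below are illustrative placeholders.

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_legalized(male, family_in_us, educ_years, beta=(-2.0, 0.6, 0.8, 0.02)):
    b0, b_male, b_fam, b_educ = beta
    index = b0 + b_male * male + b_fam * family_in_us + b_educ * educ_years
    return norm_cdf(index)

# A male migrant with family in the U.S. and 9 years of education:
p = prob_legalized(male=1, family_in_us=1, educ_years=9)
```

Each period, an illegal immigrant in the model would then transition to legal status with a probability imputed this way from his observed characteristics.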
The results of this regression, shown in column (1) of Table A7 in Online Appendix A, indicate that having family in the U.S. and being male strongly affect the probability of being granted legal status. I use the results of this regression to impute a probability that each person is granted legal status, which enters the model estimation as an exogenously given transition rate. In the model, single people know that there is some probability that they will get married in future periods. I estimate marriage rates using a probit regression. Column (2) of Table A7 in Online Appendix A shows how different factors affect the probability of becoming married. I use these results for the transition probabilities in the model estimation.

6.4. Utility function

Utility depends on a person's expected wage, which is a function of his location and characteristics. A person's utility increases if he is living at his home location, which is defined as the state in which he was born. I allow for utility to increase if a person is in the same country as his spouse. Alternatively, I could have assumed that this depends on being in the same location as one's spouse, but this would significantly increase computation. My methodology only requires me to track the country of a person's spouse instead of the exact location, and yet it still captures the empirical pattern that people make migration decisions to be near their spouse.44 I also allow for higher utility in the U.S. if a person has family members living there. The full parameterization of the utility function is explained in Online Appendix C.

6.5. Likelihood function

In this section, I explain the derivation of the likelihood function; the full details are in Online Appendix D. I estimate the model using maximum likelihood.
I calculate the likelihood function at the household level, where I integrate over the probability that each household is of a specific moving cost type, the probability that each person has a specific wage fixed effect, and the probability that the woman is a worker type.45 For each household, I observe a history of location choices for the primary and secondary mover. These choices depend on moving cost type $$\tau$$: I assume there are mover and stayer types, where the stayer types have infinitely high costs of moving to the U.S. Women can be worker or non-worker types, where utility for the non-worker types is not affected by wages. For each person, I observe wage draws when in the U.S. There is unobserved heterogeneity in the wage draws; these are individual-specific terms, known by every member of the household but unobserved by the econometrician. I allow for a correlation between the unobserved types of husbands and wives. First, I explain how I calculate the likelihood function conditional on moving cost type and wage type. The migration probabilities for each period come from the choice probabilities defined in equations (11) and (6). For secondary movers, I differentiate between the choice probabilities for worker and non-worker types. I calculate the probability of the observed history for a household both when the woman is a worker type and when she is a non-worker type, and then integrate over the probability that she is a worker type. The previous explanation was for calculating the likelihood conditional on moving cost and wage type. To calculate the full likelihood, I incorporate the probability that a household has moving cost type $$\tau$$ and that each individual has a given wage type. I estimate the probability that a household has moving cost type $$\tau$$.
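The integration over discrete unobserved types described above (conditional likelihoods weighted by type probabilities and summed) can be sketched as follows; `household_log_likelihood` and `toy_choice_prob` are illustrative stand-ins, not the model's actual choice probabilities from equations (6) and (11):

```python
import math

# Stylized household likelihood: condition on each combination of discrete
# unobserved types (moving-cost type tau, wage type kappa), multiply the
# per-period choice probabilities over the observed history, weight by the
# type probabilities, sum, and take logs.

def household_log_likelihood(history, type_probs, choice_prob):
    """history: list of observed choices; type_probs: {(tau, kappa): prob};
    choice_prob(choice, tau, kappa): per-period choice probability."""
    like = 0.0
    for (tau, kappa), p_type in type_probs.items():
        cond = 1.0  # likelihood of the history conditional on the types
        for choice in history:
            cond *= choice_prob(choice, tau, kappa)
        like += p_type * cond
    return math.log(like)

def toy_choice_prob(choice, tau, kappa):
    # Placeholder: stayer types effectively never move to the U.S.
    if tau == "stayer" and choice == "move_US":
        return 1e-12
    return 0.5

ll = household_log_likelihood(["stay", "move_US"],
                              {("mover", "hi"): 0.7, ("stayer", "hi"): 0.3},
                              toy_choice_prob)
```

In the full model, the conditional probabilities come from the dynamic-programming solution, and the weights over wage types incorporate the husband–wife correlation discussed below.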
I allow for a correlation between the types of husbands and wives, by estimating the probability that a woman with a given wage type is married to a man with a given type. This allows for assortative matching in the labour market, if the estimates reveal that a high-wage-type man is most likely to be married to a high-wage-type woman.46

7. Results

Table 8 reports the utility parameter estimates.47 The results show that people prefer to live at their home location, and that men living in the same location as their spouse have higher utility. There is no statistically significant effect for women, which can be explained by the assumptions in the model. Because women rarely move to the U.S. without their husband, I assumed that a married woman cannot live in the U.S. unless her husband is there. Without this assumption, I would estimate a much larger preference for living in the same location as one's spouse for women, since women do not move to the U.S. without their husbands. In addition, people with family in the U.S. have higher utility when living in the U.S. than those who do not. There are mover and stayer types in the model; the estimation is set up so that the fixed cost of moving to the U.S. is infinite for stayer types, so they will never choose to make that move. I find that the probability that a household is a mover type is close to 70%.

Table 8. Utility parameter estimates

Wage term                  0.056  (0.0022)
Home bias                  0.20   (0.0040)
With spouse (men)          0.36   (0.053)
With spouse (women)        0.032  (0.042)
Family in U.S.             0.029  (0.012)
Probability (mover type)   0.68   (0.022)
Log-likelihood             –232,643.05

Notes: Standard errors in parentheses.
In a separate exercise, I estimated a simpler version of this model, removing the utility preference for living in the same location as one's spouse. This leads to a significant change in the likelihood at the optimal point: equality of the likelihoods of the original and simpler models was rejected by a likelihood ratio test.48 This shows that the inclusion of this part of the model substantially improves its ability to fit decisions.

Table 9. Immigrant wage estimates

                            Illegal           Legal            Match probabilities
Age                          2.63 (1.14)       6.17 (0.13)     Low–low        0.32 (2.17)
Age-squared                 –0.44 (0.21)      –0.64 (0.016)    Low–medium     0.01 (2.01)
5–8 years education          1.23 (0.20)       1.49 (0.16)     Medium–low     0.01 (2.05)
9–11 years education         1.93 (0.21)       2.80 (0.16)     Medium–medium  0.28 (1.96)
12 years education           2.24 (0.24)       4.83 (0.15)
13+ years education          2.05 (0.37)       6.87 (0.16)
Family in U.S.              –0.51 (0.22)
Male                         1.31 (0.28)       2.76 (0.047)
Match component              2.29 (0.25)       0.98 (0.60)
Constant                     0.96 (1.44)      –6.96 (0.27)
Standard deviation of wages  2.52 (0.082)      5.08 (0.078)

Notes: Standard errors in parentheses. The excluded term is people with less than five years of education. Age is divided by 10. The match components are drawn from a three-point symmetric distribution around zero. The first component in the match probability is for the husband, and the second is for the wife. The wage equations include time trends in education and location fixed effects from the CPS.
Table 9 shows the parameters of the immigrant wage distribution, for both legal and illegal immigrants. There are stronger returns to education for legal immigrants than for illegal immigrants, reflecting that high-skilled legal immigrants can access jobs that reward these skills.49 The age profile has the standard concave shape for legal immigrants. For illegal immigrants, wages increase slightly at young ages, but then decrease. For the age range that comprises most of the sample, the wage profile is essentially flat, since the steeper drop-off in wages does not begin until older ages.

Table 10. Moving cost estimates
                       Mexico to U.S.      Return migration    Internal migration
Fixed cost for men      3.43 (0.47)         3.59 (0.37)         3.53 (0.12)
Fixed cost for women    2.22 (0.45)         6.40 (0.38)         3.55 (0.12)
Distance (legal)        0.60 (0.18)        –0.91 (0.086)        0.0000027 (0.046)
Age                     0.0047 (0.013)      0.062 (0.014)       0.13 (0.0050)
Population size         0.0053 (0.00034)   –0.00016 (0.0013)   –0.014 (0.00091)
Distance to railroad    0.30 (0.027)
5–8 years education    –0.047 (0.089)
9–11 years education   –0.21 (0.084)
12 years education      0.67 (0.12)
13+ years education     0.98 (0.18)

Notes: Standard errors in parentheses. Distance measured in thousands of miles. Population divided by 100,000.

Table 10 shows the moving cost parameters (excluding the parts related to illegal immigration). There are three moving cost functions: Mexico to U.S. migration, return migration, and internal migration. The first component of the moving cost is the fixed cost of moving, which I allow to vary with gender. The moving cost also depends on the distance between locations. For Mexico to U.S. (legal) migration, the cost increases with distance, as expected, and I do not find a statistically significant effect of distance on internal migration decisions. For return migration, the moving cost decreases with distance. The location in Illinois has the highest return migration rates, and is the furthest from the border.50 This behaviour is most likely driving this parameter estimate.
Moving costs also depend on population size, in that I would expect people to be more likely to move to larger locations.51 For internal migration, the moving cost decreases with population size, indicating that people are more likely to move to larger locations. For Mexico to U.S. migration, the effect is positive but small. Population size is perhaps not an accurate proxy in this case, since migrants may care more about the number of people from their community in a location than about the total population size.

Table 11. Illegal immigration parameter estimates

Distance       1.23 (0.056)
Enforcement    0.04 (0.0069)
Fixed cost     1.17 (0.39)

Crossing point fixed costs
El Paso, TX             –1.07 (0.26)
San Diego, CA           –4.01 (0.23)
Laredo, TX              –0.37 (0.28)
Rio Grande Valley, TX    0.065 (0.30)
Tucson, AZ              –2.05 (0.24)
El Centro, CA           –2.36 (0.24)

Notes: Standard errors in parentheses. Enforcement measured in 10,000 person-hours. Distance measured in thousands of miles.

Table 11 shows the parameter estimates relating to illegal immigration.
Distance increases the cost of moving, where the distance is calculated as the distance from the Mexican state to the crossing point plus the distance from the crossing point to the U.S. destination. This allows the location choices and crossing-point decisions to be related. I find that moving costs increase with border enforcement. I estimate a separate fixed cost for each border crossing point. The crossing points with low levels of enforcement, but where few people cross, have high fixed costs. For example, San Diego is where the greatest share of people cross, but it also has the highest enforcement. Therefore, the estimation finds that this point has the lowest fixed cost.

7.1. Model fit

To look at the model fit, I first show statistics on annual Mexico to U.S. and return migration rates, comparing the values in the data to model predictions. The first row of Table 12 shows the whole sample, and the next two rows split the sample by legal status. The model fits migration rates for illegal immigrants well, but is unable to match the high migration rates of legal migrants and overestimates their return migration rates. Legal immigrants are a small part of the sample. The model allows for different moving costs and wages for legal immigrants, but since most of the other parameters are the same, the model cannot fit the data for legal immigrants well.52 The last four rows split the sample by marital status, first looking at the full history sample and then at the partial history sample. The full history sample is split into married primary movers, married secondary movers, and people who are single. The model underpredicts the migration rates of both married men and women, although it does capture that primary movers are much more likely to move than secondary movers. Table 13 splits the sample by education, looking at the same summary statistics, and again shows that the model fits the annual migration rates relatively well.
Looking at this along another dimension, Figures 4 and 5 show the annual migration rates over time, and Figures 6 and 7 split the sample by age. The model fits the general trends relatively well, although it overestimates return migration rates in the later years.

Figure 4. Model fit: Mexico to U.S. migration rates by year. Notes: For each year, I calculate the average Mexico to U.S. migration rate, in the data and as predicted by the model.

Figure 5. Model fit: return migration rates by year. Notes: For each year, I calculate the average return migration rate, in the data and as predicted by the model.

Figure 6. Model fit: Mexico to U.S. migration rates by age. Notes: For each age, I calculate the average Mexico to U.S. migration rate, in the data and as predicted by the model.

Figure 7. Model fit: return migration rates by age. Notes: For each age, I calculate the average return migration rate, in the data and as predicted by the model.

Table 12. Model fit: annual migration rates

                          Mexico to U.S. migration rate    Return migration rate
                          Model (%)    Data (%)            Model (%)    Data (%)
Whole sample              2.60         2.37                10.1         8.50
Illegal immigrants        2.53         2.19                10.1         9.31
Legal immigrants          16.55        40.83               9.78         4.52
Full history sample
  Primary movers          1.45         3.30                22.26        29.03
  Secondary movers        0.00073      0.0027              33.87        25.48
  Single people           3.46         2.15                10.89        24.93
Partial history sample    2.63         2.46                9.33         5.64

Notes: I calculate the model-predicted Mexico to U.S. and return migration rates for all individuals in the sample, and compare them to rates in the data. For Mexico to U.S. migration, I use all people in Mexico at the start of the period. For return migration, I use all people in the U.S. at the start of the period.

Table 13. Model fit: annual migration rates

                          Mexico to U.S. migration rate    Return migration rate
Years of education        Model (%)    Data (%)            Model (%)    Data (%)
0–4                       2.38         2.07                11.55        11.87
5–8                       2.99         2.97                10.42        8.93
9–11                      3.41         2.69                10.65        8.10
12                        1.81         1.92                7.23         6.0
13+                       0.81         0.79                7.96         7.47

Notes: I calculate the model-predicted Mexico to U.S. and return migration rates for all individuals in the sample, and compare them to rates in the data. For Mexico to U.S. migration, I use all people in Mexico at the start of the period. For return migration, I use all people in the U.S. at the start of the period.

Next I look at the fit of the dynamic aspects of the model. In Table 14, I calculate three statistics in the data: the percentage of the sample that moves to the U.S., the number of moves to the U.S. per migrant, and the average duration of each move to the U.S. I then simulate the model and calculate its predicted value for each of these statistics. The model has too few people moving, and those that move stay longer than in the data. The number of moves per migrant matches the data very well.

Table 14. Model fit: lifetime behaviour

                                Model    Data
Percent that move (%)           17.23    19.2
Years per move                  5.51     4.39
Number of moves per migrant     1.20     1.18

Notes: These numbers are based on simulations of the model using the data in the sample.

Figure 8 shows the model fit for wages of illegal immigrants, splitting the sample by age. For younger ages, the model fits the data well, although it tends to overestimate wages. For older ages, the model underestimates wages.
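The lifetime statistics in Table 14 can be computed from observed or simulated location histories by counting U.S. spells, where a "move" is a maximal spell in the U.S. A sketch with fabricated histories:

```python
# Sketch of the Table 14 lifetime statistics. The two-person panel is
# fabricated: one person with two U.S. spells (three U.S. years), one stayer.

def lifetime_stats(panel):
    movers = total_moves = total_us_years = 0
    for history in panel:
        spells = 0
        prev = "MEX"
        for loc in history:
            if loc == "US":
                total_us_years += 1
                if prev != "US":      # entering the U.S. starts a new spell
                    spells += 1
            prev = loc
        if spells:
            movers += 1
            total_moves += spells
    pct_move = 100.0 * movers / len(panel)
    years_per_move = total_us_years / total_moves if total_moves else 0.0
    moves_per_migrant = total_moves / movers if movers else 0.0
    return pct_move, years_per_move, moves_per_migrant

panel = [["MEX", "US", "US", "MEX", "US", "MEX"], ["MEX"] * 6]
stats = lifetime_stats(panel)
```

Running this once on the data and once on simulated histories gives the two columns of Table 14.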
The model also estimates wages for legal immigrants. Using the model estimates, I find that the average illegal immigrant would earn 18% more as a legal immigrant. In comparison, Kossoudji and Cobb-Clark (2002) estimate a wage penalty from being an undocumented immigrant of 14–24%, so my result falls within their range.

Figure 8. Model fit: wages for illegal immigrants. Notes: I calculate the average wage of people living in the U.S. illegally, in the data and as predicted by the model.

8. Counterfactuals

In the counterfactuals, I study how changes in relative wages and U.S. border enforcement affect immigration decisions. I find that increased Mexican wages reduce migration rates and the duration of stays in the U.S. Increased border enforcement reduces migration rates and increases return migration rates. However, for married men living in the U.S. alone, there is a secondary effect on return migration: it is now harder for their wives to move to the U.S., giving the men an extra incentive to return home. I isolate this effect in the counterfactuals. In all of these counterfactuals, I include only the population of illegal immigrants, to focus on the group most affected by policy changes. In each counterfactual, I simulate the model in the baseline and in the alternate policy environment. I then calculate the percentage of the sample that moves to the U.S., the average number of moves to the U.S. per migrant, the average number of years spent living in the U.S. per move, and the average number of years a person lives in the U.S. over a lifetime. These summary statistics capture the changes in immigration behaviour in the alternate environments.

8.1. Changes in wages

In the first counterfactual, I look at the effect of a 10% increase in Mexican wages, holding U.S. wages constant.53 Over time, as Mexico's economy grows, the wage gap between the two countries will narrow; this counterfactual analyses how that will affect illegal immigration. The first row of Table 15 shows the baseline simulation, and the second row shows the results after a 10% increase in Mexican wages. After this change, fewer people move to the U.S., and for those that move, the duration of each trip decreases, reflecting the higher value of living in Mexico under the higher wages. These effects combine to decrease the average number of years that a person lives in the U.S. by around 5%.

Table 15. Counterfactuals

                                            Percent that  Years per  Moves per  Years in U.S.
                                            move (%)      move       mover      per person
Baseline                                    17.23         5.51       1.20       1.14
10% increase in Mexican wages               16.53         5.43       1.20       1.08
  in all locations but home                 17.51         5.44       1.20       1.14
10% decrease in U.S. wages                  15.55         5.29       1.20       0.99
50% increase in enforcement                 16.35         5.67       1.18       1.10
50% increase in enforcement (equal costs)   15.49         5.81       1.17       1.05

Notes: These are the results from simulations of the model, only including the sample of individuals who cannot migrate legally.

Alternatively, I can use the model to study how migration changes in response to variations in U.S. wages. Lessem and Nakajima (2015) show that downturns in the U.S. economy more adversely affect illegal immigrant wages than native wages, due to the frequent renegotiation of labour contracts among the former population. In the fourth row of Table 15, I show the counterfactual outcomes after a 10% decrease in U.S. wages. This decrease substantially discourages immigration, reducing both the number of people who move and the duration of each move. Overall, it decreases the number of years spent in the U.S. by around 13%, a much larger effect than that of the 10% increase in Mexican wages. The difference is mostly driven by the fact that a 10% decrease in U.S. wages is larger in absolute terms than a 10% increase in Mexican wages, because wage levels are higher in the U.S. To put these results into perspective, I compare them to the findings of Hanson and Spilimbergo (1999), who estimate a wage elasticity of migration with respect to Mexican wages of between –0.64 and –0.86. My results are not directly comparable, since Hanson and Spilimbergo (1999) examine changes in apprehensions, a proxy for static migration rates, whereas I calculate how the total number of years a person spends in the U.S.
responds to wage changes. Nonetheless, I find an elasticity of –0.54, which is quite close to their range. I can also compare my wage elasticity with respect to U.S. wages to theirs: Hanson and Spilimbergo (1999) find an elasticity with respect to U.S. wages of between 0.9 and 1.64, and I find an elasticity of 1.17, which falls within that range.

My model allows for internal migration as well as Mexico to U.S. migration, which enables me to study how non-uniform changes in the Mexican economy affect migration patterns. For example, wages could rise or fall in certain locations in Mexico without affecting everyone directly. However, since people can move internally, changes in wages in alternate Mexican locations can still affect U.S. migration patterns. To put a bound on this effect, I simulate a version of the model where all wages except those in a person's home location increase by 10%. This change increases the value of living in all Mexican locations except one's home location, which will increase internal migration. The results are in the third row of Table 15. There is a slight increase in the percentage of the sample that moves to the U.S., which is surprising given that the value of living in Mexico has increased. However, consider the mechanisms in the model. Due to the increased wages in alternate locations in Mexico, internal migration goes up, so people are more likely to be living in a non-home location. People starting from a location other than their home location are more likely to move to the U.S., since they have already forgone their home premium, whereas moving to the U.S. directly from the home location means giving it up. In addition, increased internal migration can move people to locations with lower costs of moving to the U.S. The duration of each trip remains the same as in the case of the 10% increase in wages in all Mexican locations.
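The elasticities discussed above are arc elasticities of total U.S. exposure with respect to wages. Recomputing them from the rounded Table 15 entries gives values of the same order as the reported –0.54 and 1.17 (small differences presumably reflect rounding of the table entries):

```python
# Back-of-the-envelope elasticities from Table 15 ("Years in U.S. per person").
# Baseline 1.14; 10% Mexican wage increase -> 1.08; 10% U.S. wage decrease -> 0.99.

def elasticity(y0, y1, pct_wage_change):
    """Percent change in years in the U.S. per percent change in wages."""
    return ((y1 - y0) / y0) / pct_wage_change

mex_elasticity = elasticity(1.14, 1.08, 0.10)   # roughly -0.53
us_elasticity = elasticity(1.14, 0.99, -0.10)   # roughly 1.32 with rounded inputs
```

The same function applies to any pair of baseline and counterfactual rows, which is how the comparisons to Hanson and Spilimbergo (1999) are made.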
This is an interesting set of results that could not have been obtained without a model that allows for internal as well as international migration.

The results in Table 15 cover the sample as a whole, but I can also use the model to isolate the role that family decisions play. Consider a married man living in the U.S. without his spouse. As Mexican wages increase, his wife becomes less likely to join him in the U.S., giving him an extra incentive to return home. To isolate this effect, I run a counterfactual where I increase Mexican wages but hold female migration rates at the baseline level. The results, for married men in the full history sample only, are in Table 16. The first row shows the baseline case, and the second row shows a counterfactual with a 10% increase in Mexican wages. In the third row, I increase Mexican wages but keep female migration rates at the baseline level. In this case, a married man spends more years in the U.S. than in the original counterfactual: the increased Mexican wages cause a decrease in migration durations of about 2.74%, compared to a 1.76% decrease when female migration rates do not adjust.

Table 16. Counterfactuals: married men only

                                              Percent that  Years per  Moves per  Years in U.S.
                                              move (%)      move       mover      per person
Baseline                                      23.03         4.85       1.28       1.43
10% increase in Mexican wages                 22.11         4.72       1.28       1.33
  Transition probability constant             22.48         4.77       1.27       1.36
50% increase in enforcement (equal costs)     20.34         5.17       1.23       1.30
  Transition probability constant             20.67         5.18       1.23       1.32

Notes: These are the results from simulations of the model, only including the sample of married men who cannot migrate legally.

8.2. Increased border enforcement

Next, I calculate how increased enforcement affects immigration, assuming that the number of person-hours allocated to enforcement at each crossing point increases by 50%. This provides insight into how immigration would respond to further increases in border enforcement. The results of this counterfactual are in the fifth row of Table 15. The percentage of the sample that moves decreases, the number of moves per migrant decreases slightly, and the duration of each move increases, with this last effect reflecting dynamic considerations. Overall, this increase in enforcement reduces the average amount of time a person lives in the U.S. by about 3%.
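Because border sectors differ in their fixed costs and enforcement levels (Table 11), a given enforcement budget can be spread in many ways, and extra person-hours buy the most deterrence where crossing is currently cheapest. A hypothetical greedy "water-filling" sketch of this logic, with invented enforcement levels and budget (only the coefficients come from Table 11):

```python
# Hypothetical sketch: each unit of extra enforcement goes to whichever sector
# currently has the lowest crossing cost, pushing sector costs toward equality.
# Fixed costs and the enforcement coefficient are from Table 11; the baseline
# enforcement levels and the budget are invented.

BETA_ENF = 0.04  # cost per 10,000 person-hours (Table 11)

def allocate(fixed_costs, enforcement, extra_units, step=1.0):
    """Greedy water-filling of extra enforcement across border sectors."""
    alloc = {s: 0.0 for s in fixed_costs}
    for _ in range(int(extra_units / step)):
        cost = {s: fixed_costs[s] + BETA_ENF * (enforcement[s] + alloc[s])
                for s in fixed_costs}
        cheapest = min(cost, key=cost.get)  # lowest-cost sector gets the next unit
        alloc[cheapest] += step
    return alloc

fixed = {"San Diego, CA": -4.01, "Tucson, AZ": -2.05, "El Paso, TX": -1.07}
enf = {"San Diego, CA": 40.0, "Tucson, AZ": 15.0, "El Paso, TX": 20.0}  # invented
extra = allocate(fixed, enf, extra_units=30.0)
```

In this sketch the extra resources pile up at the sectors with the lowest crossing costs, and the expensive sector receives nothing until costs are equalized, which is the shape of the allocation discussed next.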
In the model, individuals not only choose where to live but also where to cross the border when moving to the U.S. illegally. Each crossing point has a different estimated fixed cost and enforcement level. My model can therefore be used to "optimally" allocate border enforcement in the counterfactual. I again assume a 50% total increase in enforcement, where the extra resources are now allocated to minimize illegal immigration rates, assuming that this is the government's objective. The solution to the government's problem in my model is to equalize the cost of crossing across sectors of the border. Because of the wide variation in the estimated fixed costs across border patrol sectors, this point cannot be reached with a 50% increase in enforcement; to get as close as possible, the extra resources should be allocated to the sectors of the border with the lowest fixed costs of crossing. These points also have the highest enforcement levels, but even after accounting for the effects of enforcement, the costs of crossing there are still the lowest. The last row of Table 15 shows the overall effects of this policy change. As with the uniform increase in enforcement, fewer people move, and the duration of each move increases. When the extra enforcement is allocated following this equal-costs strategy, the average number of years spent in the U.S. decreases by 7%, compared with around 3% under the uniform increase. The effect of increased enforcement thus depends on how the extra resources are allocated.

As with wages, there is a secondary effect on return migration rates for married men. As enforcement increases, durations of stay increase, as discussed above. However, for married men living in the U.S. alone, the increase in migration costs makes it less likely that their wives will join them, giving the men an extra incentive to return to Mexico and pushing the duration of their stays in the U.S. downward. This same mechanism also makes married men less likely to move to the U.S. in the first place, since the value of living there is lower when their wives are less likely to follow. The composition of the migrant workforce changes in this alternate policy environment: in the baseline case, looking at the full history sample, 17.2% of the person-years spent in the U.S. are by married individuals; after the equal-costs increase in enforcement, this share falls to 16.9%. To isolate these mechanisms, in the next counterfactual I increase border enforcement while holding female migration rates constant. Table 16 shows the results for the sample of married men. The first row shows the baseline case, and the fourth row shows the results for a 50% equal-costs increase in enforcement. The fifth row runs the same counterfactual holding female migration rates at the baseline level. When the migration rates are held constant, durations for married men increase even more with enforcement than in the original counterfactual.

8.3. Wages and enforcement

It is interesting to compare the effects of increased enforcement to those of increased Mexican wages. The 50% increase in enforcement, allocated following the equal-costs strategy, decreases immigration by about 7%. For comparison, roughly a 14% increase in Mexican wages would achieve the same reduction, a relatively small narrowing of the Mexico–U.S. wage gap. A 50% increase in border enforcement, in contrast, is an expensive policy: expenditures on border enforcement were estimated at $2.2 billion in 2002, so this policy could cost over $1 billion (Hanson, 2005). Furthermore, changes in enforcement levels can affect the wage elasticity of migration, an issue that has been of interest to policymakers. I compare reactions to a 10% increase in Mexican wages.
In the baseline case, this results in a 5.4% decrease in years spent in the U.S. When enforcement is increased by 50% following the equal-costs strategy, a 10% increase in Mexican wages has a larger effect on immigration behaviour, reducing the years spent in the U.S. by 6%. This difference is almost entirely due to a larger effect on the number of people who choose to move to the U.S.: after enforcement increases, a rise in Mexican wages deters more people from moving at all. In both the baseline and increased-enforcement cases, durations of each trip to the U.S. decrease by similar amounts as Mexican wages increase.

9. Conclusion

In this article, I estimate a discrete-choice dynamic programming model in which people pick from a set of locations in the U.S. and Mexico in each period. I allow a person's decisions to depend on the location of his spouse, with individuals in a household making decisions sequentially. I use this model to understand how wage differentials and U.S. border enforcement affect immigration decisions. I allow for differences in the model according to whether a person can immigrate to the U.S. legally. For illegal immigrants, the moving cost depends on U.S. border enforcement, which is measured using data from U.S. Customs and Border Protection on the number of person-hours spent patrolling different regions of the border at each point in time. I use this cross-sectional and time-series variation in enforcement, combined with individual decisions on where to cross the border, to identify the effects of enforcement on immigration decisions. After estimating the model, I find that increases in Mexican wages reduce immigration from Mexico to the U.S. and increase return migration rates. Simulations show that a 10% increase in Mexican wages reduces the average number of years that a person lives in the U.S. over a lifetime by around 5%.
Increases in border enforcement would decrease both immigration and return migration, with the latter effect occurring because, as enforcement increases, individuals living in the U.S. expect that it will be harder to re-enter the country in the future. Married men's durations of stay also adjust to changes in their wives' behaviour: because moving to the U.S. is now more costly, women are less likely to join their husbands there, giving the husbands an extra incentive to return home. Overall, a uniform 50% increase in enforcement would reduce the amount of time that individuals in the sample spend in the U.S. by approximately 3%. If instead the same increase in enforcement were allocated along the border in a way that minimizes immigration rates, the number of years that the average person in the sample lives in the U.S. would drop by about 7%. These results indicate that the effects of enforcement depend on the allocation of the extra resources.

These results have important implications. The U.S. government is considering increasing border enforcement in the future. Hanson (2005) reports that expenditures on border enforcement equalled approximately $2.2 billion in 2002. I find that about an extra $1 billion in expenditures would decrease immigration by 7%. Furthermore, I find that the effects of increased enforcement depend strongly on the allocation of resources along the border. Over the past 20 years, enforcement levels have increased substantially, and the growth in enforcement has been concentrated at certain sectors of the border. If the goal of the U.S. government is to reduce illegal immigration rates, then my model suggests that this has been the correct strategy, and if the U.S. increases enforcement in the future, my results indicate that the government should continue to follow this pattern. My results also imply that increases in Mexican wages reduce illegal immigration.
In the article, I simulate the effects of 10% growth in Mexican wages, finding that it significantly reduces immigration even though a large U.S.–Mexico wage gap remains. Because of the large moving costs and a strong preference for living in one's home location, illegal immigration will decrease substantially as the wage differential narrows. Furthermore, wage growth does not have to be uniformly distributed across Mexico to affect immigration. Empirical evidence shows that wage growth has not been uniform and that regional wage disparities within Mexico have grown, particularly since the North American Free Trade Agreement; the areas with the most growth are the ones with access to foreign trade and investment.

In this article, I study immigration in a partial equilibrium framework, not allowing for general equilibrium effects. Increases in immigration could drive down wages in the U.S. or raise wages in Mexico. However, there is no clear conclusion with regard to these general equilibrium effects. Kennan (2013) develops a model predicting that migration will change wage levels but not the wage ratios between countries, and the empirical evidence is mixed: some research finds a small effect of immigration on U.S. wages, while other authors find larger effects.54 In my model, I also assume that legal immigration status is exogenously determined. In reality, legal immigration rates are determined by how many people have applied for visas, which is likely affected by the current number of illegal immigrants, since many people apply for visas after moving to the U.S. Both of these equilibrium effects pose important questions that could be addressed in future work, and this article is a first step in that direction, helping to provide the foundation for such an analysis. The article is also limited in that it does not allow for a relationship between savings and migration decisions, as in Thom (2010) and Adda et al. (2015).
This is an additional area for future research on this topic.

The editor in charge of this paper was Stephane Bonhomme.

Acknowledgements

I thank the referees and editor for their suggestions on this paper. I also thank Limor Golan, John Kennan, Brian Kovak, Sang Yoon Lee, Salvador Navarro, Chris Taber, Yuya Takahashi, Jim Walker, and participants at seminars at UW-Madison, Carnegie Mellon, Ohio State, Penn State, Kent State, and American University for helpful comments and advice. Maria Cellar provided excellent research assistance. All errors are my own.

Footnotes

1. In 2004, remittances comprised 2.2% of Mexico's GDP, contributing more foreign exchange to Mexico than tourism or foreign direct investment (Hanson, 2006).
2. Hong (2010) applies a similar framework to Mexico–U.S. immigration, focusing on the legalization process.
3. Cerrutti and Massey (2001) find that women usually move to the U.S. following a family member, whereas men are much more likely to move on their own. Massey and Espinosa (1997) find that illegal immigrants are more likely to return to Mexico if they are married.
4. Another paper that looks at savings decisions is Adda et al. (2015), who develop a lifecycle model where migrants decide optimal migration lengths, along with savings and investment in human capital. They estimate this model using panel data on immigrants to Germany, and study the relationship between return migration intentions and human capital investments. In comparison to my work, this paper studies the decisions of migrants after they enter the host country.
5. Kossoudji and Cobb-Clark (2000) and Kossoudji and Cobb-Clark (2002) find that illegal immigrants receive lower wages than legal immigrants and are less likely to work in high-skill occupations when in the U.S.
6. Gathmann (2008) studies the behaviour of repeat migrants and finds that they switch their crossing point in response to an increase in enforcement at the initial crossing point.
7. Blejer et al.
(1978), Crane et al. (1990), Passel et al. (1990), Donato et al. (1992), and Kossoudji (1992) find that migrants who are caught at the border attempt to enter the U.S. again.
8. I assume that once an illegal immigrant enters the U.S., there is no chance that he will be deported. Espenshade (1994) finds that only 1–2% of illegal immigrants living in the U.S. are caught and deported each year.
9. An alternative approach would be to model the household problem, where the household jointly decides where the husband and wife will live in each period. However, this is computationally difficult, as the state space would have to contain the locations of both the husband and the wife. Technically, the state space in my model also contains the locations of both individuals, but my framework allows assumptions that substantially reduce the state space and make the problem computationally feasible. These assumptions are explained in Section 6.4.
10. I do not allow for any expectations of divorce in the model.
11. I assume that legal status is an absorbing state: once a person is a legal immigrant, he cannot lose the ability to move legally.
12. In equations (12) and (13), I assume that people who are married will remain so, since there is no chance of their marital status changing.
13. The data and a discussion of the survey methodology are posted on the MMP website: mmp.opr.princeton.edu.
14. In most cases, I at least know the country a person is living in, if not his exact location. In 99% of the person-year observations, I know the country the surveyed people are living in.
15. The enforcement data end in 2004. Therefore, I only include location decisions up to 2004.
16. The MMP website shows a map of included communities: http://mmp.opr.princeton.edu/research/maps-en.aspx.
17. The MMP attempts to track individuals in the U.S., but has had limited success, so I do not include these observations.
18. I thank Gordon Hanson for providing these data.
19.
This table only uses the full history sample because I do not have information on marital status at each point in time for the partial history sample. 20. I do not include married women with a spouse in Mexico in the sample, since their migration rates are close to zero. 21. Columns (2)–(4) control for marital status, and therefore only include data from the full history sample, since I do not know marital status at each point in time in the partial history data. 22. This regression does not include married women whose spouse is living in Mexico, since I dropped the rare cases where the woman was in the U.S. while the man was in Mexico. This term would not be identified in the regression because this group has zero migration rates. 23. Column (1) uses the full and partial history samples, whereas the other columns only use the full history sample. 24. When I use all of the MMP data, this number is even higher. This is because the estimation sample is quite young, since I only consider people who are aged 17 or younger in 1980, so I am dropping the older respondents who were likely to have moved more times. 25. The empirical trends follow what is normally found in the internal migration literature. For example, see Greenwood (1997). 26. The sectors are San Diego and El Centro in California; Yuma and Tucson in Arizona; El Paso in New Mexico; and Marfa, Del Rio, Laredo, and the Rio Grande Valley in Texas. 27. The data report the levels of patrol on a monthly basis. This graph shows the average for each year. This graph shows seven lines, instead of one line for each of the nine sectors, because in two cases, I combined two sectors that have low activity. 28. In 1993, Operation Hold the Line increased enforcement at El Paso. There was a large growth in enforcement in 1994 in San Diego due to Operation Gatekeeper. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 also allocated more resources to border enforcement. 29.
One concern could be that the border patrol hours are not adequately controlling for the levels of enforcement, as there are other mechanisms that the U.S. government uses to monitor the border. Technology such as stadium lighting, infrared cameras, and ground sensors is used to aid border patrol agents. However, border patrol hours are highly correlated with total expenditures on border patrol. 30. There are 32 Mexican states. Grouping them into 24 locations, by combining nearby states, allows me to speed up computation substantially. Table A3 in Online Appendix A shows sample sizes and which states were combined in the estimation. 31. I assume that once a person moves to the U.S., he cannot move to a new location in the U.S., and can only choose between his current location and all locations in Mexico. This assumption is made because the data show very little movement across U.S. locations. 32. Del Rio was combined with Marfa, and Yuma was combined with El Centro. 33. Ideally, I would model the labour supply decisions of women. However, the MMP does not provide yearly labour force decisions, so this is not possible. The MMP provides some wage information. Therefore, if I observe a wage for a woman in the sample, I know she is a worker type. For the others, I have to integrate over these probabilities. 34. There is limited wage information when people are in Mexico. This is for the "last domestic wage" as well as wages for internal migrations in Mexico. However, these wages are often hard to match to specific points in time, and due to severe fluctuations in the Mexican economy, this often leads to imprecise estimates. 35. These are one-period shocks that do not persist. 36. Munshi (2003) finds that a Mexican immigrant living in the U.S. is more likely to be employed and to hold a higher paying non-agricultural job if his network is larger.
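Footnotes 30, 31, and 32 together describe a simple choice-set restriction that can be sketched in code. This is a hypothetical illustration only: the location labels, and the split of the state space into 24 grouped Mexican locations plus 4 U.S. locations (inferred from the 28 total locations implied by footnote 44), are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the location choice sets described in the
# footnotes: 24 grouped Mexican locations and (assumed) 4 U.S. ones.
# All labels are illustrative, not from the paper.

MEXICO = [f"MX_{i}" for i in range(24)]
US = [f"US_{j}" for j in range(4)]

def choice_set(current):
    """Locations a person may choose for next period (footnote 31)."""
    if current in US:
        # Once in the U.S.: stay put or return to any Mexican location;
        # no moves across U.S. locations are allowed.
        return [current] + MEXICO
    return MEXICO + US  # from Mexico, every location is available

print(len(choice_set("US_0")))  # 25
print(len(choice_set("MX_3")))  # 28
```

The restriction shrinks the choice set for U.S. residents from 28 options to 25, which is the kind of state-space reduction footnote 9 alludes to.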
This variable is only available for illegal immigrants, because it is not in the CPS data, which are partially used in the estimation of the wage process for legal immigrants. 37. These are estimated in a first stage using CPS data due to the small sample sizes in the MMP. For illegal wages, there is just an overall time trend, and for legal wages, the time trends are in the returns to education. 38. This is assuming that all individuals from Mexico in the CPS are legal immigrants. I do not know legal status in the CPS, but assume that all respondents are legal since illegal immigrants should be hesitant to participate in government surveys. 39. When a person is moving to the U.S. illegally, I calculate the distance from a state in Mexico to a border crossing point plus the distance from the border crossing point to the location in the U.S. 40. An alternative specification is to scale the number of payoff shocks according to the population size at the destination. 41. Massey and Espinosa (1997), Curran and Rivero-Fuentes (2003), and Colussi (2006) find evidence that networks affect immigration decisions. 42. I thank Craig McIntosh for providing the railroad data. 43. For years past 2004, I assume that people expect enforcement to remain constant in future periods. 44. Keeping the location of the spouse in the state space would mean that the state space has $$28^2$$ elements regarding location, for a person's location as well as his spouse, which would be quite slow to compute. Using country instead of location allows me to capture the empirical trends of interest. 45. I only include married couples as part of the same household, for in this case the likelihood is at the household level due to the joint nature of migration decisions in the model. Many households start as unmarried but become married in a future period. 
In the estimation, I calculate the likelihood at the household level, where each person makes decisions as a single agent before getting married, and then the couple's decisions relate to one another once married. 46. The parameters are fixed so that the total probability that a man or a woman is each type is set at 1/3. This gives a system of equations for the match probabilities. Even though there are nine parameters, I only have to estimate four and then the remainder are pinned down. 47. Online Appendix E discusses computational issues. 48. This was rejected using a significance level of 0.01. 49. The estimated parameters are for a static distribution, but the wages do change over time. The time trends in wages are estimated in a first stage and inputted into the model. For illegal wages, there is a constant time trend. For legal wages, the time trend depends on education. The state fixed effects are also estimated in a first stage. 50. This could be explained by climate, in that the weather in Illinois is much colder than in Texas or California. 51. An alternative way to control for this is to scale the number of payoff shocks by the population size at the destination. 52. One possible solution would be to allow for a completely different set of parameters for legal immigrants. I chose to not do this due to the computation time to estimate even more parameters. In addition, the counterfactuals focus on illegal immigrants, so it is less important to get a good fit of the data for legal migrants. 53. This counterfactual is limited because there is no unobserved heterogeneity over Mexican wages in the model. 54. LaLonde and Topel (1997), Smith and Edmonston (1997), and Borjas (1999) find a weak correlation between immigration inflows and wage changes for low skilled U.S. workers. Borjas et al. (1997) find larger effects. REFERENCES ADDA J. , DUSTMANN C. and GORLACH J.-S. 
(2015), "The Dynamics of Return Migration, Human Capital Accumulation, and Wage Assimilation" (Working Paper). ANGELUCCI M. (2012), "U.S. Border Enforcement and the Net Flow of Mexican Illegal Migration", Economic Development and Cultural Change, 60, 311–357. BLEJER M., JOHNSON H. and PORZECANSKI A. (1978), "An Analysis of the Economic Determinants of Legal and Illegal Mexican Migration to the United States", in Simon J. (ed.), Research in Population Economics: An Annual Compilation of Research (Greenwich, CT: JAI Press) 217–231. BOHN S. and PUGATCH T. (2015), "U.S. Border Enforcement and Mexican Immigrant Location Choice", Demography, 52, 1543–1570. BORJAS G. (1987), "Self-Selection and the Earnings of Immigrants", The American Economic Review, 77, 531–553. BORJAS G. (1999), "The Economic Analysis of Immigration", in Ashenfelter O. C. and Card D. (eds), Handbook of Labor Economics (North Holland: Elsevier) 1697–1760. BORJAS G. J., FREEMAN R. B. and KATZ L. F. (1997), "How Much do Immigration and Trade Affect Labor Market Outcomes", Brookings Papers on Economic Activity, 1, 1–90. CERRUTTI M. and MASSEY D. S. (2001), "On the Auspices of Female Migration from Mexico to the United States", Demography, 38, 187–200. CHIQUIAR D. and HANSON G. (2005), "International Migration, Self-Selection, and the Distribution of Wages: Evidence from Mexico and the United States", Journal of Political Economy, 113, 239–281. COLUSSI A. (2006), "Migrants' Networks: An Estimable Model of Illegal Mexican Migration" (Working Paper). CORNELIUS W. (1989), "Impacts of the 1986 U.S. Immigration Law on Emigration from Rural Mexican Sending Communities", Population and Development Review, 15, 689–705.
CRANE K., ASCH B., HEILBRUNN J. Z. et al. (1990), "The Effect of Employer Sanctions on the Flow of Undocumented Immigrants to the United States" (Discussion paper, Program for Research on Immigration Policy, the RAND Corporation (Report JRI-03) and the Urban Institute (UR Report 90-8)). CURRAN S. R. and RIVERO-FUENTES E. (2003), "Engendering Migrant Networks: The Case of Mexican Migration", Demography, 40, 289–307. DONATO K., DURAND J. and MASSEY D. (1992), "Stemming the Tide? Assessing the Deterrent Effect of the Immigration Reform and Control Act", Demography, 29, 139–157. DURAND J., MASSEY D. and ZENTENO R. (2001), "Mexican Immigration to the United States: Continuities and Changes", Latin American Research Review, 36, 107–127. ESPENSHADE T. (1990), "Undocumented Migration to the United States: Evidence from a Repeated Trials Model", in Bean F., Edmonston B. and Passel J. (eds), Undocumented Migration to the United States: IRCA and the Experience of the 1980's (Washington, DC: Urban Institute) 159–182. ESPENSHADE T. (1994), "Does the Threat of Border Apprehension Deter Undocumented U.S. Immigration", Population and Development Review, 20, 871–892. GATHMANN C. (2008), "Effects of Enforcement on Illegal Markets: Evidence from Migrant Smuggling Across the Southwestern Border", Journal of Public Economics, 92, 1926–1941. GEMICI A. (2011), "Family Migration and Labor Market Outcomes" (Working Paper). GREENWOOD M. J. (1997), "Internal Migration in Developed Countries", in Handbook of Population and Family Economics, pp. 647–720. HANSON G. (2005), Why Does Immigration Divide America: Public Finance and Political Opposition to Open Borders (Washington, DC: Institute for International Economics).
HANSON G. (2006), "Illegal Migration from Mexico to the United States", Journal of Economic Literature, 44, 869–924. HANSON G. and SPILIMBERGO A. (1999), "Illegal Immigration, Border Enforcement, and Relative Wages: Evidence from Apprehensions at the U.S.-Mexico Border", American Economic Review, 89, 1337–1357. HONG G. (2010), "U.S. and Domestic Migration Decisions of Mexican Workers" (Working Paper). IBARRARAN P. and LUBOTSKY D. (2005), "Mexican Immigration and Self-Selection: New Evidence from the 2000 Mexican Census" (NBER Working Paper No. 11456). KENNAN J. (2013), "Open Borders", Review of Economic Dynamics, 16, L1–L13. KENNAN J. and WALKER J. (2011), "The Effect of Expected Income on Individual Migration Decisions", Econometrica, 79, 211–251. KOSSOUDJI S. (1992), "Playing Cat and Mouse at the U.S.-Mexican Border", Demography, 29, 159–180. KOSSOUDJI S. and COBB-CLARK D. (2000), "IRCA's Impact on the Occupational Concentration and Mobility of Newly-Legalized Mexican Men", Journal of Population Economics, 13, 81–98. KOSSOUDJI S. and COBB-CLARK D. (2002), "Coming out of the Shadows: Learning about Legal Status and Wages from the Legalized Population", Journal of Labor Economics, 20(3), 598–628. KROGSTAD J. M., PASSEL J. and COHEN D. (2017), "5 Facts about Illegal Immigration in the U.S." (Pew Research Center). LACUESTA A. (2006), "Emigration and Human Capital: Who Leaves, Who Comes Back, and What Difference Does it Make?", Documentos de Trabajo No 0620, Banco de Espana. LALONDE R. and TOPEL R. (1997), "Economic Impact of International Migration and Migrants", in Rosenzweig M. R. and Stark O.
(eds), Handbook of Population and Family Economics (Elsevier Science) 799–850. LESSEM R. and NAKAJIMA K. (2015), "Immigrant Wages and Recessions: Evidence from Undocumented Mexicans" (Working Paper). LINDSTROM D. P. (1996), "Economic Opportunity in Mexico and Return Migration from the United States", Demography, 33(3), 357–374. MASKIN E. and TIROLE J. (1988), "A Theory of Dynamic Oligopoly, I: Overview and Quantity Competition with Large Fixed Costs", Econometrica, 56, 549–569. MASSEY D. S. (2007), "Understanding America's Immigration 'Crisis'", Proceedings of the American Philosophical Society, 151(3), 309–327. MASSEY D. S. and ESPINOSA K. E. (1997), "What's Driving Mexico-U.S. Migration? A Theoretical, Empirical, and Policy Analysis", The American Journal of Sociology, 102, 939–999. MCFADDEN D. (1973), "Conditional Logit Analysis of Qualitative Choice Behavior", in Zarembka P. (ed.), Frontiers in Econometrics (Academic Press). MEXICAN MIGRATION PROJECT (2011), "MMP128", mmp.opr.princeton.edu. MUNSHI K. (2003), "Networks in the Modern Economy: Mexican Migrants in the U.S. Labor Market", The Quarterly Journal of Economics, 118, 549–599. ORRENIUS P. and ZAVODNY M. (2005), "Self-Selection among Undocumented Immigrants from Mexico", Journal of Development Economics, 78, 215–240. PASSEL J., BEAN F. and EDMONSTON B. (1990), "Undocumented Migration since IRCA: An Overall Assessment", in Bean F., Edmonston B. and Passel J. (eds), Undocumented Migration to the United States: IRCA and the Experience of the 1980s (Washington, DC: Urban Institute) 251–265. RENDÓN S. and CUECUECHA A. (2010), "International Job Search: Mexicans in and out of the U.S.", Review of Economics of the Household, 8, 53–82.
REYES B. I. and MAMEESH L. (2002), "Why Does Immigrant Trip Duration Vary Across U.S. Destinations?", Social Science Quarterly, 83, 580–593. RUGGLES S., ALEXANDER J. T. and GENADEK K. (2010), "Integrated Public Use Microdata Series: Version 5.0" (Machine-readable database, Minneapolis: University of Minnesota). RUST J. (1987), "Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher", Econometrica, 55, 999–1033. SMITH J. and EDMONSTON B. (1997), The New Americans: Economic, Demographic and Fiscal Effects of Immigration (Washington, DC: National Academy Press). THOM K. (2010), "Repeated Circular Migration: Theory and Evidence from Undocumented Migrants" (Working Paper). © The Author(s) 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model). The Review of Economic Studies, Oxford University Press. Published: Oct 1, 2018.
Lessem, Rebecca (2018). "Mexico–U.S. Immigration: Effects of Wages and Border Enforcement." The Review of Economic Studies, 85(4).
Beat Frequency for Police Radar with Special Relativity

"A radar speed trap operates on a frequency $v_o = 109 Hz$. What is the beat frequency between the transmitted signal and one received after reflection from a car moving at v = 30 m/s toward the radar? Do calculations with accuracy to linear terms in v/c."

I'm not entirely sure how to approach this problem. I have a formula for the Doppler effect with time dilation, $f_{obs} = f_{source} \frac{\sqrt{1-\frac{v^2}{c^2}}}{1+\frac{v}{c}\cos\theta}$, where $v$ would be the speed of the source (or the signs could be switched if the observer is moving, right?). I also know I'm going to have to make two calculations - one for when the signal hits the car, and another for when it is reflected towards the source. To solve this problem, I tried using the formula above with v = 30 m/s to find the frequency that the observer would receive, but then the frequency would not change on the way back since the source is not moving (and is thus not subject to the Doppler effect), right? Does that mean that the beat frequency is simply the difference between the initial signal, $v_o = 109 Hz$, and the frequency the observer receives?

Tags: homework-and-exercises, special-relativity, frequency, doppler-effect

Comment: I don't think you need a relativistic calculation when the car is only travelling at 30 m/s. Just do a regular Doppler shift calculation. – John Rennie, Jan 30 '15 at 18:02

Comment: That formula is for the transverse Doppler effect, where the objects are moving perpendicular to the light at relativistic speeds. – Rick, Feb 2 '15 at 20:37

"but then the frequency would not change on the way back since the source is not moving (and is thus not subject to the doppler effect), right?" No, that's not right. The Doppler effect of sound depends on the relative velocities of both source and listener, compared to the medium. The Doppler effect for light depends on the relative velocity of the source and listener.
Then, upon reflection of the initial signal, the reflector becomes a source and the original source becomes a listener. So there are two frequency shifts: radar gun to car, and back to gun. Aside: Police radar doesn't operate at 109 Hz. Are you sure that it isn't 109 GHz? At the low frequency the amount of shift won't be large enough to reliably and quickly detect the speed of the car. – Bill N

Your formula is for using the Doppler shift at relativistic speeds. For this problem all of the speeds involved are much smaller than the speed of light, making the problem non-relativistic. Here's a derivation of a formula: The radar gun will produce waves with frequency $f_0$ traveling at the speed of light $c$ toward the car. This results in a wavelength $\lambda_0$ of $\frac{c}{f_0}$. The car will be hit with these $\lambda_0$ waves, and the time between peaks will be distance over relative speed, resulting in an observed frequency $f_1$ of $\frac{c+v}{\lambda_0}$. The car will reflect the wave at the same frequency, but this time the car will be moving in the same direction as the light, so this results in a wavelength $\lambda_1$ of $\frac{c-v}{f_1}$. This wavelength is then observed by the radar gun as a wave with frequency $f_2=\frac{c}{\lambda_1}$. Finally the radar gun measures the beat frequency of $|f_0-f_2|$. Combining all the equations: $$\lambda_0=\frac{c}{f_0}$$ $$f_1=\frac{f_0(c+v)}{c}$$ $$\lambda_1=\frac{c(c-v)}{f_0(c+v)}$$ $$f_2=f_0\frac{c+v}{c-v}$$ So the final beat frequency is: $$f_0\left|\frac{c+v}{c-v}-1\right|$$ Rearranging gives: $$2 f_0\left|\frac{\frac{v}{c}+(\frac{v}{c})^2}{1-(\frac{v}{c})^2}\right|$$ Now the question statement asks for an answer that only considers linear terms of $\frac{v}{c}$, so the square terms can be approximated as zero.
This leaves: $$2 f_0\left|\frac{v}{c}\right|$$ Note that if the radar were pointed at a car moving away, this could be represented as a negative $v$, which in this linearized form makes no difference, so radar can be used in either direction. – Rick
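The linearized result above can be checked numerically. The following sketch compares the exact two-shift expression with the first-order approximation; the operating frequency of 10 GHz is an assumed, illustrative value (typical of real traffic radar, since the 109 Hz quoted in the question is unrealistically low, as Bill N notes), while v = 30 m/s is taken from the problem.

```python
# Numerical check of the beat-frequency formulas derived above.
# f0 = 10 GHz is an assumed value; v = 30 m/s is from the problem.

C = 299_792_458.0  # speed of light, m/s

def beat_exact(f0, v):
    """Exact two-shift result: f0 * |(c + v)/(c - v) - 1|."""
    return f0 * abs((C + v) / (C - v) - 1.0)

def beat_linear(f0, v):
    """First-order approximation in v/c: 2 * f0 * |v/c|."""
    return 2.0 * f0 * abs(v) / C

f0 = 10e9  # Hz (assumption)
v = 30.0   # m/s, car approaching the radar

print(beat_exact(f0, v))   # ~2001.4 Hz
print(beat_linear(f0, v))  # ~2001.4 Hz
```

At 30 m/s the dropped terms of order $(v/c)^2$ are only about $10^{-7}$ of the leading term, which is why the linearized form is entirely adequate for traffic radar.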
Observational constraints on the sub-galactic matter-power spectrum from galaxy-galaxy strong gravitational lensing (1803.05952) D. Bayer, S. Vegetti (Kapteyn Astronomical Institute, University of Groningen, Groningen, the Netherlands; ASTRON, Netherlands Institute for Radio Astronomy, Dwingeloo, the Netherlands; Department of Physics and Astronomy, UCLA, Los Angeles, USA; Department of Physics, University of California, Davis, USA) March 15, 2018 astro-ph.CO, astro-ph.GA Constraining the sub-galactic matter-power spectrum on 1-10 kpc scales would make it possible to distinguish between the concordance $\Lambda$CDM model and various alternative dark-matter models due to the significantly different levels of predicted mass structure. Here, we demonstrate a novel approach to observationally constrain the population of overall low-mass density fluctuations in the inner regions of massive elliptical lens galaxies, based on the power spectrum of the associated surface-brightness perturbations observable in highly magnified galaxy-scale Einstein rings and gravitational arcs. The application of our method to the SLACS lens system SDSS J0252+0039 results in the following limits (at the 99 per cent confidence level) on the dimensionless convergence-power spectrum (and the associated standard deviation in aperture mass): $\Delta^{2}_{\delta\kappa}<1$ ($\sigma_{AM}< 0.8 \times 10^8 M_\odot$) on 0.5-kpc scale, $\Delta^{2}_{\delta\kappa}<0.1$ ($\sigma_{AM}< 1 \times 10^8 M_\odot$) on 1-kpc scale and $\Delta^{2}_{\delta\kappa}<0.01$ ($\sigma_{AM}< 3 \times 10^8 M_\odot$) on 3-kpc scale. The estimated effect of CDM sub-haloes lies considerably below these first observational upper-limit constraints on the level of inhomogeneities in the projected total mass distribution of galactic haloes.
Future analysis for a larger sample of galaxy-galaxy strong lens systems will narrow down these constraints and rule out all cosmological models predicting a significantly larger level of clumpiness on these critical sub-galactic scales. Spin transport in two-layer-CVD-hBN/graphene/hBN heterostructures (1712.00815) Mallikarjuna Gurram, Péter Makk, Christian Schönenberger (Physics of Nanodevices, Zernike Institute for Advanced Materials, University of Groningen, Groningen, The Netherlands; Department of Physics, University of Basel, Basel, Switzerland; Department of Materials Science and Engineering, College of Engineering, Peking University, Beijing, P.R. China) Dec. 3, 2017 cond-mat.mtrl-sci, cond-mat.mes-hall We study room temperature spin transport in graphene devices encapsulated between a layer-by-layer-stacked two-layer-thick chemical vapour deposition (CVD) grown hexagonal boron nitride (hBN) tunnel barrier, and a few-layer-thick exfoliated-hBN substrate. We find mobilities and spin-relaxation times comparable to that of SiO$_2$ substrate based graphene devices, and obtain a similar order of magnitude of spin relaxation rates for both the Elliott-Yafet and D'Yakonov-Perel' mechanisms. The behaviour of ferromagnet/two-layer-CVD-hBN/graphene/hBN contacts ranges from transparent to tunneling due to inhomogeneities in the CVD-hBN barriers. Surprisingly, we find both positive and negative spin polarizations for high-resistance two-layer-CVD-hBN barrier contacts with respect to the low-resistance contacts. Furthermore, we find that the differential spin injection polarization of the high-resistance contacts can be modulated by DC bias from -0.3 V to +0.3 V with no change in its sign, while its magnitude increases at higher negative bias. These features mark a distinctive spin injection nature of the two-layer-CVD-hBN compared to the bilayer-exfoliated-hBN tunnel barriers.
Observation of the decay $B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$ (1610.05187) LHCb collaboration: R. Aaij, Z. Ajaltouni, M. Alexander, A.A. Alves Jr, L. Anderlini, R.B. Appleby, A. Artamonov, M. Baalouch, A. Badalov, R.J. Barlow, V. Batozskaya, L. Beaucourt, V. Bellee, E. Ben-Haim, R. Bernet, M. van Beuzekom, T. Bird, F. Blanc, T. Boettcher, A. Borgheresi, M. Boubdir, S. Braun, C. Burr, R. Calabrese, P. Campana, L. Capriotti, A. Cardini, G. Casse, Ch. Cauet, Ph. Charpentier, S. Chen, X. Cid Vidal, H.V. Cliff, V. Cogoni, A. Comerma-Montells, G. Corti, G.A. Cowan, S. Cunliffe, J. Dalseno, K. De Bruyn, L. De Paula, D. Decamp, D. Derkach, H. Dijkstra, A. Dovbnya, K. Dungs, N. Déléage, V. Egorychev, R. Ekelhof, H.M. Evans, S. Farry, V. Fernandez Albor, F. Ferreira Rodrigues, M. Fiorini, F. Fleuret, D.C. Forshaw, C. Frei, A. Gallas Torreira, M. Gandelman, J. García Pardiñas, D. Gascon, E. Gersabeck, S. Gianì, K. Gizdov, A. Gomes (1, a), I.V. Gorelov, R. Graciani Diaz, E. Graverini, L. Grillo, Yu. Guz, C. Hadjivasiliou, B. Hamilton, S.T. Harnew, A. Heister, J.A. Hernando Morata, D. Hill, M. Hushchyn, P. Ilten, A. Jawahery, C.R. Jones, W. Kanso, M. Kelsey, E. Khairullin, S. Klaver, R.F. Koopman, L. Kravchuk, F. Kruse, V. Kudryavtsev, T. Kvaratskheliya, D. Lambert, C. Lazzeroni, J. Lefrançois, O. Leroy, T. Likhomanenko, X. Liu, M. Lucio Martinez, O. Lupton, O. Maev, G. Manca, J.F. Marchand, J. Marks, D. Martinez Santos, L.M. Massacrier, Z. Mathe, M. McCann, B. Meadows, A. Merli, D.S. Mitzel, S. Monteil, M.J. Morello, M. Mulder, K. Müller, R. Nandakumar, S. Neubert, C. Nguyen-Mau, T. Nikodem, A. Oblakowska-Mucha, C.J.G. Onderwater, A. Oyanguren, J. Panman, L.L. Pappalardo, A. Pastore (14, d), G.D. Patel, A. Pellegrino, P. Perret, A. Petrov, M. Pikies, S. Playfer, A. Poluektov, D. Popov, E. Price, C. Prouve, W. Qian, M. Rama, F. Redi, V. Renaudin, K. Rinnert, E. Rodrigues, A. Rogozhnikov, J.W. Ronayne, P. Ruiz Valls, B. Saitta, B. 
Sanmartin Sedes, M. Santimaria, C. Satriano (26, s), A. Satta, M. Schellenberg, M. Schmelling, A. Schopper, R. Schwemmer, A. Sergi, M. Shapkin, L. Shekhtman, R. Silva Coutinho, S. Simone, E. Smith, M.D. Sokoloff, B. Spaan, M. Stahl, S. Stemmle, S. Stone, U. Straumann, M. Szczekowski, T. Tekampe, E. Thomas, M. Tobin, S. Topp-Joergensen, K. Trabelsi, A. Trisovic, A. Ukleja, V. Vagnoni, A. Vallier, M. van Veghel, A. Venkateswaran, D. Vieira, A. Vollhardt, C. Voß, C. Wallace, H.M. Wark, M. Whitehead, M. Williams, F.F. Wilson, M. Witek, K. Wyllie, J. Yu, K.A. Zarebski, Y. Zhang, V. Zhukov, Rio de Janeiro, Brazil, , Rio de Janeiro, Brazil, Center for High Energy Physics, Tsinghua University, Beijing, China, LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France, Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France, CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France, LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France, LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France, Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany, Max-Planck-Institut für Kernphysik Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany, School of Physics, University College Dublin, Dublin, Ireland, Sezione INFN di Bologna, Bologna, Italy, Sezione INFN di Ferrara, Ferrara, Italy, Sezione INFN di Firenze, Firenze, Italy, Sezione INFN di Genova, Genova, Italy, Sezione INFN di Milano Bicocca, Milano, Italy, Sezione INFN di Padova, Padova, Italy, Sezione INFN di Roma Tor Vergata, Roma, Italy, Sezione INFN di Roma La Sapienza, Roma, Italy, Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland, AGH - University of Science, Technology, Faculty of Physics, Applied Computer Science, Kraków, Poland, , Warsaw, Poland, Horia Hulubei National Institute of Physics, Nuclear Engineering, Bucharest-Magurele, Romania, 
Petersburg Nuclear Physics Institute Institute of Theoretical, Experimental Physics Institute of Nuclear Physics, Moscow State University Institute for Nuclear Research of the Russian Academy of Sciences, Moscow, Russia, Budker Institute of Nuclear Physics, Novosibirsk, Russia, Novosibirsk, Russia, ICCUB, Universitat de Barcelona, Barcelona, Spain, Universidad de Santiago de Compostela, Santiago de Compostela, Spain, European Organization for Nuclear Research Ecole Polytechnique Fédérale de Lausanne Physik-Institut, Universität Zürich, Zürich, Switzerland, Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands, Nikhef National Institute for Subatomic Physics, VU University Amsterdam, Amsterdam, The Netherlands, NSC Kharkiv Institute of Physics, Technology Institute for Nuclear Research of the National Academy of Sciences University of Birmingham, Birmingham, United Kingdom, H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom, Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom, Department of Physics, University of Warwick, Coventry, United Kingdom, STFC Rutherford Appleton Laboratory, Didcot, United Kingdom, School of Physics, Astronomy, University of Edinburgh, Edinburgh, United Kingdom, School of Physics, Astronomy, University of Glasgow, Glasgow, United Kingdom, Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom, School of Physics, Astronomy, University of Manchester, Manchester, United Kingdom, Department of Physics, University of Oxford, Oxford, United Kingdom, Massachusetts Institute of Technology, Cambridge, MA, United States, University of Maryland, College Park, MD, United States, Syracuse University, Syracuse, NY, United States, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil, associated to University of Chinese Academy of Sciences, Beijing, China, associated to Institute of Particle Physics, Central China Normal University, Wuhan, 
Hubei, China, associated to Departamento de Fisica, Universidad Nacional de Colombia, Bogota, Colombia, associated to Institut für Physik, Universität Rostock, Rostock, Germany, associated to National Research Centre Kurchatov Institute, Moscow, Russia, associated to Instituto de Fisica Corpuscular, Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to Universidade Federal do Triângulo Mineiro, Laboratoire Leprince-Ringuet, Palaiseau, France, P.N. Lebedev Physical Institute, Russian Academy of Science, Università di Bologna, Bologna, Italy, Università di Cagliari, Cagliari, Italy, Università di Genova, Genova, Italy, Università di Milano Bicocca, Milano, Italy, Università di Roma La Sapienza, Roma, Italy, AGH - University of Science, Technology, Faculty of Computer Science, Electronics, Telecommunications, Kraków, Poland, LIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain, Hanoi University of Science, Hanoi, Viet Nam, Università di Pisa, Pisa, Italy, Università di Urbino, Urbino, Italy, Università della Basilicata, Potenza, Italy, Università di Modena e Reggio Emilia, Modena, Italy, Iligan Institute of Technology, Novosibirsk State University, Novosibirsk, Russia, Jan. 13, 2017 hep-ex The first observation of the rare decay $B^0_s \to \phi\pi^+\pi^-$ and evidence for $B^0 \to \phi\pi^+\pi^-$ are reported, using $pp$ collision data recorded by the LHCb detector at centre-of-mass energies $\sqrt{s} = 7$ and 8~TeV, corresponding to an integrated luminosity of $3{\mbox{\,fb}^{-1}}$. The branching fractions in the $\pi^+\pi^-$ invariant mass range $400<m(\pi^+\pi^-)<1600{\mathrm{\,Me\kern -0.1em V\!/}c^2}$ are $[3.48\pm 0.23\pm 0.17\pm 0.35]\times 10^{-6}$ and $[1.82\pm 0.25\pm 0.41\pm 0.14]\times 10^{-7}$ for $B^0_s \to \phi\pi^+\pi^-$ and $B^0 \to \phi\pi^+\pi^-$ respectively, where the uncertainties are statistical, systematic, and from the normalisation mode $B^0_s \to \phi\phi$.
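The three quoted uncertainties on each branching fraction (statistical, systematic, normalisation) are, if taken as independent, conventionally combined in quadrature. A minimal sketch, using the quoted values for $B^0_s \to \phi\pi^+\pi^-$ above; this is an illustration of the standard convention, not part of the LHCb analysis itself:

```python
import math

# Quoted uncertainties on B(Bs -> phi pi+ pi-) = 3.48 x 10^-6,
# in units of 10^-6: statistical, systematic, normalisation mode.
stat, syst, norm = 0.23, 0.17, 0.35

# Quadrature sum, assuming the three sources are independent.
total = math.sqrt(stat**2 + syst**2 + norm**2)
print(f"total uncertainty: {total:.2f} x 10^-6")  # -> 0.45 x 10^-6
```

The normalisation-mode term dominates, so improving the $B^0_s \to \phi\phi$ branching fraction would tighten the combined result the most.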
A combined analysis of the $\pi^+\pi^-$ mass spectrum and the decay angles of the final-state particles identifies the exclusive decays $B^0_s \to \phi f_0(980)$, $B_s^0 \to \phi f_2(1270)$, and $B^0_s \to \phi\rho^0$ with branching fractions of $[1.12\pm 0.16^{+0.09}_{-0.08}\pm 0.11]\times 10^{-6}$, $[0.61\pm 0.13^{+0.12}_{-0.05}\pm 0.06]\times 10^{-6}$ and $[2.7\pm 0.7\pm 0.2\pm 0.2]\times 10^{-7}$, respectively. The age of the young bulge-like population in the stellar system Terzan 5: linking the Galactic bulge to the high-z Universe (1609.01515) F.R. Ferraro, B. Lanzoni (1 DIFA, Univ. Bologna, 2 INAF-Bologna, 3 Kapteyn Astronomical Institute, Groningen, 4 UCLA) Sept. 6, 2016 astro-ph.GA, astro-ph.SR The Galactic bulge is dominated by an old, metal-rich stellar population. The possible presence and the amount of a young (a few Gyr old) minor component is one of the major issues debated in the literature. Recently, the bulge stellar system Terzan 5 was found to harbor three sub-populations with iron content varying by more than one order of magnitude (from 0.2 up to 2 times the solar value), with chemical abundance patterns strikingly similar to those observed in bulge field stars. Here we report on the detection of two distinct main sequence turn-off points in Terzan 5, providing the age of the two main stellar populations: 12 Gyr for the (dominant) sub-solar component and 4.5 Gyr for the component at super-solar metallicity. This discovery classifies Terzan 5 as a site in the Galactic bulge where multiple bursts of star formation occurred, thus suggesting a quite massive progenitor possibly resembling the giant clumps observed in star-forming galaxies at high redshifts. This connection opens a new route of investigation into the formation process and evolution of spheroids and their stellar content.
Plasma Propulsion of a Metallic Micro-droplet and its Deformation upon Laser Impact (1604.00214) Dmitry Kurilovich, Francesco Torretti, Wim Ubachs (Advanced Research Center for Nanolithography, Department of Physics, Astronomy, LaserLaB, Vrije Universiteit, Amsterdam, The Netherlands, Physics of Fluids Group, Faculty of Science, Technology, MESA+ Institute, University of Twente, Enschede, The Netherlands, Zernike Institute for Advanced Materials, University of Groningen, Groningen, The Netherlands) April 1, 2016 physics.flu-dyn, physics.plasm-ph The propulsion of a liquid indium-tin micro-droplet by nanosecond-pulse laser impact is experimentally investigated. We capture the physics of the droplet propulsion in a scaling law that accurately describes the plasma-imparted momentum transfer, enabling the optimization of the laser-droplet coupling. The subsequent deformation of the droplet is described by an analytical model that accounts for the droplet's propulsion velocity and the liquid properties. Comparing our findings to those from vaporization-accelerated mm-sized water droplets, we demonstrate that the hydrodynamic response of laser-impacted droplets is scalable and independent of the propulsion mechanism. A low-mass protostar's disk-envelope interface: disk-shadowing evidence from ALMA DCO+ observations of VLA1623 (1505.07761) Nadia M. Murillo, Ewine F. van Dishoeck, Shih-Ping Lai (2, 5), Christian M.
Fuchs (Max Planck Institute for Extraterrestrial Physics, Garching, Germany, Institute of Astronomy, Department of Physics, National Tsing Hua University, Hsinchu, Taiwan, Leiden Observatory, Leiden University, Leiden, The Netherlands, SRON Netherlands Institute for Space Research, Groningen, The Netherlands, Academia Sinica Institute of Astronomy, Astrophysics, Taipei, Taiwan, Institute of Astronautics, Technical University Munich, Munich, Germany) May 28, 2015 astro-ph.SR Due to instrumental limitations and a lack of disk detections, the structure between the envelope and the rotationally supported disk has been poorly studied. This is now possible with ALMA through observations of CO isotopologs and tracers of freeze-out. Class 0 sources are ideal for such studies given their almost intact envelope and young disk. The structure of the disk-envelope interface of the prototypical Class 0 source VLA1623A, which has a confirmed Keplerian disk, is constrained from ALMA observations of DCO+ 3-2 and C18O 2-1. The physical structure of VLA1623 is obtained from the large-scale SED and continuum radiative transfer. An analytic model using a simple network coupled with radial density and temperature profiles is used as input for a 2D line radiative transfer calculation for comparison with the ALMA Cycle 0 12m array and Cycle 2 ACA observations of VLA1623. DCO+ emission shows a clumpy structure bordering VLA1623A's Keplerian disk, suggesting a cold ring-like structure at the disk-envelope interface. The radial position of the observed DCO+ peak is reproduced in our model only if the region's temperature is between 11-16 K, lower than expected from models constrained by continuum and SED. Altering the density has little effect on the DCO+ position, but increased density is needed to reproduce the disk traced in C18O. The DCO+ emission around VLA1623A is the product of shadowing of the envelope by the disk.
Disk-shadowing causes a drop in the gas temperature outside of the disk on >200 AU scales, encouraging deuterated molecule production. This indicates that the physical structure of the disk-envelope interface differs from the rest of the envelope, highlighting the drastic impact that the disk has on the envelope and temperature structure. The results presented here show that DCO+ is an excellent cold-temperature tracer. Evidence of Cluster Structure of $^9$Be from $^3$He+$^9$Be Reaction (1504.03942) S.M. Lukyanov, W.H. Trzaska, V. Glagolev, Yu E. Penionzhkevich, K. Kuterbekov (Flerov Laboratory of Nuclear Reactions, Dubna, Russian Federation, KVI-CART, University of Groningen, Groningen, Netherlands, Nuclear Physics Institute, Řež, Czech Republic, Department of Physics, University of Jyväskylä, Jyväskylä, Finland, Khlopin Institute, St. Petersburg, Russian Federation, National Research Nuclear University "MEPhI", Moscow, Russian Federation, Nuclear Physics Institute, Almaty, Kazakhstan) April 15, 2015 nucl-ex The study of inelastic scattering and multi-nucleon transfer reactions was performed by bombarding a $^{9}$Be target with a $^3$He beam at an incident energy of 30 MeV. Angular distributions for $^9$Be($^3$He,$^3$He)$^{9}$Be, $^9$Be($^3$He,$^4$He)$^{8}$Be, $^9$Be($^3$He,$^5$He)$^{7}$Be, $^9$Be($^3$He,$^6$Li)$^6$Li and $^9$Be($^3$He,$^5$Li)$^7$Li reaction channels were measured. Experimental angular distributions for the corresponding ground states (g.s.) were analysed within the framework of the optical model, the coupled-channel approach and the distorted-wave Born approximation. Cross sections for channels leading to unbound $^5$He$_{g.s.}$, $^5$Li$_{g.s.}$ and $^8$Be systems were obtained from singles measurements where the relationship between the energy and the scattering angle of the observed stable ejectile is constrained by two-body kinematics. Information on the cluster structure of $^{9}$Be was obtained from the transfer channels.
It was concluded that cluster transfer is an important mechanism in the investigated nuclear reactions. In the present work an attempt was made to estimate the relative strengths of the interesting $^8$Be+$n$ and $^5$He+$\alpha$ cluster configurations in $^9$Be. The branching ratios have been determined, confirming that the $^5$He+$\alpha$ configuration plays an important role. The configuration of $^9$Be consisting of two bound helium clusters $^3$He+$^6$He is significantly suppressed, whereas the two-body configurations ${}^{8}$Be+$n$ and ${}^{5}$He+$\alpha$ including unbound $^8$Be and $^5$He are found to be more probable. Observation of anomalous Hanle spin precession lineshapes resulting from interaction with localized states (1411.1193) J. J. van den Berg (Physics of Nanodevices, Zernike Institute for Advanced Materials, University of Groningen, Groningen, The Netherlands, Institute of Electronic Materials Technology, Warsaw, Poland) Nov. 5, 2014 cond-mat.mes-hall It has been shown recently that in spin precession experiments, the interaction of spins with localized states can change the response to a magnetic field, leading to a modified, effective spin relaxation time and precession frequency. Here, we show that the shape of the Hanle curve can also change, so that it cannot be fitted with the solutions of the conventional Bloch equation. We present experimental data showing such an effect arising at low temperatures in epitaxial graphene on silicon carbide with localized states in the carbon buffer layer. We compare the strength of the effect between materials with different growth methods: epitaxial growth by sublimation and by chemical vapor deposition. The presented analysis gives information about the density of localized states and their coupling to the graphene states, which is inaccessible by charge transport measurements and can be applied to any spin transport channel that is coupled to localized states.
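For orientation on what a "conventional" Hanle curve looks like: in the simplest Bloch-equation limit the spin signal is suppressed by Larmor precession as a Lorentzian in field. A minimal sketch of that textbook lineshape (not the authors' anomalous curves, and `tau_s` here is an illustrative, hypothetical spin lifetime):

```python
import numpy as np

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def hanle_lorentzian(B, S0, tau_s, g=2.0):
    """Conventional Hanle curve in the simple Lorentzian limit:
    spin signal suppressed by precession at the Larmor frequency
    omega_L = g * mu_B * B / hbar during the spin lifetime tau_s."""
    omega_L = g * MU_B * B / HBAR
    return S0 / (1.0 + (omega_L * tau_s) ** 2)

# Example: sweep the perpendicular field for an assumed 100 ps lifetime.
B = np.linspace(-0.1, 0.1, 201)                    # field in tesla
curve = hanle_lorentzian(B, S0=1.0, tau_s=1e-10)   # peaks at B = 0
```

The half-width of this Lorentzian is set by $\omega_L \tau_s = 1$, which is why a modified effective lifetime (as induced by localized states) directly changes the apparent curve width; the anomalous lineshapes reported here deviate from this Lorentzian family altogether.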
Signatures of warm carbon monoxide in protoplanetary discs observed with Herschel SPIRE (1408.5432) M. H. D. van der Wiel, F. Ménard, K. M. Pontoppidan, J. S. Greaves (Kapteyn, Groningen, The Netherlands, UMI-FCA, France, U of Chile, Santiago, Chile, U Grenoble Alpes, IPAG, France, STScI, USA, ETH Zürich, Switzerland) Aug. 22, 2014 astro-ph.GA, astro-ph.SR Molecular gas constitutes the dominant mass component of protoplanetary discs. To date, these sources have not been studied comprehensively at the longest far-infrared and shortest submillimetre wavelengths. This paper presents Herschel SPIRE FTS spectroscopic observations toward 18 protoplanetary discs, covering the entire 450-1540 GHz (666-195 $\mu$m) range at R~400-1300. The spectra reveal clear detections of the dust continuum and, in six targets, a significant amount of spectral line emission primarily attributable to $^{12}$CO rotational lines. Other targets exhibit little to no detectable spectral lines. Low signal-to-noise detections also include signatures from $^{13}$CO, [CI] and HCN. For completeness, we present upper limits of non-detected lines in all targets, including low-energy transitions of H2O and CH$^+$ molecules. The ten $^{12}$CO lines that fall within the SPIRE FTS bands trace energy levels of ~50-500 K. Combined with lower and higher energy lines from the literature, we compare the CO rotational line energy distribution with detailed physical-chemical models, for sources where these are available and published. Our $^{13}$CO line detections in the disc around the Herbig Be star HD 100546 exceed, by factors of ~10-30, the values predicted by a model that matches a wealth of other observational constraints, including the SPIRE $^{12}$CO ladder. To explain the observed $^{12}$CO/$^{13}$CO ratio, it may be necessary to consider the combined effects of optical depth and isotope-selective (photo)chemical processes.
Considering the full sample of 18 objects, we find that the strongest line emission is observed in discs around Herbig Ae/Be stars, although not all show line emission. In addition, two of the six T Tauri objects exhibit detectable $^{12}$CO lines in the SPIRE range. Constraining the structure of the transition disk HD 135344B (SAO 206462) by simultaneous modeling of multiwavelength gas and dust observations (1403.6193) A. Carmona, F. Ménard, J. Olofsson, C. Martin-Zaïdi (UMI-FCA Universidad de Chile, ExoPlanets, Stellar Astrophysics Laboratory, Goddard Space Flight Center, Goddard Center for Astrobiology, Goddard Space Flight Center, Kapteyn Astronomical Institute, Groningen, Max Planck Institut für Astronomie, Heidelberg, University of California, Berkeley, Joint ALMA Observatory, Santiago) April 18, 2014 astro-ph.SR HD 135344B is an accreting (pre-)transition disk that displays the emission of warm CO extending tens of AU inside its 30 AU dust cavity. We used the dust radiative transfer code MCFOST and the thermochemical code ProDiMo to derive the disk structure from the simultaneous modeling of the spectral energy distribution (SED), VLT/CRIRES CO P(10) 4.75 micron, Herschel/PACS [O I] 63 micron, Spitzer-IRS, and JCMT 12CO J=3-2 spectra, VLTI/PIONIER H-band visibilities, and constraints from (sub-)mm continuum interferometry and near-IR imaging. We found a disk model able to describe the current observations simultaneously. This disk has the following structure. (1) To reproduce the SED, the near-IR interferometry data, and the CO ro-vibrational emission, refractory grains (we suggest carbon) are present inside the silicate sublimation radius (0.08<R<0.2 AU). (2) The dust cavity (R<30 AU) is filled with gas; the surface density of this gas must increase with radius to fit the CO P(10) line profile; a small gap of a few AU in the gas is compatible with current data, and a large gap in the gas is not likely.
(4) The gas/dust ratio inside the cavity is > 100 to account for the 870 micron continuum upper limit and the CO P(10) line flux. (5) The gas/dust ratio at 30<R<200 AU is < 10 to simultaneously describe the [O I] 63 micron line flux and the CO P(10) line profile. (6) In the outer disk, most of the mass should be located in the midplane, and a significant fraction of the dust is in large grains. Conclusions: Simultaneous modeling of the gas and dust is required to break the model degeneracies and constrain the disk structure. An increasing gas surface density with radius in the inner dust cavity echoes the effect of a migrating Jovian planet. The low gas mass (a few Jupiter masses) in the HD 135344B disk suggests that it is an evolved disk that has already lost a large portion of its mass. Resolved Imaging of the HR 8799 Debris Disk with Herschel (1311.2977) Brenda C. Matthews, Mark Booth, Bruce Macintosh (National Research Council of Canada Herzberg Astronomy and Astrophysics, BC, Canada, University of Victoria, BC, Canada, Institute of Astronomy, University of Cambridge, United Kingdom, SRON Netherlands Institute for Space Research, Groningen, the Netherlands, Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Santiago, Chile, Lawrence Livermore National Labs, CA, U.S.A.) Nov. 12, 2013 astro-ph.SR, astro-ph.EP We present Herschel far-infrared and submillimeter maps of the debris disk associated with the HR 8799 planetary system. We resolve the outer disk emission at 70, 100, 160 and 250 um and detect the disk at 350 and 500 um. A smooth model explains the observed disk emission well. We observe no obvious clumps or asymmetries associated with the trapping of planetesimals that is a potential consequence of planetary migration in the system. We estimate that the disk eccentricity must be <0.1. As in previous work by Su et al.
(2009), we find a disk with three components: a warm inner component and two outer components, a planetesimal belt extending from 100 - 310 AU, with some flexibility (+/- 10 AU) on the inner edge, and the external halo which extends to ~2000 AU. We measure the disk inclination to be 26 +/- 3 deg from face-on at a position angle of 64 deg E of N, establishing that the disk is coplanar with the star and planets. The SED of the disk is well fit by blackbody grains whose semi-major axes lie within the planetesimal belt, suggesting an absence of small grains. The wavelength at which the spectrum steepens from blackbody, 47 +/- 30 um, however, is short compared to other A star debris disks, suggesting that there are atypically small grains likely populating the halo. The PACS longer wavelength data yield a lower disk color temperature than do MIPS data (24 and 70 um), implying two distinct halo dust grain populations. The HIFI spectral survey of AFGL2591 (CHESS). I. Highly excited linear rotor molecules in the high-mass protostellar envelope (1303.3339) M. H. D. van der Wiel, F. F. S. van der Tak Kapteyn, Groningen, NL, LERMA, Paris, FR, March 14, 2013 astro-ph.GA, astro-ph.SR We aim to reveal the gas energetics in the circumstellar environment of the prototypical high-mass protostellar object AFGL2591 using space-based far-infrared observations of linear rotor molecules. Rotational spectral line signatures of CO, HCO+, CS, HCN and HNC from a 490-1240 GHz survey with Herschel/HIFI, complemented by ground-based JCMT and IRAM 30m spectra, cover transitions with E(up)/k between 5 and ~300 K (750K for 12C16O, using selected frequency settings up to 1850 GHz). The resolved spectral line profiles are used to separate and study various kinematic components. The line profiles show two emission components, the widest and bluest of which is attributed to an approaching outflow and the other to the envelope. 
We find evidence for progressively more redshifted and wider line profiles from the envelope gas with increasing energy level, qualitatively explained by residual outflow contribution picked up in the systematically decreasing beam size. Integrated line intensities for each species decrease as E(up)/k increases from <50 to 700K. We constrain the following: n(H2)~10^5-10^6 cm^-3 and T~60-200K for the outflow gas; T=9-17K and N(H2)~3x10^21 cm^-2 for a known foreground absorption cloud; N(H2)<10^19 cm^-2 for a second foreground component. Our spherical envelope radiative transfer model systematically underproduces observed line emission at E(up)/k > 150 K for all species. This indicates that warm gas should be added to the model and that the model's geometry should provide low optical depth pathways for line emission from this warm gas to escape, for example in the form of UV heated outflow cavity walls viewed at a favorable inclination angle. Physical and chemical conditions derived for the outflow gas are similar to those in the protostellar envelope, possibly indicating that the modest velocity (<10 km/s) outflow component consists of recently swept-up gas. LOFAR detections of low-frequency radio recombination lines towards Cassiopeia A (1302.3128) Ashish Asgekar, R. J. van Weeren, J. Anderson, M. E. Bell, G. Bernardi, F. Breitling, M. Bruggen, J. E. Conway, M. de Vos, C. Ferrari, J-M. Griesmeier, J. W. T. Hessels, E. Juette, V. I. Kondratiev, J. van Leeuwen, D. McKay-Bukowski, J. D. Mol, M. J. Norden, R. Pizzo, B. Scheers, C. Sobey, R. Vermeulen, O. 
Wucknitz (Kapteyn Astronomical Institute, Groningen, Leiden Observatory, The Netherlands, ICRAR Curtin University, Australia, Institute for Astronomy, University of Edinburgh, UK, Research School of Astronomy, Astrophysics, ANU, Mt Stromlo Obs., Australia, Department of Astrophysics/IMAPP, Radboud University, The Netherlands, Onsala Space Observatory, Chalmers University of Technology, Sweden, Sodankylä Geophysical Observatory, University of Oulu, Finland, Astronomical Institute 'Anton Pannekoek', UvA, The Netherlands, SRON Netherlands Institute for Space Research, Centre de Recherche Astrophysique de Lyon, France, School of Physics, Astronomy, University of Southampton, UK, Centrum Wiskunde & Informatica, Amsterdam, Laboratoire Lagrange, Université de Nice, France, CIT, University of Groningen, The Netherlands, Laboratoire de Physique et Chimie de l'Environnement et de l'Espace, France, RAL, UC Berkeley, USA, Argelander-Institut für Astronomie, Bonn, Germany, Astro Space Center of the Lebedev Physical Institute, Russia, Thüringer Landessternwarte, Tautenburg, Germany, Leibniz-Institut für Astrophysik Potsdam, MPIfR, Bonn, Astronomisches Institut der Ruhr-Universität, Germany, Station de Radioastronomie de Nançay, France, CSIRO Australia Telescope National Facility) Feb. 13, 2013 astro-ph.GA Cassiopeia A was observed using the Low-Band Antennas of the LOw Frequency ARray (LOFAR) with high spectral resolution. This allowed a search for radio recombination lines (RRLs) along the line-of-sight to this source. Five carbon-alpha RRLs were detected in absorption between 40 and 50 MHz with a signal-to-noise ratio of > 5 from two independent LOFAR datasets. The derived line velocities (v_LSR ~ -50 km/s) and integrated optical depths (~ 13 s^-1) of the RRLs in our spectra, extracted over the whole supernova remnant, are consistent within each LOFAR dataset and with those previously reported.
For the first time, we are able to extract spectra against the brightest hotspot of the remnant at frequencies below 330 MHz. These spectra show significantly higher (15-80%) integrated optical depths, indicating that there is small-scale angular structure on the order of ~1 pc in the absorbing gas distribution over the face of the remnant. We also place an upper limit of 3 x 10^-4 on the peak optical depths of hydrogen and helium RRLs. These results demonstrate that LOFAR has the desired spectral stability and sensitivity to study faint recombination lines in the decameter band. The cooling phase of Type I X-ray bursts observed with RXTE in 4U 1820-30 does not follow the canonical F - T^4 relation (1301.1035) Federico García (IAR-CONICET, Argentina, Kapteyn Astronomical Institute, Groningen, The Netherlands) Jan. 6, 2013 astro-ph.HE We analysed the complete set of bursts from the neutron-star low-mass X-ray binary 4U 1820-30 detected with the Rossi X-ray Timing Explorer (RXTE). We found that all are photospheric radius expansion bursts, and have similar duration, peak flux and fluence. From the analysis of time-resolved spectra during the cooling phase of the bursts, we found that the relation between the bolometric flux and the temperature is very different from the canonical F - T^4 relation that is expected if the apparent emitting area on the surface of the neutron star remains constant. The flux-temperature relation can be fitted using a broken power law, with indices 2.0$\pm$0.3 and 5.72$\pm$0.06. The departure from the F - T^4 relation during the cooling phase of the X-ray bursts in 4U 1820-30 could be due to changes in the emitting area of the neutron star while the atmosphere cools down, variations in the colour-correction factor due to chemical evolution, or the presence of a source of heat, e.g. residual hydrogen nuclear burning, playing an important role when the burst emission ceases.
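The broken power-law fit mentioned above can be sketched generically: a flux-temperature relation with two indices joined at a break. This is a minimal illustration on synthetic data (the break temperature, flux normalisation, and noise level are hypothetical; only the two indices, 2.0 and 5.72, are taken from the abstract), not the paper's actual spectral analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(T, F_b, T_b, idx_lo, idx_hi):
    """Bolometric flux vs. blackbody temperature: slope idx_lo below
    the break T_b and idx_hi above it, continuous at F(T_b) = F_b."""
    T = np.asarray(T, dtype=float)
    return np.where(T < T_b,
                    F_b * (T / T_b) ** idx_lo,
                    F_b * (T / T_b) ** idx_hi)

# Synthetic cooling-track data (illustrative values only).
rng = np.random.default_rng(0)
T_obs = np.linspace(1.5, 3.0, 40)                     # temperature, arbitrary units
F_true = broken_power_law(T_obs, 1.0, 2.2, 2.0, 5.72)
F_obs = F_true * rng.normal(1.0, 0.05, T_obs.size)    # 5% multiplicative noise

# Recover the two indices and the break from the noisy track.
popt, pcov = curve_fit(broken_power_law, T_obs, F_obs,
                       p0=[1.0, 2.0, 2.0, 5.0], maxfev=10000)
```

A canonical constant-area cooling track would instead follow a single index of 4 ($F \propto T^4$); the two distinct recovered slopes are what signal the departure discussed in the abstract.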
Study of $J/\psi\to p\bar{p}$ and $J/\psi\to n\bar{n}$ (1205.1036) BESIII Collaboration: M. Ablikim, D. J. Ambrose, R. B. Ferroli, J. M. Bian, R. A. Briere, J. F. Chang, M. L. Chen, H. P. Cheng, D. Dedovich, M. Destefanis, M. Y. Dong, F. Feldbauer, C. Geng, M. H. Gu, Y.P. Guo, M. He, H. M. Hu, J. S. Huang, Q. Ji, X. S. Jiang, F. F. Jing, W. Lai, Cui Li, K. Li, W. D. Li, Z. B. Li, G. R. Liao, C. X. Liu, H. B. Liu, Kun Liu, X. Liu, Zhiqiang Liu, J. G. Lu, M. X. Luo, H. L. Ma, F. E. Maas, Y. J. Mao, R. E. Mitchell, N. Yu. Muchnoi, Z. Ning, J. W. Park, R. Poling, C. F. Qiao, K. H. Rashid, J. Schulze, M. R. Shepherd, D. H. Sun, Y. J. Sun, X. Tang, G. S. Varner, L. S. Wang, Q. J. Wang, Y. F. Wang, D. H. Wei, M. W. Werner, W. Wu, Q. L. Xiu, Y. Xu, Y. H. Yan, H. Ye, S. P. Yu, A. A. Zafar, C. C. Zhang, J. G. Zhang, J. Z. Zhang, X. Y. Zhang, Z. P. Zhang, K. X. Zhao, S. J. Zhao, A. Zhemchugov, Z. P. Zheng, X. R. Zhou, X. L. Zhu, J. Zhuang Institute of High Energy Physics, Beijing, P. R. China, Bochum Ruhr-University, Bochum, Germany, China Center of Advanced Science, Technology, Beijing, P. R. China, G.I. Budker Institute of Nuclear Physics SB RAS Graduate University of Chinese Academy of Sciences, Beijing, P. R. China, GSI Helmholtzcentre for Heavy Ion Research GmbH, Darmstadt, Germany, Guangxi Normal University, Guilin, P. R. China, GuangXi University, Nanning, P.R.China, Hangzhou Normal University, Hangzhou, P. R. China, Henan Normal University, Xinxiang, P. R. China, Henan University of Science, Technology, Luoyang, P. R. China, Huazhong Normal University, Wuhan, P. R. China, Hunan University, Changsha, P. R. China, Indiana University, Bloomington, Indiana, USA, Johannes Gutenberg University of Mainz, Mainz, Germany, Joint Institute for Nuclear Research, Dubna, Russia, KVI/University of Groningen, Groningen, The Netherlands, Liaoning University, Shenyang, P. R. China, Nanjing Normal University, Nanjing, P. R. China, Nankai University, Tianjin, P. R. 
China, Peking University, Beijing, P. R. China, Shandong University, Jinan, P. R. China, Shanxi University, Taiyuan, P. R. China, Soochow University, Suzhou, China, The Chinese University of Hong Kong, Shatin, Hong Kong, The University of Hong Kong, Pokfulam, Hong Kong, Tsinghua University, Beijing, P. R. China, University of Hawaii, Honolulu, Hawaii, USA, University of Minnesota, Minneapolis, MN, USA, University of Science, Technology of China, Hefei, P. R. China, University of South China, Hengyang, P. R. China, University of the Punjab, Lahore, Pakistan, Wuhan University, Wuhan, P. R. China, Zhejiang University, Hangzhou, P. R. China, Aug. 9, 2012 hep-ex The decays $J/\psi\to p\bar{p}$ and $J/\psi\to n\bar{n}$ have been investigated with a sample of 225.2 million $J/\psi$ events collected with the BESIII detector at the BEPCII $e^+e^-$ collider. The branching fractions are determined to be $\mathcal{B}(J/\psi\to p\bar{p})=(2.112\pm0.004\pm0.031)\times10^{-3}$ and $\mathcal{B}(J/\psi\to n\bar{n})=(2.07\pm0.01\pm0.17)\times10^{-3}$. Distributions of the angle $\theta$ between the proton or anti-neutron and the beam direction are well described by the form $1+\alpha\cos^2\theta$, and we find $\alpha=0.595\pm0.012\pm0.015$ for $J/\psi\to p\bar{p}$ and $\alpha=0.50\pm0.04\pm0.21$ for $J/\psi\to n\bar{n}$. Our branching-fraction results suggest a large phase angle between the strong and electromagnetic amplitudes describing the $J/\psi\to N\bar{N}$ decay. Two-photon widths of the $\chi_{c0,2}$ states and helicity analysis for $\chi_{c2}\to\gamma\gamma$ (1205.4284) May 19, 2012 hep-ex Based on a data sample of 106 M $\psi^{\prime}$ events collected with the BESIII detector, the decays $\psi^{\prime}\to\gamma\chi_{c0,2}$ and $\chi_{c0,2}\to\gamma\gamma$ are studied to determine the two-photon widths of the $\chi_{c0,2}$ states.
The two-photon decay branching fractions are determined to be ${\cal B}(\chi_{c0}\to\gamma\gamma) = (2.24\pm 0.19\pm 0.12\pm 0.08)\times 10^{-4}$ and ${\cal B}(\chi_{c2}\to\gamma\gamma) = (3.21\pm 0.18\pm 0.17\pm 0.13)\times 10^{-4}$. From these, the two-photon widths are determined to be $\Gamma_{\gamma \gamma}(\chi_{c0}) = (2.33\pm0.20\pm0.13\pm0.17)$ keV, $\Gamma_{\gamma \gamma}(\chi_{c2}) = (0.63\pm0.04\pm0.04\pm0.04)$ keV, and ${\cal R}=\Gamma_{\gamma \gamma}(\chi_{c2})/\Gamma_{\gamma \gamma}(\chi_{c0})=0.271\pm 0.029\pm 0.013\pm 0.027$, where the uncertainties are statistical, systematic, and those from the PDG ${\cal B}(\psi^{\prime}\to\gamma\chi_{c0,2})$ and $\Gamma(\chi_{c0,2})$ errors, respectively. The ratio of the two-photon widths for the helicity $\lambda=0$ and helicity $\lambda=2$ components in the decay $\chi_{c2}\to\gamma\gamma$ is measured for the first time to be $f_{0/2} =\Gamma^{\lambda=0}_{\gamma\gamma}(\chi_{c2})/\Gamma^{\lambda=2}_{\gamma\gamma}(\chi_{c2}) = 0.00\pm0.02\pm0.02$. The infant Milky Way (1204.1943) Stefania Salvadori (Kapteyn Astronomical Institute, Groningen, The Netherlands, Scuola Normale Superiore, Pisa, Italy) April 9, 2012 astro-ph.CO We investigate the physical properties of the progenitors of present-day Milky Way-like galaxies that are visible as Damped Lya Absorption systems and Lya Emitters at higher redshifts (z ~ 2.3, 5.7). To this aim we use a statistical merger-tree approach that follows the formation of the Galaxy and its dwarf satellites in a cosmological context, tracing the chemical evolution and stellar population history of the progenitor halos. The model accounts for the properties of the most metal-poor stars and local dwarf galaxies, providing insights into early cosmic star formation. Fruitful links between Galactic Archaeology and more distant galaxies are presented.
Stellar archeology: a cosmological view of dwarf galaxies (1204.1946) Stefania Salvadori (Kapteyn Astronomical Institute, Groningen, The Netherlands) April 9, 2012 astro-ph.CO, astro-ph.GA The origin of dwarf spheroidal galaxies (dSphs) is investigated in a global cosmological context by simultaneously following the evolution of the Milky Way Galaxy and its dwarf satellites. This approach enables us to study the formation of dSphs in their proper birth environment and to reconstruct their own merging histories. The proposed picture simultaneously accounts for several dSph and Milky Way properties, including the Metallicity Distribution Functions of metal-poor stars. The observed features are interpreted in terms of physical processes acting at high redshifts. First stars in Damped Lyman Alpha systems (1111.6637) Stefania Salvadori (Kapteyn Astronomical Institute, Groningen, The Netherlands, Scuola Normale Superiore, Pisa, Italy) Nov. 28, 2011 astro-ph.CO In order to characterize Damped Lyman Alpha systems (DLAs) potentially hosting first stars, we present a novel approach to investigate DLAs in the context of Milky Way (MW) formation, along with their connection with the most metal-poor stars and local dwarf spheroidal (dSph) galaxies. The merger tree method previously developed is extended to include inhomogeneous reionization and metal mixing, and it is validated by matching both the Metallicity Distribution Function of Galactic halo stars and the Fe-Luminosity relation of dSph galaxies. The model explains the observed NHI-Fe relation of DLAs along with the chemical abundances of [Fe/H] < -2 systems. In this picture, the recently discovered z_abs ~ 2.34 C-enhanced DLA (Cooke et al. 2011a) pertains to a new class of absorbers hosting first stars along with second-generation long-lived low-mass stars.
These "PopIII DLAs" are the descendants of H2-cooling minihalos with Mh ~ 10^7 Msun that virialize at z > 8 in neutral, primordial regions of the MW environment and passively evolve after a short initial period of star formation. The gas in these systems is warm, Tg ~ (40-1000) K, and strongly C-enriched by long-lived, extremely metal-poor stars of total mass M* ~ 10^(2-4) Msun. The JCMT Spectral Legacy Survey: physical structure of the molecular envelope of the high-mass protostar AFGL2591 (1101.5529) M.H.D. van der Wiel, M. Spaans (Kapteyn, Groningen, NL; Calgary, CA) Jan. 28, 2011 astro-ph.GA The understanding of the formation process of massive stars (>8 Msun) is limited, due to theoretical complications and observational challenges. We investigate the physical structure of the large-scale (~10^4-10^5 AU) molecular envelope of the high-mass protostar AFGL2591 using spectral imaging in the 330-373 GHz regime from the JCMT Spectral Legacy Survey. Out of ~160 spectral features, this paper uses the 35 that are spatially resolved. The observed spatial distributions of a selection of six species are compared with radiative transfer models based on a static spherically symmetric structure, a dynamic spherical structure, and a static flattened structure. The maps of CO and its isotopic variations exhibit elongated geometries on scales of ~100", and smaller scale substructure is found in maps of N2H+, o-H2CO, CS, SO2, CCH, and methanol lines. A velocity gradient is apparent in maps of all molecular lines presented here, except SO, SO2, and H2CO. We find two emission peaks in warm (Eup~200K) methanol separated by 12", indicative of a secondary heating source in the envelope. The spherical models are able to explain the distribution of emission for the optically thin H13CO+ and C34S, but not for the optically thick HCN, HCO+, and CS, nor for the optically thin C17O.
The introduction of velocity structure mitigates the optical depth effects, but does not fully explain the observations, especially in the spectral dimension. A static flattened envelope viewed at a small inclination angle does slightly better. We conclude that a geometry of the envelope other than an isotropic static sphere is needed to circumvent line optical depth effects. We propose that this could be achieved in envelope models with an outflow cavity and/or inhomogeneous structure at scales smaller than ~10^4 AU. The picture of inhomogeneity is supported by observed substructure in at least six species. Parsec-Scale Bipolar X-ray Shocks Produced by Powerful Jets from the Neutron Star Circinus X-1 (1008.0647) P. H. Sell, V. Tudose (3, 4, 5), P. Soleri, N. S. Schulz, M. van der Klis Department of Astronomy, University of Wisconsin-Madison, Madison, WI, USA, School of Physics, Astronomy, University of Southampton, Southampton, UK, Netherlands Institute for Radio Astronomy, Dwingeloo, The Netherlands, Astronomical Institute of the Romanian Academy, Bucharest, Romania, Research Center for Atomic Physics, Astrophysics, Bucharest, Romania, Kapteyn Astronomical Institute, University of Groningen, Groningen, The Netherlands, SRON, Netherlands Institute for Space Research, Utrecht, The Netherlands, Department of Astrophysics, IMAPP, Radboud University Nijmegen, Nijmegen, the Netherlands, Harvard--Smithsonian Center for Astrophysics, Cambridge, MA, USA, Kavli Institute for Astrophysics, Space Research, Massachusetts Institute of Technology, Cambridge, MA, USA, Department of Astronomy, Astrophysics, The Pennsylvania State University, University Park, PA, USA, Astronomical Institute Anton Pannekoek, University of Amsterdam, Amsterdam, The Netherlands) Aug. 3, 2010 astro-ph.HE We report the discovery of multi-scale X-ray jets from the accreting neutron star X-ray binary, Circinus X-1. 
The bipolar outflows show wide opening angles and are spatially coincident with the radio jets seen in new high-resolution radio images of the region. The morphology of the emission regions suggests that the jets from Circinus X-1 are running into a terminal shock with the interstellar medium, as is seen in powerful radio galaxies. This and other observations indicate that the jets have a wide opening angle, suggesting that the jets are either not very well collimated or precessing. We interpret the spectra from the shocks as cooled synchrotron emission and derive a cooling age of approximately 1600 yr. This allows us to constrain the jet power to be between 3e35 erg/s and 2e37 erg/s, making this one of a few microquasars with a direct measurement of its jet power and the only known microquasar that exhibits stationary large-scale X-ray emission. Fossil evidence for spin alignment of SDSS galaxies in filaments (1001.4479) Bernard J.T. Jones, Miguel A. Aragon-Calvo (Kapteyn Astronomical Institute, University of Groningen, Groningen, the Netherlands; Dept. Physics & Astronomy, the Johns Hopkins University, Baltimore, U.S.A.) July 5, 2010 astro-ph.CO We search for and find fossil evidence that the spin axes of galaxies in cosmic web filaments are not randomly oriented relative to their host filaments. This would indicate that the action of large-scale tidal torques affected the alignments of galaxies located in cosmic filaments. To this end, we constructed a catalogue of clean filaments containing edge-on galaxies. We started by applying the Multiscale Morphology Filter (MMF) technique to the galaxies in a redshift-distortion corrected version of the Sloan Digital Sky Survey DR5. From that sample we extracted those 426 filaments that contained edge-on galaxies (b/a < 0.2). These filaments were then visually classified relative to a variety of quality criteria.
Statistical analysis using "feature measures" indicates that the distribution of orientations of these edge-on galaxies relative to their parent filament deviates significantly from what would be expected on the basis of a random distribution of orientations. The interpretation of this result may not be immediately apparent, but it is easy to identify a population of 14 objects whose spin axes are aligned perpendicular to the spine of the parent filament (cos theta < 0.2). The candidate objects are found in relatively less dense filaments. This might be expected since galaxies in such locations suffer less interaction with surrounding galaxies, and consequently better preserve their tidally induced orientations relative to the parent filament. The technique of searching for fossil evidence of alignment yields relatively few candidate objects, but it does not suffer from the dilution effects inherent in correlation analysis of large samples. Imaging of a Transitional Disk Gap in Reflected Light: Indications of Planet Formation Around the Young Solar Analog LkCa 15 (1005.5162) C. Thalmann, M. Janson, G. D. Mulders, K. W. Hodapp, M. Feldt, Y. Hayano, R. Kandori, T. Matsuo, T.-S. Pyo, M. Takami, E. L. Turner (11, 20), M. Watanabe, M.
Tamura Max Planck Institute for Astronomy, Heidelberg, Germany, University of Washington, Seattle, Washington, USA, University of Toronto, Toronto, Canada, Faculty of Science, Kanagawa University, Kanagawa, Japan, Astronomical Institute "Anton Pannekoek", University of Amsterdam, Amsterdam, The Netherlands, SRON Netherlands Institute for Space Research, Groningen, The Netherlands, Astronomical Institute, University of Utrecht, Utrecht, The Netherlands, Department of Astrophysical Sciences, Princeton University, Princeton, USA, College of Charleston, Charleston, South Carolina, USA, Laboratoire Hippolyte Fizeau, Nice, France, Subaru Telescope, Hilo, Hawai`i, USA, University of Tokyo, Tokyo, Japan, JPL, California Institute of Technology, Pasadena, USA, Institute of Astronomy, Astrophysics, Academia Sinica, Taipei, Taiwan, Institute for the Physics, Mathematics of the Universe, University of Tokyo, Japan, Department of Cosmosciences, Hokkaido University, Sapporo, Japan, Astronomical Institute, Tohoku University, Sendai, Japan) July 1, 2010 astro-ph.SR, astro-ph.EP We present H- and Ks-band imaging data resolving the gap in the transitional disk around LkCa 15, revealing the surrounding nebulosity. We detect sharp elliptical contours delimiting the nebulosity on the inside as well as the outside, consistent with the shape, size, ellipticity, and orientation of starlight reflected from the far-side disk wall, whereas the near-side wall is shielded from view by the disk's optically thick bulk. We note that forward-scattering of starlight on the near-side disk surface could provide an alternate interpretation of the nebulosity. In either case, this discovery provides confirmation of the disk geometry that has been proposed to explain the spectral energy distributions (SED) of such systems, comprising an optically thick outer disk with an inner truncation radius of ~46 AU enclosing a largely evacuated gap. 
Our data show an offset of the nebulosity contours along the major axis, likely corresponding to a physical pericenter offset of the disk gap. This reinforces the leading theory that dynamical clearing by at least one orbiting body is the cause of the gap. Based on evolutionary models, our high-contrast imagery imposes an upper limit of 21 Jupiter masses on companions at separations outside of 0.1" and of 13 Jupiter masses outside of 0.2". Thus, we find that a planetary system around LkCa 15 is the most likely explanation for the disk architecture. High redshift Lya emitters: clues on the Milky Way infancy (1005.4422) S. Salvadori (Kapteyn Astronomical Institute, Groningen, The Netherlands; SISSA/ISAS, Trieste, Italy) May 24, 2010 astro-ph.CO With the aim of determining if Milky Way (MW) progenitors could be identified as high redshift Lyman Alpha Emitters (LAEs) we have derived the intrinsic properties of z ~ 5.7 MW progenitors, which are then used to compute their observed Lyman-alpha luminosity, L_alpha, and equivalent width, EW. MW progenitors visible as LAEs are selected according to the canonical observational criterion, L_alpha > 10^42 erg/s and EW > 20 A. Progenitors of MW-like galaxies have L_alpha = 10^(39-43.25) erg/s, making some of them visible as LAEs. In any single MW merger tree realization, typically only 1 (out of ~ 50) progenitor meets the LAE selection criterion, but the probability to have at least one LAE is very high, P = 68%. The identified LAEs have stellar ages t_* ~ 150-400 Myr at z ~ 5.7, with the exception of five small progenitors with t_* < 5 Myr and large EW = 60-130 A. LAE MW progenitors provide > 10% of the halo very metal-poor stars with [Fe/H] < -2, thus establishing a potentially fruitful link between high-z galaxies and the Local Universe. Chemical stratification in the Orion Bar: JCMT Spectral Legacy Survey observations (0902.1433) Matthijs H. D. van der Wiel, Floris F. S. van der Tak (2, 1), Volker Ossenkopf, Gary A.
Fuller (SRON, Groningen, NL; Calgary, CA) Photon-dominated regions (PDRs) are expected to show a layered structure in molecular abundances and emerging line emission, which is sensitive to the physical structure of the region as well as the UV radiation illuminating it. We aim to study this layering in the Orion Bar, a prototypical nearby PDR with a favorable edge-on geometry. We present new maps of 2 by 2 arcminute fields at 14-23 arcsecond resolution toward the Orion Bar in the SO 8_8-9_9, H2CO 5_(1,5)-4_(1,4), 13CO 3-2, C2H 4_(9/2)-3_(7/2) and 4_(7/2)-3_(5/2), C18O 2-1 and HCN 3-2 transitions. The data reveal a clear chemical stratification pattern. The C2H emission peaks close to the ionization front, followed by H2CO and SO, while C18O, HCN and 13CO peak deeper into the cloud. A simple PDR model reproduces the observed stratification, although the SO emission is predicted to peak much deeper into the cloud than observed, while H2CO is predicted to peak closer to the ionization front than observed. In addition, the predicted SO abundance is higher than observed while the H2CO abundance is lower than observed. The discrepancies between the models and observations indicate that more sophisticated models, including production of H2CO through grain surface chemistry, are needed to quantitatively match the observations of this region.
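The spin-alignment search above compares observed filament-relative orientations against a random expectation. For isotropically distributed spin axes, |cos theta| with respect to any fixed filament spine is uniform on [0, 1], so roughly 20% of randomly oriented galaxies would satisfy cos theta < 0.2 by chance alone. A quick Monte Carlo illustration of this null expectation (purely illustrative, not the study's "feature measure" statistic):

```python
import random

# For an isotropic unit vector, cos(theta) to a fixed axis is uniform on
# [-1, 1]; taking the absolute value gives a uniform variable on [0, 1].
random.seed(1)
N = 100_000
hits = sum(1 for _ in range(N) if abs(2.0 * random.random() - 1.0) < 0.2)
frac = hits / N  # should be close to the analytic value 0.2
```

Any statistically significant excess of objects below this threshold, relative to the 20% baseline, is what constitutes the "fossil evidence" of alignment.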
December 2016, 9(6): 1647-1662. doi: 10.3934/dcdss.2016068 Periodic solutions and homoclinic solutions for a Swift-Hohenberg equation with dispersion Shengfu Deng, Department of Mathematics, Zhanjiang Normal University, Zhanjiang, Guangdong 524048 Received July 2015 Revised August 2016 Published November 2016 We investigate the 1D Swift-Hohenberg equation with dispersion $$u_t+2u_{\xi\xi}-\sigma u_{\xi\xi\xi}+u_{\xi\xi\xi\xi}=\alpha u+\beta u^2-\gamma u^3,$$ where $\sigma, \alpha, \beta$ and $\gamma$ are constants. Even if only the stationary solutions of this equation are considered, the dispersion term $-\sigma u_{\xi\xi\xi}$ destroys the spatial reversibility, which plays an important role in studying localized patterns. In this paper, we focus on its traveling wave solutions and directly apply the dynamical approach to provide the first rigorous proof of the existence of periodic solutions and homoclinic solutions bifurcating from the origin, without the reversibility condition, as the parameters are varied. Keywords: periodic solution, zero-Hopf bifurcation, homoclinic solution, Swift-Hohenberg. Mathematics Subject Classification: Primary: 34C25, 37G15; Secondary: 35B3. Citation: Shengfu Deng. Periodic solutions and homoclinic solutions for a Swift-Hohenberg equation with dispersion. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1647-1662. doi: 10.3934/dcdss.2016068 D. Avitabile, D. J. B. Lloyd, J. Burke, E. Knobloch and B. Sandstede, To snake or not to snake in the planar Swift-Hohenberg equation, SIAM J. Appl. Dyn. Syst., 9 (2010), 704-733. doi: 10.1137/100782747. M. Beck, J. Knobloch, D. J. B. Lloyd, B. Sandstede and T. Wagenknecht, Snakes, ladders, and isolas of localized patterns, SIAM J. Math. Anal., 41 (2009), 936-972. doi: 10.1137/080713306.
D. Blair, I. S. Aranson, G. W. Crabtree, V. Vinokur, L. S. Tsimring and C. Josserand, Patterns in thin vibrated granular layers: Interfaces, hexagons, and superoscillons, Phys. Rev. E, 61 (2000), 5600-5610. doi: 10.1103/PhysRevE.61.5600. B. Braaksma, G. Iooss and L. Stolovitch, Existence of quasipattern solutions of the Swift-Hohenberg equation, Arch. Ration. Mech. Anal., 209 (2013), 255-285. doi: 10.1007/s00205-013-0627-7. J. Burke, S. M. Houghton and E. Knobloch, Swift-Hohenberg equation with broken reflection symmetry, Phys. Rev. E, 80 (2009), 036202. doi: 10.1103/PhysRevE.80.036202. P. F. Byrd and M. D. Friedman, Handbook of Elliptic Integrals For Engineers and Physicists, Springer-Verlag, Berlin, 1954. P. Collet and J. P. Eckmann, Instabilities and Fronts in Extended Systems, Princeton University Press, Princeton, 1990. doi: 10.1515/9781400861026. S. Day, Y. Hiraoka, K. Mischaikow and T. Ogawa, Rigorous numerics for global dynamics: A study of the Swift-Hohenberg equation, SIAM J. Appl. Dyn. Syst., 4 (2005), 1-31. doi: 10.1137/040604479. S. Deng and X. Li, Generalized homoclinic solutions for the Swift-Hohenberg equation, J. Math. Anal. Appl., 390 (2012), 15-26. doi: 10.1016/j.jmaa.2011.11.074. S. Deng and S. M. Sun, Multi-hump solutions with small oscillations at infinity for stationary Swift-Hohenberg equation, submitted. J. P. Gaivão and V. Gelfreich, Splitting of separatrices for the Hamiltonian-Hopf bifurcation with the Swift-Hohenberg equation as an example, Nonlinearity, 24 (2011), 677-698. doi: 10.1088/0951-7715/24/3/002. P. Gandhi, C. Beaume and E. Knobloch, A new resonance mechanism in the Swift-Hohenberg equation with time-periodic forcing, SIAM J. Appl. Dyn. Syst., 14 (2015), 860-892. doi: 10.1137/14099468X. J. Guckenheimer and P.
Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 1990. M. Haragus and A. Scheel, Interfaces between rolls in the Swift-Hohenberg equation, Int. J. Dyn. Syst. Diff. Equ., 1 (2007), 89-97. doi: 10.1504/IJDSDE.2007.016510. G. Iooss and A. M. Rucklidge, On the existence of quasipattern solutions of the Swift-Hohenberg equation, J. Nonlinear Sci., 20 (2010), 361-394. doi: 10.1007/s00332-010-9063-0. J. Knobloch, M. Vielitz and T. Wagenknecht, Non-reversible perturbations of homoclinic snaking scenarios, Nonlinearity, 25 (2012), 3469-3485. doi: 10.1088/0951-7715/25/12/3469. N. A. Kudryashov and D. I. Sinelshchikov, Exact solutions of the Swift-Hohenberg equation with dispersion, Commun. Nonlinear Sci. Numer. Simulat., 17 (2012), 26-34. doi: 10.1016/j.cnsns.2011.04.008. R. E. LaQuey, S. M. Mahajan, P. H. Rutherford and W. M. Tang, Nonlinear saturation of the trapped-ion mode, Phys. Rev. Lett., 34 (1975), 391-394. doi: 10.1103/PhysRevLett.34.391. L. Lee and H. Swinney, Lamellar structures and self-replicating spots in a reaction-diffusion system, Phys. Rev. E, 51 (1995), 1899-1915. doi: 10.1103/PhysRevE.51.1899. L. Lega, J. V. Moloney and A. C. Newell, Swift-Hohenberg equation for lasers, Phys. Rev. Lett., 73 (1994), 2978-2981. doi: 10.1103/PhysRevLett.73.2978. M. Lopez-Fernandez and S. Sauter, Fast and stable contour integration for high order divided differences via elliptic functions, Math. Comp., 84 (2015), 1291-1315. doi: 10.1090/S0025-5718-2014-02890-1. E. Makrides and B. Sandstede, Predicting the bifurcation structure of localized snaking patterns, Phys. D, 268 (2014), 59-78. doi: 10.1016/j.physd.2013.11.009. P. Mandel, Theoretical Problems in Cavity Nonlinear Optics, Cambridge University Press, Cambridge, 1997. S. G. McCalla and B.
Sandstede, Spots in the Swift-Hohenberg equation, SIAM J. Appl. Dyn. Syst., 12 (2013), 831-877. doi: 10.1137/120882111. A. Mielke, Instability and stability of rolls in the Swift-Hohenberg equation, Comm. Math. Phys., 189 (1997), 829-853. doi: 10.1007/s002200050230. D. Morgan and J. H. P. Dawes, The Swift-Hohenberg equation with a nonlocal nonlinearity, Phys. D, 270 (2014), 60-80. doi: 10.1016/j.physd.2013.11.018. L. A. Peletier and V. Rottschafer, Pattern selection of solutions of the Swift-Hohenberg equation, Phys. D, 194 (2004), 95-126. doi: 10.1016/j.physd.2004.01.043. L. A. Peletier and J. F. Williams, Some canonical bifurcations in the Swift-Hohenberg equation, SIAM J. Appl. Dyn. Syst., 6 (2007), 208-235. doi: 10.1137/050647232. D. Smets and J. B. van den Berg, Homoclinic solutions for Swift-Hohenberg and suspension bridge type equations, J. Diff. Eqns., 184 (2002), 78-96. doi: 10.1006/jdeq.2001.4135. J. Swift and P. C. Hohenberg, Hydrodynamic fluctuations at the convective instability, Phys. Rev. A, 15 (1977), 319-328. doi: 10.1103/PhysRevA.15.319. J. B. van den Berg, L. A. Peletier and W. C. Troy, Global branches of multi-bump periodic solutions of the Swift-Hohenberg equation, Arch. Ration. Mech. Anal., 158 (2001), 91-153. doi: 10.1007/PL00004243. F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Springer-Verlag, Universitext, 1996. doi: 10.1007/978-3-642-61453-8. Jongmin Han, Masoud Yari. Dynamic bifurcation of the complex Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - B, 2009, 11 (4) : 875-891. doi: 10.3934/dcdsb.2009.11.875 Jongmin Han, Chun-Hsiung Hsia. Dynamical bifurcation of the two dimensional Swift-Hohenberg equation with odd periodic condition. Discrete & Continuous Dynamical Systems - B, 2012, 17 (7) : 2431-2449. doi: 10.3934/dcdsb.2012.17.2431 Yixia Shi, Maoan Han.
Existence of generalized homoclinic solutions for a modified Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - S, 2020, 13 (11) : 3189-3204. doi: 10.3934/dcdss.2020114 Yuncherl Choi, Taeyoung Ha, Jongmin Han, Doo Seok Lee. Bifurcation and final patterns of a modified Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2543-2567. doi: 10.3934/dcdsb.2017087 Toshiyuki Ogawa, Takashi Okuda. Bifurcation analysis to Swift-Hohenberg equation with Steklov type boundary conditions. Discrete & Continuous Dynamical Systems, 2009, 25 (1) : 273-297. doi: 10.3934/dcds.2009.25.273 Jaume Llibre, Ernesto Pérez-Chavela. Zero-Hopf bifurcation for a class of Lorenz-type systems. Discrete & Continuous Dynamical Systems - B, 2014, 19 (6) : 1731-1736. doi: 10.3934/dcdsb.2014.19.1731 Masoud Yari. Attractor bifurcation and final patterns of the n-dimensional and generalized Swift-Hohenberg equations. Discrete & Continuous Dynamical Systems - B, 2007, 7 (2) : 441-456. doi: 10.3934/dcdsb.2007.7.441 J. Burke, Edgar Knobloch. Multipulse states in the Swift-Hohenberg equation. Conference Publications, 2009, 2009 (Special) : 109-117. doi: 10.3934/proc.2009.2009.109 Peng Gao. Averaging principles for the Swift-Hohenberg equation. Communications on Pure & Applied Analysis, 2020, 19 (1) : 293-310. doi: 10.3934/cpaa.2020016 Ling-Jun Wang. The dynamics of small amplitude solutions of the Swift-Hohenberg equation on a large interval. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1129-1156. doi: 10.3934/cpaa.2012.11.1129 Yanfeng Guo, Jinqiao Duan, Donglong Li. Approximation of random invariant manifolds for a stochastic Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1701-1715. doi: 10.3934/dcdss.2016071 John Burke, Edgar Knobloch. Normal form for spatial dynamics in the Swift-Hohenberg equation. Conference Publications, 2007, 2007 (Special) : 170-180. doi: 10.3934/proc.2007.2007.170 Kevin Li. 
Dynamic transitions of the Swift-Hohenberg equation with third-order dispersion. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6069-6090. doi: 10.3934/dcdsb.2021003 Isaac A. García, Claudia Valls. The three-dimensional center problem for the zero-Hopf singularity. Discrete & Continuous Dynamical Systems, 2016, 36 (4) : 2027-2046. doi: 10.3934/dcds.2016.36.2027 Andrea Giorgini. On the Swift-Hohenberg equation with slow and fast dynamics: well-posedness and long-time behavior. Communications on Pure & Applied Analysis, 2016, 15 (1) : 219-241. doi: 10.3934/cpaa.2016.15.219 Changrong Zhu, Bin Long. The periodic solutions bifurcated from a homoclinic solution for parabolic differential equations. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3793-3808. doi: 10.3934/dcdsb.2016121 Qigang Yuan, Jingli Ren. Periodic forcing on degenerate Hopf bifurcation. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2857-2877. doi: 10.3934/dcdsb.2020208 Dmitriy Yu. Volkov. The Hopf -- Hopf bifurcation with 2:1 resonance: Periodic solutions and invariant tori. Conference Publications, 2015, 2015 (special) : 1098-1104. doi: 10.3934/proc.2015.1098 Juntao Sun, Jifeng Chu, Zhaosheng Feng. Homoclinic orbits for first order periodic Hamiltonian systems with spectrum point zero. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3807-3824. doi: 10.3934/dcds.2013.33.3807 Ewa Schmeidel, Robert Jankowski. Asymptotically zero solution of a class of higher nonlinear neutral difference equations with quasidifferences. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2691-2696. doi: 10.3934/dcdsb.2014.19.2691
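The article above studies traveling waves of the dispersive Swift-Hohenberg equation: substituting u(xi, t) = U(z) with z = xi - ct reduces the PDE to the fourth-order ODE -cU' + 2U'' - sigma U''' + U'''' = alpha U + beta U^2 - gamma U^3, whose periodic and homoclinic orbits are the objects of the analysis. As a rough numerical illustration of the PDE itself (not of the paper's dynamical-systems construction), a first-order semi-implicit pseudospectral time-stepper on a periodic domain can be sketched as follows; all parameter values here are arbitrary demo choices:

```python
import numpy as np

# Integrate u_t + 2 u_xx - sigma u_xxx + u_xxxx = alpha u + beta u^2 - gamma u^3
# with an IMEX Euler step: stiff linear part implicit in Fourier space,
# nonlinearity explicit. Parameters are illustrative, not from the paper.
sigma, alpha, beta, gamma = 0.5, -0.1, 0.3, 1.0
N, Lx = 256, 32.0 * np.pi
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)

# Linear symbol of u_t = -2 u_xx + sigma u_xxx - u_xxxx + alpha u
Lhat = 2.0 * k**2 - 1j * sigma * k**3 - k**4 + alpha

rng = np.random.default_rng(0)
u = 1e-2 * rng.standard_normal(N)  # small random initial data
dt = 0.01
for _ in range(2000):
    nonlin = np.fft.fft(beta * u**2 - gamma * u**3)
    uhat = (np.fft.fft(u) + dt * nonlin) / (1.0 - dt * Lhat)
    u = np.fft.ifft(uhat).real  # cubic term saturates the linear instability
```

With alpha just below the instability threshold of the k ~ 1 band, the small initial perturbation grows into a bounded roll-like pattern; the dispersion enters only through the imaginary part of the linear symbol, which breaks the spatial reversibility discussed in the abstract.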