On the uniqueness of weak solution for the 2-D Ericksen-Leslie system

Meng Wang 1, Wendong Wang 2, and Zhifei Zhang 3
1. Department of Mathematics, Zhejiang University, Hangzhou 310027, China
2. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
3. School of Mathematical Sciences, Peking University, Beijing 100871, China

Discrete & Continuous Dynamical Systems - B, May 2016, 21(3): 919-941. doi: 10.3934/dcdsb.2016.21.919
Received October 2014; Revised September 2015; Published January 2016

In this paper, we prove the uniqueness of weak solutions to the two-dimensional full Ericksen-Leslie system with the Leslie stress and general Ericksen stress under the physical constraints on the Leslie coefficients. This question remained open even in the case when the Leslie stress vanishes. The main technique used in the proof is Littlewood-Paley analysis performed in a very delicate way. Differently from the earlier result in [28], we introduce a new metric and explore the algebraic structure of the molecular field.

Keywords: Ericksen-Leslie system, Littlewood-Paley theory, uniqueness of weak solution.
Mathematics Subject Classification: Primary: 35A02, 76A15; Secondary: 35Q35.
Citation: Meng Wang, Wendong Wang, Zhifei Zhang. On the uniqueness of weak solution for the 2-D Ericksen-Leslie system. Discrete & Continuous Dynamical Systems - B, 2016, 21(3): 919-941. doi: 10.3934/dcdsb.2016.21.919

References:
[1] J. M. Bony, Calcul symbolique et propagation des singularités pour les équations aux dérivées partielles non linéaires, Ann. Ecole Norm. Sup., 14 (1981), 209.
[2] J. Y. Chemin, Perfect Incompressible Fluids, Oxford Lecture Series in Mathematics and its Applications, 1998.
[3] J. Ericksen, Conservation laws for liquid crystals, Trans. Soc. Rheol., 5 (1961), 23. doi: 10.1122/1.548883.
[4] M. Giaquinta, G. Modica and J. Souček, Cartesian Currents in the Calculus of Variations, Part II, 1998. doi: 10.1007/978-3-662-06218-0.
[5] M.-C. Hong, Global existence of solutions of the simplified Ericksen-Leslie system in dimension two, Calc. Var. Partial Differential Equations, 40 (2011), 15. doi: 10.1007/s00526-010-0331-5.
[6] M.-C. Hong and Z.-P. Xin, Global existence of solutions of the liquid crystal flow for the Oseen-Frank model in $\mathbb{R}^2$, Adv. Math., 231 (2012), 1364. doi: 10.1016/j.aim.2012.06.009.
[7] M.-C. Hong, J.-K. Li and Z.-P. Xin, Blow-up criteria of strong solutions to the Ericksen-Leslie system in $\mathbb{R}^3$, Comm. Partial Differential Equations, 39 (2014), 1284. doi: 10.1080/03605302.2013.871026.
[8] J.-R. Huang, F.-H. Lin and C.-Y. Wang, Regularity and existence of global solutions to the Ericksen-Leslie system in $\mathbb{R}^2$, Comm. Math. Phys., 331 (2014), 805. doi: 10.1007/s00220-014-2079-9.
[9] T. Huang and C.-Y. Wang, Blow up criterion for nematic liquid crystal flows, Comm. Partial Differential Equations, 37 (2012), 875. doi: 10.1080/03605302.2012.659366.
[10] F. Leslie, Some constitutive equations for anisotropic fluids, Quart. J. Mech. Appl. Math., 19 (1966), 357. doi: 10.1093/qjmam/19.3.357.
[11] F. Leslie, Some constitutive equations for liquid crystals, Arch. Ration. Mech. Anal., 28 (1968), 265. doi: 10.1007/BF00251810.
[12] F. Leslie, Theory of flow phenomena in liquid crystals, The Theory of Liquid Crystals, 4 (1979), 1. doi: 10.1016/B978-0-12-025004-2.50008-9.
[13] J.-K. Li, E. Titi and Z.-P. Xin, On the uniqueness of weak solutions to the Ericksen-Leslie liquid crystal model in $\mathbb{R}^2$.
[14] F.-H. Lin, Nonlinear theory of defects in nematic liquid crystals: Phase transition and flow phenomena, Comm. Pure Appl. Math., 42 (1989), 789. doi: 10.1002/cpa.3160420605.
[15] F.-H. Lin, J. Lin and C. Wang, Liquid crystal flows in two dimensions, Arch. Ration. Mech. Anal., 197 (2010), 297. doi: 10.1007/s00205-009-0278-x.
[16] F.-H. Lin and C. Liu, Nonparabolic dissipative systems modeling the flow of liquid crystals, Comm. Pure Appl. Math., 48 (1995), 501. doi: 10.1002/cpa.3160480503.
[17] F.-H. Lin and C. Liu, Partial regularity of the dynamic system modeling the flow of liquid crystals, Discrete Contin. Dynam. Systems, 2 (1996), 1.
[18] F.-H. Lin and C. Liu, Existence of solutions for the Ericksen-Leslie system, Arch. Ration. Mech. Anal., 154 (2000), 135. doi: 10.1007/s002050000102.
[19] F.-H. Lin and C. Wang, On the uniqueness of heat flow of harmonic maps and hydrodynamic flow of nematic liquid crystals, Chin. Ann. Math. Ser. B, 31 (2010), 921. doi: 10.1007/s11401-010-0612-5.
[20] O. Parodi, Stress tensor for a nematic liquid crystal, Journal de Physique, 31 (1970), 581.
[21] M. Struwe, On the evolution of harmonic mappings of Riemannian surfaces, Comm. Math. Helv., 60 (1985), 558. doi: 10.1007/BF02567432.
[22] C. Wang, Well-posedness for the heat flow of harmonic maps and the liquid crystal flow with rough initial data, Arch. Ration. Mech. Anal., 200 (2011), 1. doi: 10.1007/s00205-010-0343-5.
[23] C. Wang and X. Xu, On the rigidity of nematic liquid crystal flow on $S^2$, Journal of Functional Analysis, 266 (2014), 5360. doi: 10.1016/j.jfa.2014.02.023.
[24] W. Wang, P. Zhang and Z. Zhang, The small Deborah number limit of the Doi-Onsager equation to the Ericksen-Leslie equation, Comm. Pure Appl. Math., 68 (2015), 1326. doi: 10.1002/cpa.21549.
[25] W. Wang, P. Zhang and Z. Zhang, Well-posedness of the Ericksen-Leslie system, Arch. Ration. Mech. Anal., 210 (2013), 837. doi: 10.1007/s00205-013-0659-z.
[26] M. Wang and W.-D. Wang, Global existence of weak solution for the 2-D Ericksen-Leslie system, Calc. Var. Partial Differential Equations, 51 (2014), 915. doi: 10.1007/s00526-013-0700-y.
[27] H. Wu, X. Xu and C. Liu, On the general Ericksen-Leslie system: Parodi's relation, well-posedness and stability, Arch. Ration. Mech. Anal., 208 (2013), 59. doi: 10.1007/s00205-012-0588-2.
[28] X. Xu and Z. Zhang, Global regularity and uniqueness of weak solution for the 2-D liquid crystal flows, J. Differential Equations, 252 (2012), 1169. doi: 10.1016/j.jde.2011.08.028.
Surface Passivation of Silicon Using HfO2 Thin Films Deposited by Remote Plasma Atomic Layer Deposition System

Xiao-Ying Zhang1,2, Chia-Hsun Hsu2, Shui-Yang Lien2, Song-Yan Chen3, Wei Huang3, Chih-Hsiang Yang4, Chung-Yuan Kung4, Wen-Zhang Zhu1, Fei-Bing Xiong1 & Xian-Guo Meng1

Hafnium oxide (HfO2) thin films have attracted much attention owing to their usefulness in equivalent-oxide-thickness scaling in microelectronics, which arises from their high dielectric constant and thermodynamic stability with silicon. However, the surface passivation properties of such films, particularly on crystalline silicon (c-Si), have rarely been reported. In this study, HfO2 thin films were deposited on c-Si substrates, with and without oxygen plasma pretreatment, using a remote plasma atomic layer deposition system. Post-annealing was performed using a rapid thermal processing system at different temperatures in N2 ambient for 10 min. The effects of the oxygen plasma pretreatment and post-annealing on the properties of the HfO2 thin films were investigated. The results indicate that in situ remote plasma pretreatment of the Si substrate leads to the formation of a better interfacial SiO2 layer, and hence better chemical passivation. The HfO2 thin films deposited with oxygen plasma pretreatment and post-annealed at 500 °C for 10 min improved the lifetime of c-Si (original lifetime of 1 μs) to up to 67 μs.

High-quality surface passivation is very important for a range of crystalline silicon (c-Si)-based electronic devices, and especially for high-efficiency c-Si solar cells. As the demand for lower-cost silicon solar cells grows, and since the Si material itself accounts for much of the cost, thinner Si substrates are required. The surface-to-volume ratio of such substrates, and thus the contribution of their surfaces to the overall device performance, is therefore increasing. Traditional surface passivation of Si involves the formation of a thin silicon dioxide (SiO2) layer.
However, this process requires a high thermal budget, involving long periods at high temperature. Owing to these process-related issues, considerable efforts have been made to develop low-temperature surface passivation methods for both heavily doped and moderately doped c-Si surfaces. Besides SiO2, other layers such as SiC, a-Si:H and Si3N4 have been used for surface passivation [1]. Recently, Al2O3 films grown by atomic layer deposition (ALD) have been demonstrated to provide good surface passivation of c-Si [2, 3, 4]. The ALD technique provides a high degree of control over the properties of the material, especially the morphology and thickness of dielectric layers. In the advanced semiconductor industry, hafnium dioxide (HfO2) thin films are used to replace SiO2 as the gate dielectric in field-effect transistors because they offer better functionality and performance at lower cost [5, 6]. Additionally, the high refractive index of HfO2 makes it a potential candidate for anti-reflection coatings [7] and interference filters [8]. However, its surface passivation properties, particularly on c-Si, have scarcely been studied. For example, Jun Wang et al. [9] presented the surface passivation properties of a Si surface using a thin HfO2 layer grown by ALD without further annealing. In another study, Huijuan Geng et al. [10] reported advanced passivation using simple materials (Al2O3, HfO2) and their compounds H(Hf)A(Al)O deposited by ALD. In all of these previous studies, HfO2 was deposited on c-Si substrates without any pretreatment. In this work, the surface passivation properties of HfO2 films deposited by a remote plasma atomic layer deposition (RP-ALD) system on p-type c-Si, with and without in situ oxygen plasma pretreatment, were investigated. Samples were annealed at different temperatures using a rapid thermal annealing (RTA) system.
The structural changes and electrical properties of the thin films induced by RTA were characterized by field-emission transmission electron microscopy (FE-TEM), X-ray photoelectron spectroscopy (XPS) and capacitance-voltage (C-V) measurements. The passivation mechanism of HfO2 films on Si is also investigated. In this study, (100)-oriented boron-doped p-type crystalline Czochralski (Cz) Si wafers, polished on both sides, with a resistivity of 30 Ω·cm, an original lifetime of 1 μs and a thickness of 250 μm, were used. Prior to the deposition of the HfO2 film, all wafers were cleaned using a standard Radio Corporation of America (RCA) cleaning process, followed by a dip in diluted hydrofluoric acid (HF) solution (5%) for 2 min to remove the native oxide, and dried in nitrogen. The HfO2 thin films were grown in an RP-ALD reactor (Model: Picosun, Finland) using tetrakis(ethylmethylamino)hafnium (TEMAH) and remote O2 plasma as the precursors for hafnium and oxygen, respectively, with N2 as the carrier gas. In the ALD process, one deposition cycle consisted of two half cycles: one TEMAH pulse (1.6 s) and one O2 plasma pulse (10 s). The nitrogen purge times after the TEMAH and O2 pulses were 10.0 and 12.0 s, respectively. The samples were divided into two groups. For group one, HfO2 thin films were deposited directly on the cleaned Si wafers. For group two, before the deposition of the HfO2 thin films, the Si wafers were additionally treated with remote O2 plasma for 1 min. The O2 plasma power for both the pretreatment and the ALD deposition process was 2500 W. The HfO2 films for all samples were deposited at 250 °C. Different HfO2 thicknesses (5, 15, and 25 nm) were prepared on as-cleaned Si wafers followed by annealing at 500 °C; the corresponding minority carrier lifetimes of the passivated wafers were 9.98, 66.8, and 4.2 μs, respectively, at an injection level of 3 × 10^14 cm^-3. Therefore, a thickness of 15 nm (corresponding to 168 ALD cycles) was used.
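As a quick sanity check on these process numbers, the growth per cycle and total deposition time can be recomputed from the quoted recipe. The sketch below is ours; all input values come from the text above, only the arithmetic is added:

```python
# Recompute ALD growth-per-cycle and process time from the quoted recipe.
temah_pulse_s = 1.6     # TEMAH pulse
o2_pulse_s = 10.0       # O2 plasma pulse
purge_temah_s = 10.0    # N2 purge after TEMAH
purge_o2_s = 12.0       # N2 purge after O2 plasma

cycle_time_s = temah_pulse_s + purge_temah_s + o2_pulse_s + purge_o2_s

film_thickness_nm = 15.0
n_cycles = 168
growth_per_cycle_nm = film_thickness_nm / n_cycles

print(f"cycle time: {cycle_time_s:.1f} s")                      # 33.6 s
print(f"growth per cycle: {growth_per_cycle_nm:.3f} nm/cycle")  # ~0.089 nm/cycle
print(f"total deposition time: {n_cycles * cycle_time_s / 60:.0f} min")
```

The resulting growth per cycle (~0.09 nm/cycle) is in the typical range for thermal/plasma ALD of HfO2, which supports the quoted cycle count.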
The substrate pretreatment could affect nucleation, leading to different film thicknesses. The thicknesses of the deposited HfO2 films are 15 ± 0.5 nm and 13 ± 0.7 nm for the samples with and without the oxygen plasma pretreatment, respectively. The wafer lay flat on a platen. Double-side-coated samples were processed twice, with a break in vacuum to flip the wafer in the chamber. The HfO2 thin films were deposited on 2-in. wafers; as the substrate holder was about 8 in., four samples were placed on the holder and processed at a time. The samples in the two groups are referred to hereafter as SD (directly deposited samples) and SO (O2 plasma pretreated samples), respectively. Annealing was performed using an RTA system at 400-650 °C in N2 ambient for 10 min. Samples are identified with suffixes A400 to A650 that represent the annealing temperatures. Table 1 lists the samples.

Table 1 Details of the HfO2 thin films

The minority carrier lifetimes (τ_eff) of the samples were assessed by the photo-conductance decay method (Model: WCT-120, Sinton lifetime tester) in the quasi-steady-state mode. Metal-insulator-semiconductor (MIS) structures were prepared by depositing Al electrodes with diameters of 500 μm onto the passivation layer using a sputter system and a shadow mask. The C-V characteristics were measured with an HP4284A semiconductor characterization system to extract the electrical parameters. The chemical composition and chemical states of the elements in the HfO2/Si stack were analyzed by XPS (Thermo Fisher K-Alpha); the ion energy used for the depth profile was 3000 eV. The physical thicknesses, microstructure and interface properties of the HfO2 thin films were determined by FE-TEM (JEM-2100F). Generally, the quality of passivation is assessed in terms of τ_eff or the surface recombination velocity (SRV = S_max), which reflects recombination at surface defects. Figure 1a plots τ_eff and S_max for all samples at an injection level of 3 × 10^14 cm^-3.
The τ_eff measurements were performed three times for each sample at different locations, and the errors in the minority carrier lifetime were within ±5%. As the annealing temperature was increased from 400 to 500 °C, the τ_eff of the annealed HfO2 samples with O2 plasma pretreatment at an injection level of 3 × 10^14 cm^-3 increased significantly. The increase for the annealed samples without O2 plasma pretreatment at the same injection level was much smaller. At lower temperatures (T < 500 °C), the annealed samples without O2 plasma pretreatment had lower τ_eff than those with it. The annealing process provides energy to the HfO2 layer to activate the passivation. When the annealing temperature is higher than 500 °C, the minority carrier lifetime decreases, which might be due to defects generated by the increased microcrystalline fraction and grain boundaries in the HfO2 layer. The O2 plasma pretreated sample annealed at 500 °C had the highest τ_eff of 67 μs, corresponding to an S_max value of 187 cm/s. This calculation was based on the quasi-steady-state photo-conductance (QSSPC) τ_eff data at an injection level of 3 × 10^14 cm^-3. S_max represents the upper limit of the SRV, and is estimated from the measured lifetime values using the following relation [11]:

$$ S_{max} = \frac{W}{2\tau_{eff}}, $$

where W (= 250 μm) is the thickness of the silicon substrate. A lower value of S_max can be attributed to a lower density of interface traps. It can also be seen from Fig. 1a that the O2 plasma pretreated samples exhibited better passivation than the directly deposited samples, and thus a lower interface recombination velocity.

Fig. 1 (a) τ_eff and S_max of the samples at an injection level of 3 × 10^14 cm^-3. Injection-level-dependent effective minority carrier lifetime of the (b) SD and (c) SO samples
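The S_max relation can be checked numerically. This minimal sketch is ours, using only the W and τ_eff values quoted in the text, and it reproduces the stated upper bound of roughly 187 cm/s for the best SO sample:

```python
# Upper bound on the surface recombination velocity, S_max = W / (2 * tau_eff),
# assuming all recombination happens at the two wafer surfaces.

def s_max_cm_per_s(wafer_thickness_um: float, tau_eff_us: float) -> float:
    w_cm = wafer_thickness_um * 1e-4   # um -> cm
    tau_s = tau_eff_us * 1e-6          # us -> s
    return w_cm / (2.0 * tau_s)

# Best O2-plasma-pretreated sample from the text: W = 250 um, tau_eff = 67 us
print(round(s_max_cm_per_s(250.0, 67.0)))  # -> 187 (cm/s), matching the paper
```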
This difference is attributable to the diffusion of O from the O2 plasma into the interfacial region to form a thin SiO2 film, which provides better chemical passivation of the dangling bonds. Figure 1b, c shows the injection-level-dependent effective minority carrier lifetime of the samples without and with O2 plasma pretreatment. For the SO samples, the minority carrier lifetime increases with annealing temperature between 400 and 500 °C. All of the SO samples without annealing exhibited almost no passivation, and their τ_eff values were similar to that of the bare Si wafer. However, τ_eff of the annealed samples increased significantly and then decreased as the injection level increased from 4 × 10^13 cm^-3 to 5 × 10^15 cm^-3. The drop in τ_eff at high injection levels is caused by Auger recombination in the bulk of the c-Si substrate. The τ_eff of the as-deposited samples depends very strongly on the injection level, decreasing by approximately one order of magnitude as the injection level is decreased from 3 × 10^14 to 10^13 cm^-3. This injection-level dependence is much weaker for the annealed samples, whose τ_eff values decrease only slightly as the injection level is reduced [12]. C-V measurements are commonly used to characterize the quality of dielectric layers and their interfaces with the substrates. C-V measurements were performed at room temperature in the dark at 1 MHz on a standard MIS (Al/HfO2/p-Si) structure. Figure 2a, b shows the C-V curves of the HfO2 thin films without and with O2 plasma pretreatment, respectively. The voltage (V_A) applied across the MIS device was swept from accumulation to inversion (-5 V < V_A < 5 V) with a step of 100 mV and a signal amplitude of 50 mV. The shift of the C-V curves toward negative voltages demonstrates the presence of effective oxide charges of positive polarity in the as-deposited HfO2 thin films.
The effective oxide charge represents the sum of the mobile ionic charges (Q_m), oxide trapped charges (Q_OT) and fixed oxide charges (Q_f). Q_f significantly affects the flat-band voltage (V_FB), as it is located at the oxide-semiconductor interface. In Fig. 2a, the C-V curves shift in the positive direction (a V_FB shift) because Q_f decreases as the annealing temperature increases. The slope of the C-V curve also increases with annealing temperature, indicating that the interface trap density decreases as the annealing temperature increases. The HfO2 thin films with O2 plasma pretreatment exhibited a similar trend, as shown in Fig. 2b. The fixed charges arise from charged oxygen vacancies in the films [13]. The fixed charge density is estimated using Eq. (2), assuming a negligible effect of the interface traps [14]:

$$ V_{FB} = \phi_{ms} - \frac{q Q_f}{C_{ox}}, $$

where φ_ms (= 0.32 eV), q (= 1.602 × 10^-19 C), C_ox and V_FB are the difference between the work functions of the metal and the semiconductor, the electronic charge, the capacitance of the dielectric per unit area and the flat-band voltage, respectively. The values of Q_f for the as-deposited and annealed HfO2 thin films are shown in Fig. 2c. Q_f decreases as the annealing temperature increases. The annealing process appears to reduce the density of oxygen vacancies that are responsible for the positive fixed charges, which may be related to reconstruction of the oxide film near the interface [15]. Furthermore, the Q_f values of the SO samples are lower than those of the SD samples at the same annealing temperature. The interfacial defect density (D_it) is determined using the single-frequency approximation method of W. A. Hill and C. C. Coleman [16].

Fig. 2 C-V characteristics measured at 1 MHz for (a) directly deposited samples without O2 plasma pretreatment and (b) samples with O2 plasma pretreatment; (c) estimated Q_f of the annealed HfO2 thin films
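Equation (2) can be inverted to extract Q_f from a measured flat-band voltage. The sketch below is ours, with illustrative (hypothetical) values for C_ox and V_FB; only φ_ms = 0.32 eV and q are taken from the text:

```python
# Invert Eq. (2): V_FB = phi_ms - q*Q_f/C_ox  =>  Q_f = C_ox*(phi_ms - V_FB)/q
# Here Q_f is an areal charge density (charges per cm^2).

Q_E = 1.602e-19  # elementary charge, C

def fixed_charge_density(v_fb_V: float, c_ox_F_per_cm2: float,
                         phi_ms_V: float = 0.32) -> float:
    return c_ox_F_per_cm2 * (phi_ms_V - v_fb_V) / Q_E

# Hypothetical example (not from the paper): a 15 nm HfO2 film with k ~ 18
# gives C_ox = eps0*k/d; take an assumed measured V_FB of -1.0 V.
c_ox = 8.854e-14 * 18 / 15e-7   # F/cm^2 (eps0 in F/cm, thickness in cm)
print(f"{fixed_charge_density(-1.0, c_ox):.2e} cm^-2")
```

A negative V_FB gives a positive Q_f of order 10^12-10^13 cm^-2, consistent with the sign convention used in the discussion above.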
The Q_f and D_it values are listed in Table 2.

Table 2 Fixed charge density (Q_f) and interface defect density (D_it) calculated from C-V measurements of the HfO2 thin films

Cross-sections of the annealed thin films were evaluated by FE-TEM to assess the film microstructure and the HfO2/Si interface. FE-TEM cross-sectional images of the HfO2 thin film annealed at 500 °C (a) without and (b) with O2 plasma pretreatment are shown in Fig. 3. From the FE-TEM images, the annealed HfO2 thin films consist of three regions: the HfO2 layer, an interfacial oxide and the Si substrate. The atoms in the HfO2 layer are orderly arranged in some areas, indicating that the HfO2 layer has a microcrystalline structure. A very thin interfacial oxide layer forms between the high-k film and the substrate in both the as-deposited and annealed samples [17]. The HfO2 layer and the interfacial layer of the sample with oxygen plasma pretreatment are 15.3 and 2.7 nm thick, respectively, whereas those of the sample without the pretreatment are 13.9 and 2.2 nm, respectively. This thickness difference alone should not cause the significant lifetime variation (35 and 67 μs for the samples without and with the pretreatment). Therefore, the significant lifetime improvement can be attributed to the different interface layers obtained with the oxygen plasma pretreatment.

Fig. 3 FE-TEM cross-sectional analysis of the HfO2 film annealed at 500 °C (a) without and (b) with O2 plasma pretreatment

Figure 4 shows the elemental depth profiles, obtained by XPS, of the HfO2 films annealed at 500 °C without and with O2 plasma pretreatment. Three regions are observed. In Region A, for etching times below 100 s, the relatively uniform atomic percentages of Hf and O correspond to the RP-ALD μc-HfO2 layer.
In Region B, the O and Hf atomic percentages decreased as the etching time increased from 130 to 175 s, indicating that O diffused into the c-Si substrate, forming an interfacial layer [18, 19]. In Region C, for etching times above 175 s, the Si signal drastically increased to more than 60%, corresponding to the c-Si substrate. The residual oxygen and Hf atomic percentages detected in the c-Si substrate are due to the Ar-ion sputtering effect: during the sputtering in the XPS measurement, some Hf or O atoms may be redeposited on the silicon substrate surface and then detected. Notice that in Region B, in addition to the lower Hf and O signals with a corresponding increase in the Si signal in the interface region, the sample with the oxygen pretreatment also shows a larger Si signal in the bulk of the HfO2 film, which may account for the percentage differences. Similar results were obtained at the other investigated annealing temperatures. A possible reason is that the O2 pretreatment leads to the growth of a very thin SiO2 layer that reduces the in-diffusion of Hf and O from the subsequently deposited HfO2. Fewer atomic vacancies are then formed by diffusion in the HfO2 of the sample with the O2 pretreatment. Thus, the O2 pretreatment can be expected to yield fewer interface traps and a higher chemical passivation quality.

Fig. 4 Elemental depth profiles of HfO2 annealed at 500 °C without and with O2 plasma pretreatment versus etching time

Growth of a thin oxide film on a clean but unpassivated Si surface leads to the formation of new covalent bonds (chemical passivation) and termination of the dangling bonds [9]. Si/oxide interfaces often carry some fixed charges. These charges can induce an electric field at the surface of the Si and can thereby reduce the recombination rate at the Si/oxide interface (field-effect passivation).
Hoex et al. [20] reported that, when preparing Al2O3 thin films by plasma ALD, a very thin (~1.5 nm) SiOx interfacial oxide layer was formed that provides good passivation of the c-Si surface; they attributed this to the exposure of the substrate to the oxygen plasma in the very first ALD cycles. Although HfO2 thin films are prepared in this study, the oxygen plasma pretreatment is found to result in a similar interfacial oxide layer (a-HfO2 + a-SiO2). The oxygen plasma pretreatment can thus improve the surface passivation of Si wafers. In this work, HfO2 thin films with a thickness of 15 nm were deposited on p-type crystalline silicon wafers using a remote plasma atomic layer deposition system. In situ remote O2 plasma pretreatment of the Si substrate before the deposition of the HfO2 thin films, together with post-annealing at 500 °C for 10 min, effectively reduced the trap density at the HfO2/Si interface, yielding a highest lifetime of 67 μs. The HfO2 thin films deposited by RP-ALD with O2 plasma pretreatment have potential as passivation layers in high-quality Si solar cells.
C ox : The capacitance of the dielectric per unit area c-Si: C-V: Capacitance-voltage Cz: Czochralski D it : Interfacial defect density FE-TEM: Field-emission transmission electron microscope HF: Hydrofluoric acid HfO2 : Hafnium oxide MIS: Metal-insulator-semiconductor The electronic charge Q f : Oxide fixed charges Q m : Mobile ionic charges Q OT : Oxide trapped charges QSSPC: Quasi steady-state photo conductance RCA: Radio Corporation of America RP-ALD: Remote plasma atomic layer deposition SD: Direct depositing samples SiO2 : Silicon dioxide O2 plasma pretreatment samples SRV: Surface recombination velocity TEMAH: Tetrakis (ethylmethylamino) hafnium V A : V FB : Flat band voltage τ eff : Lifetimes ϕ ms : The difference between the work functions of metal and the semiconductor Mohammad Ziaur R (2014) Advances in surface passivation and emitter optimization techniques of c-Si solar cells. Renew Sustain Energy Rev 30:734–742 Shui-Yang L, Chih-Hsiang Y, Kuei-Ching W, Chung-Yuan K et al (2015) Investigation on the passivated Si/Al2O3 interface fabricated by non-vacuum spatial atomic layer deposition system. Nanoscale Res Lett 10:93-93-9 Abdulrahman M (2014) Albadri. Characterization of Al2O3 surface passivation of silicon solar cells. Thin Solid Films 562:451–455 Simon DK, Jordan PM, Dirnstorfer I, Benner F, Richter C, Mikolajick T et al (2014) Symmetrical Al2O3–based passivation layers for p- and n-type silicon. Sol Energy Mater Sol Cells 131:72–76 Xiaowei C, Xiaoling L, Chao L, Haiyang H, Cheng L, Songyan C, Hongkai L, Wei H, Jianfang X et al (2016) An improvement of HfO2/Ge interface by in situ remote N2 plasma pretreatment for Ge MOS devices. Materials Research Express 3:035012–035012-5 Vikram Singh, Satinder K. Sharma, Dinesh Kumar, R.K. Nahar, et al. Study of rapid thermal annealing on ultra thin high-k HfO2 films properties for nano scaled MOSFET technology. Microelectronic Engineering. 
2012;91:137-143 Wang Y, Lin Z, Cheng X, Xiao H, Zhang F, Zou S et al (2004) Study of HfO2 thin films prepared by electron beam evaporation. Appl Surf Sci 228:93–99 Toledano-Luque M, San Andres E, del Prado A, Martil I, Lucia ML, Gonzalez-Diaz G, Martinez FL, Bohne W, Rohrich J, Strub E et al (2007) High-pressure reactively sputtered HfO2: composition, morphology, and optical proterties. J Appl Phys 102:044106-044106-8 Wang J, Mottaghian SS, Baroughi MF et al (2012) Passivation properties of atomic-layer-deposited hafnium and aluminum oxides on Si surfaces. Transactions on Electron Devices 59(2):342–348 Huijuan G, Tingjui L, Ayra Jagadhamma L, Huey-Liang H, Kyznetsov FA, Smirnova TP, Saraev AA, Kaichev VV et al (2014) Advanced passivation techniques for Si solar cells with high-k dielectric materials. Appl Phys Lett 105:123905 Jhuma Gope V, Neha B, Jagannath P, Rajbir S, Maurya KK, Ritu S, Singh PK et al (2015) Silicon surface passivation using thin HfO2 films by atomic layer deposition. Appl Surf Sci 357:635–642 Lin F, Hoex B, Koh YH, Lin JJ, Aberle AG et al (2012) Low-temperature surface passivation of moderately doped crystalline silicon by atomic-layer-deposited Hafnium oxide films. Energy Procedia 15:84–90 Xiong K, Robertson J, Gibson MC, Clark SJ et al (2005) Defect energy levels in HfO2 high-dielectric-constant gate oxide. Appl Phys Lett 87:183505–183505-3 Dieter K. Schroder. Semiconductor material and device characterization, 3rd edition. Wiley:2006. Cheng X, Song Z, Jiang J, Yu Y, Yang W, Shen D et al (2006) Study of HfOSi film prepared by electron beam evaporation for high-k gate dielectric applications. Appl Surf Sci 252:8073–8076 Hill WA, Coleman CC (1980) A single-frequency approximation for interface-state density determination. Solid State Electron 23:987–993 Choi K, Temkin H, Harris H, Gangopadhyay S, Xie L, White M (2004) Initial growth of interfacial oxide during deposition of HfO2 on silicon. 
Acknowledgements
This work is sponsored by the Ministry of Science and Technology of the Republic of China under grants No. 105-2632-E-212-001 and 104-2221-E-212-002-MY3. This work is also supported by the National Natural Science Foundation of China (No. 61474081, 61534005, and 61307115), the science and technology project of Xiamen (No. 3502Z20141042), and the Fundamental Research Funds for the Central Universities (No. 20720150028).

Authors' contributions
XYZ carried out the characterization of the HfO2 thin films deposited by ALD and drafted the manuscript. CHH and SYL led the experimental and analytical effort on the passivation of HfO2 on silicon. SYC and WH deposited the HfO2 thin films. CHY and CYK assisted in the design and analysis of the experiments for the HfO2 thin films. WZZ, FBX, and XGM contributed valuable discussion of the experimental and theoretical results, respectively. All authors read and approved the final manuscript.
Affiliations
School of Opto-electronic and Communication Engineering, Fujian Key Laboratory of Optoelectronic Technology and Devices, Xiamen University of Technology, Xiamen, 361024, China: Xiao-Ying Zhang, Wen-Zhang Zhu, Fei-Bing Xiong & Xian-Guo Meng
Department of Electrical Engineering, Da-Yeh University, ChungHua, 51591, Taiwan: Xiao-Ying Zhang, Chia-Hsun Hsu & Shui-Yang Lien
Department of Physics, OSED, Xiamen University, Xiamen, 361005, China: Song-Yan Chen & Wei Huang
Department of Electrical Engineering, National Chung-Hsing University, Taichung, 40227, Taiwan: Chih-Hsiang Yang & Chung-Yuan Kung

Correspondence to Shui-Yang Lien.

Citation: Zhang, XY., Hsu, CH., Lien, SY. et al. Surface Passivation of Silicon Using HfO2 Thin Films Deposited by Remote Plasma Atomic Layer Deposition System. Nanoscale Res Lett 12, 324 (2017). https://doi.org/10.1186/s11671-017-2098-5

Keywords: HfO2 thin films; O2 plasma pretreatment; Surface passivation
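Several of the glossary symbols above (C_ox, ϕ_ms, V_FB, Q_f, q) are linked by the standard MIS flat-band relation V_FB = ϕ_ms − Q_f/C_ox, so a measured flat-band voltage shift yields the oxide fixed-charge density. A minimal sketch of that computation follows; the numerical values are made-up examples, not measurements from the paper.

```python
# Standard MIS relation (textbook formula, not taken from the paper):
# V_FB = phi_ms - Q_f / C_ox  =>  Q_f = C_ox * (phi_ms - V_FB)
Q_ELECTRON = 1.602e-19  # C, the electronic charge q

def fixed_charge_density(c_ox, phi_ms, v_fb):
    """Return (Q_f in C/cm^2, N_f in cm^-2) from C-V-extracted quantities.

    c_ox   -- dielectric capacitance per unit area (F/cm^2)
    phi_ms -- metal-semiconductor work-function difference (V)
    v_fb   -- measured flat-band voltage (V)
    """
    q_f = c_ox * (phi_ms - v_fb)  # areal fixed charge
    return q_f, q_f / Q_ELECTRON  # charge density and number density

# Hypothetical values for a thin high-k film:
q_f, n_f = fixed_charge_density(c_ox=3.45e-7, phi_ms=-0.5, v_fb=-1.0)
```

With these illustrative inputs the fixed-charge number density comes out near 1e12 cm^-2, a typical order of magnitude quoted for as-deposited high-k dielectrics.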
Periodic solutions of a tumor-immune system interaction under a periodic immunotherapy

Gladis Torres-Espino and Claudio Vidal
Grupo de Investigación en Sistemas Dinámicos y Aplicaciones-GISDA, Departamento de Matemática, Facultad de Ciencias, Universidad del Bío-Bío, Concepción, Chile
* Corresponding author: Gladis Torres-Espino
Received February 2020; Revised August 2020; Published October 2020

In this paper, we consider a mathematical model of a tumor-immune system interaction when a periodic immunotherapy treatment is applied. We give sufficient conditions, using averaging theory, for the existence and stability of periodic solutions of such a system as a function of the six parameters associated with this problem. Finally, we provide examples where our results are applied.

Keywords: Tumor-immune system, immunotherapy, periodic solutions, averaging theory, stability.
Mathematics Subject Classification: Primary: 34C25; Secondary: 34C29, 37N25.
Citation: Gladis Torres-Espino, Claudio Vidal. Periodic solutions of a tumor-immune system interaction under a periodic immunotherapy. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020301

Figure 1. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = \frac{KA}{BS}X-\frac{\overline{\beta}}{S} $.
Figure 2. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = \frac{KA}{BS}X+\frac{FA}{BS}-\frac{\overline{\beta}}{S} $, when $ 0<\frac{FA}{BS}-\frac{\overline{\beta}}{S}<1 $ and when $ \frac{FA}{BS}-\frac{\overline{\beta}}{S}<0 $.
Figure 3. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = (K-D)\frac{A}{BS}X-\frac{\overline{\beta}}{S} $, when $ K-D>0 $.
Figure 4. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = (K-D)\frac{A}{BS}X+\frac{FA}{BS}-\frac{\overline{\beta}}{S} $, when $ 0<\frac{FA}{BS}-\frac{\overline{\beta}}{S}<1 $ and when $ \frac{FA}{BS}-\frac{\overline{\beta}}{S}<0 $.
Figure 5. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = (K-D)\frac{A}{BS}X+\frac{FA}{BS}-\frac{\overline{\beta}}{S} $.
Figure 6. Intersection between the graph of the function $ g(X) $ and the line $ l_2:\, Y = -\frac{DA}{BS}X+\frac{FA}{BS}-\frac{\overline{\beta}}{S} $.
Figure 7. Malignant cells $ x(t) $ and lymphocyte cells $ y(t) $ for the periodic solution of Theorem 2.1, with initial conditions $ x_0 = 1/19+10^{-40} $ and $ y_0 = 100000+10^{-40} $.
Figure 8. Malignant cells $ x(t) $ and lymphocyte cells $ y(t) $ for the periodic solution of Theorem 2.2, with initial conditions $ x_0 = 5/32(-3+\sqrt{73})+10^{-40} $ and $ y_0 = 80000+10^{-40} $.
Figure 9. Malignant cells $ x(t) $ and lymphocyte cells $ y(t) $ for the periodic solution of Theorem 2.3, with initial conditions $ x_0 = 1/60 (-27 + \sqrt{1009})+10^{-40} $ and $ y_0 = 50000+10^{-40} $.
Figure 10. Malignant cells $ x(t) $ and lymphocyte cells $ y(t) $ for the periodic solution of Theorem 2.4, with initial conditions $ x_0 = 1.0099381+10^{-40} $ and $ y_0 = 25000+10^{-40} $.
Figure 11. Malignant cells $ x(t) $ and lymphocyte cells $ y(t) $ for the periodic solution of Theorem 2.5, with initial conditions $ x_0 = 1.15201+10^{-40} $ and $ y_0 = 25000+10^{-40} $.

Table 1. Definition of the parameters in model (2)

Function | Biological meaning
$ \xi(x) $ | Growth rate of the tumor
$ \phi(x)y $ | Functional response
$ g(x) $ | External inflow of effector cells
$ \beta(x) $ | Tumor-stimulated proliferation rate of effector cells
$ \mu(x) $ | Tumor-induced loss of effector cells
$ \sigma g(x) $ | Influx of effector cells
$ \theta(\omega t) $ | Immunotherapy

Parameter | Biological meaning
$ a $ | Intrinsic growth rate of the tumor
$ b $ | Death rate of malignant cells due to interaction with lymphocyte cells
$ d $ | Increase rate of lymphocytes due to interaction with malignant cells
$ f $ | Death rate of the lymphocytes
$ \kappa $ | Immunosuppression coefficient
$ \sigma g(x) $ | External influx of effector cells
$ \omega $ | Immunotherapy dosage frequency
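The explicit system (2) is not reproduced in this excerpt, but the functions listed in Table 1 match d'Onofrio's tumor-immune metamodel with a periodic treatment term. A plausible reconstruction of its general form, stated here only as an assumption consistent with the table, is:

```latex
\begin{aligned}
\dot{x} &= x\bigl(\xi(x) - \phi(x)\,y\bigr),\\
\dot{y} &= \bigl(\beta(x) - \mu(x)\bigr)\,y + \sigma g(x) + \theta(\omega t),
\end{aligned}
```

where $x$ denotes the malignant cells, $y$ the effector (lymphocyte) cells, and $\theta(\omega t)$ the periodic immunotherapy forcing whose frequency $\omega$ appears in the averaging analysis.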
Search Results: 1 - 10 of 8001 matches for " HuiLi Dai "

The changes in telomerase activity and telomere length in HeLa cells undergoing apoptosis induced by sodium butyrate
Jianguo Ren, Huili Xia, Yaoren Dai
Chinese Science Bulletin, 2001, DOI: 10.1007/BF03187012
Abstract: The changes in telomerase activity and telomere length during apoptosis in HeLa cells as induced by sodium butyrate (SB) have been studied. After a 48 h SB treatment, HeLa cells demonstrated characteristic apoptotic hallmarks including chromatin condensation, formation of apoptotic bodies, and DNA laddering caused by the cleavage and degradation of DNA between nucleosomes. There were no significant changes in the telomerase activity of apoptotic cells, while the telomere length shortened markedly. Meanwhile, cells became more susceptible to apoptotic stimuli and telomeres became more vulnerable to degradation after telomerase activity was inhibited. All the results suggest that the apoptosis induced by SB is closely related to telomere shortening, while telomerase enhances the resistance of HeLa cells to apoptotic stimuli by protecting the telomere.

The changes in telomerase activity and telomere length in HeLa cells undergoing apoptosis induced by sodium butyrate
Ren Jianguo, Xia Huili, Dai Yaoren
Chinese Science Bulletin (English edition), 2001

An integrated assessment of the impact of precipitation and groundwater on vegetation growth in arid and semiarid areas
Lin Zhu, Huili Gong, Zhenxue Dai, Tingbao Xu, Xiaosi Su
Physics, 2014
Abstract: Increased demand for water resources together with the influence of climate change has degraded the water conditions that support vegetation in many parts of the world, especially in arid and semiarid areas. This study develops an integrated framework to assess the impact of precipitation and groundwater on vegetation growth in the Xiliao River Plain of northern China.
The integrated framework systematically combines remote sensing technology with water flow modeling in the vadose zone and field data analysis. The vegetation growth is quantitatively evaluated with the remote sensing data by the Normalized Difference Vegetation Index (NDVI) and the simulated plant water uptake rates. The correlations among precipitation, groundwater depth and NDVI are investigated by using Pearson correlation equations. The results provide insights for understanding interactions between precipitation and groundwater and their contributions to vegetation growth. Strong correlations between groundwater depth, plant water uptake and NDVI are found in parts of the study area during a ten-year drought period. The numerical modeling results indicate that there is an increased correlation between the groundwater depth and vegetation growth and that groundwater significantly contributes to sustaining effective soil moisture for vegetation growth during the long drought period. Therefore, a decreasing groundwater table might pose a great threat to the survival of vegetation during a long drought period.

Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans
Lin Zhu, Zhenxue Dai, Huili Gong, Carl Gable, Pietro Teatini
Abstract: Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This paper develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss-Newton-Levenberg-Marquardt method.
The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. The result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.

Consent for Use of Clinical Leftover Biosample: A Survey among Chinese Patients and the General Public
Yi Ma, HuiLi Dai, LiMin Wang, LiJun Zhu, HanBing Zou, XianMing Kong
Abstract: Background Storage of leftover biosamples generates rich biobanks for future studies, saving time and money and limiting physical impact to sample donors. Objective To investigate the attitudes of Chinese patients and the general public on providing consent for storage and use of leftover biosamples. Design, Setting and Participants Cross-sectional surveys were conducted among randomly selected patients admitted to a Shanghai city hospital (n = 648) and members of the general public (n = 492) from May 2010 to July 2010. Main Outcome Measures Face-to-face interviews collected respondents' self-reported willingness to donate residual biosamples, trust in medical institutions, motivation for donation, concerns about donated sample use, expectations for the return of research results, and so on. Results The response rate was 83.0%. Of the respondents, 89.1% stated that they completely understood or understood most of the questions.
Willingness to donate residual samples was stated by 64.7%, of which 16.7% desired the option to withdraw their donations at any time afterwards. Only 42.3% of respondents stated they "trust" or "strongly trust" medical institutions; the attitude of trusting or strongly trusting medical institutions was significantly associated with willingness to donate in the general public group (p<0.05). The overall assent rate for future research without specific consents was also low (12.1%). Hepatitis B virus carriers were significantly less willing than non-carriers to donate biosamples (32.1% vs. 64.7%, p<0.001). Conclusions Low levels of public trust in medical institutions have become a serious obstacle to biosample donation and biobanking in China. Efforts to increase public understanding of human medical research and biosample usage and trust in the ethical purposes of biobanking are urgently needed. These efforts will be greatly advanced by the impending legislation on biobanking procedures and intent, and our results may help guide the structure of such law.

Uniqueness for the martingale problem associated with pure jump processes of variable order
Huili Tang
Abstract: Let $L$ be the operator defined on $C^2$ functions by $$L f(x)=\int[f(x+h)-f(x)-1_{(|h|\leq 1)}\nabla f(x)\cdot h]\frac{n(x,h)}{|h|^{d+\alpha(x)}}dh.$$ This is an operator of variable order and the corresponding process is of pure jump type. We consider the martingale problem associated with $L$. Sufficient conditions for existence and uniqueness are given. Transition density estimates for $\alpha$-stable processes are also obtained.

Deficiency of Antinociception and Excessive Grooming Induced by Acute Immobilization Stress in Per1 Mutant Mice
Jing Zhang, Zhouqiao Wu, Linglin Zhou, Huili Li, Huajing Teng, Wei Dai, Yongqing Wang, Zhong Sheng Sun
Abstract: Acute stressors induce changes in numerous behavioral parameters through activation of the hypothalamic-pituitary-adrenal (HPA) axis.
Several important hormones in the paraventricular nucleus of the hypothalamus (PVN) play roles in these stress-induced reactions. Corticotropin-releasing hormone (CRH), arginine-vasopressin (AVP) and corticosterone are considered molecular markers for stress-induced grooming behavior. Oxytocin in the PVN is an essential modulator of stress-induced antinociception. The clock gene Per1 has been identified as an effector in the response to acute stress, but its function in neuroendocrine stress systems remains unclear. In the present study we observed the alterations in grooming and nociceptive behaviors induced by acute immobilization stress in Per1 mutant mice and other genotypes (wild type and Per2 mutant). The results showed that stress elicited a more robust effect on grooming behavior in Per1 mutant mice than in the other genotypes. Subsequently, obvious stress-induced antinociception was observed in the wild-type and Per2 mutant mice; in the Per1 mutant, however, these antinociceptive effects were partially reversed (mechanical sensitivity) or over-reversed to hyperalgesia (thermal sensitivity). The real-time qPCR results showed that in the PVN there were stress-induced up-regulations of Crh, Avp and c-fos in all genotypes; moreover, the expression change of Crh in the Per1 mutant mice was much larger than in the others. Another hormonal gene, Oxt, was up-regulated by stress in the wild type and Per2 mutant but not in the Per1 mutant. In addition, stress significantly elevated the serum corticosterone levels without genotype-dependent differences, and accordingly the glucocorticoid receptor gene, Nr3c1, was expressed with a similar pattern in the PVN of all strains. Taken together, the present study indicates that in acute-stress-treated Per1 mutant mice there are abnormal hormonal responses in the PVN, correlating with the aberrant performance of stress-induced behaviors.
Therefore, our findings suggest a novel functional role of Per1 in the neuroendocrine stress system, which further participates in analgesic regulation.

The Optimal Control and MLE of Parameters of a Stochastic Single-Species System
Huili Xiang, Zhijun Liu
Discrete Dynamics in Nature and Society, 2012, DOI: 10.1155/2012/676871
Abstract: This paper investigates the optimal control and MLE (maximum likelihood estimation) for a single-species system subject to random perturbation. With the help of the techniques of stochastic analysis and mathematical statistics, sufficient conditions for the optimal control threshold value, the optimal control moment, and the maximum likelihood estimation of parameters are established, respectively. An example is presented to illustrate the feasibility of our theoretical results.

1. Introduction
The Malthus model (1.1) is usually expressed as stated above, where stands for the density of the species at a given moment and is the intrinsic growth rate. As is well known, model (1.1) has epoch-making significance in mathematics and ecology, and many deterministic mathematical models have since been widely studied (see [1–5]). In fact, a population system is inevitably affected by environmental noise in the real world. As a consequence, it is reasonable to study a corresponding stochastic model. Notice that some recent results, especially on optimal control, for the stochastic model (1.2) have been obtained (see [6–9]), where stands for the standard Brownian motion. However, for some pest populations, the generations are nonoverlapping (e.g., poplar and willow weevil, osier weevil and paranthrene tabaniformis) and discrete models are more appropriate than continuous ones. Compared with continuous models, the study of discrete mathematical models is more challenging. Inspired by [1–12], in this paper we consider the discrete model (1.3) of system (1.2), where any two of the noise terms are independent and stands for the noise intensity.
We will focus on the optimal control threshold value, the optimal control moment, and the maximum likelihood estimation of parameters. To the best of our knowledge, no work has been done for system (1.3). The rest of this paper is organized as follows. In Section 2, some preliminaries are introduced. In Section 3, we give the three main results of this paper. As applications of our main results, an example is presented in Section 4 to illustrate the feasibility of our theoretical results.

2. Preliminaries
In this section, we summarize several definitions, assumptions, and lemmas which are useful in the later sections.

Definition 2.1. Only when the quantity of the pest population reaches the given value does one start to control the pest population, and that real number is called a control threshold value.

Definition 2.2. If, by the given generation, the total quantity of the pest population first reaches the control threshold value, then one says that it is the first

Research into the Mental Lexicon Representation of Chinese English Learners Based on Spreading Activation Model
Huili WANG, Yan HOU
Studies in Literature and Language, 2011
Abstract: Nowadays, the main idea regarding the organization of the lexicon is that words are stored in an organized, intertwined semantic network. However, relatively little is known about the actual process that takes place during the course of activation production. Therefore, in order to gain a deeper understanding of the problems in question, this study administered a word association test to 150 sophomores at Dalian University of Technology (DUT) and tried to show the internal relations of the mental lexicon in data by calculating the word frequency between certain words through a computer program written based on the actual calculating steps. An innovation of this study is to show the abstract lexicon relations in data and to illustrate the mental lexicon representation in three-dimensional figures using Netdraw software.
Through the study we find: (1) The responses with higher frequency in the first few positions may not have high association strength to the stimuli, and the current research also shows that activation of the mental lexicon is not a "one-stop" process but a linear forward one. (2) The data on association strength obtained from this study may help us convert the abstract lexicon relations into concrete statistical facts and establish a representation of the mental lexicon network model. Finally, the mechanism of the Spreading Activation Model is illustrated and implications for future English teaching are provided.
Key Words: Mental lexicon; Word association test; Mental lexicon representation; Association strength

Apoptosis of Rheumatoid Arthritis Fibroblast-Like Synoviocytes: Possible Roles of Nitric Oxide and the Thioredoxin 1
Huili Li, Ajun Wan
Mediators of Inflammation, 2013, DOI: 10.1155/2013/953462
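The association-strength computation described in the mental lexicon abstracts above can be illustrated with a small sketch. The study's actual program and its exact formula are not given in the excerpt, so relative response frequency is assumed here as the strength measure; the function name and the sample responses are hypothetical.

```python
from collections import Counter

def association_strength(responses):
    """Relative frequency of each response elicited by one stimulus word
    in a word-association test -- a simple proxy for association strength
    (assumed definition; the study's exact formula is not shown above)."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Hypothetical responses of several subjects to a single stimulus word:
strengths = association_strength(["dog", "dog", "kitten", "fur", "dog"])
# "dog" receives 3 of 5 responses, so its strength is 0.6.
```

Ranking stimuli-response pairs by such strengths is what allows the abstract network relations to be expressed "in data" and then drawn as a graph, as the abstract describes.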
Complete blood count in acute kidney injury prediction: a narrative review
Joana Gameiro & José António Lopes

Acute kidney injury (AKI) is a complex syndrome defined by a decrease in renal function. The incidence of AKI has risen in the past decades, and it is associated with a negative impact on patient outcomes in the short and long term. Considering the impact of AKI on patient prognosis, research has focused on methods to assess patients at risk of developing AKI, to diagnose subclinical AKI, and on prevention and treatment strategies, for which an understanding of the pathophysiology of AKI is crucial. In this review, we discuss the use of easily available parameters found in a complete blood count to detect patients at risk of developing AKI, to provide an early diagnosis of AKI, and to predict associated patient outcomes.

Acute kidney injury (AKI) is a complex syndrome defined as a rapid decrease in renal function, caused by numerous etiologies [1]. The incidence of AKI has risen in the past decades, occurring in up to 20% of hospitalized patients, and the incidence is higher in critically ill patients [2, 3]. AKI has a negative impact on patient outcomes in the short and long term, increasing the risk of in-hospital mortality, longer hospital stays, cardiovascular events, progression to chronic kidney disease, and long-term mortality [4,5,6]. Overall, mortality rates associated with AKI have decreased, reflecting the impact of the increased recognition of this diagnosis and improvements in patient care. Nevertheless, mortality rates remain significant and increase with AKI severity, specifically in dialysis-requiring AKI [7, 8]. Given its impact on prognosis, it is important to recognize risk factors for its occurrence, such as advanced age, diabetes, hypertension, chronic kidney disease (CKD), cardiovascular disease (CVD), liver disease, lung disease, disease severity, sepsis, shock, nephrotoxicity, and surgery [7,8,9,10] (Table 1).
Table 1 Risk factors for AKI

In the past decade, numerous studies have increased the understanding of the pathophysiology of AKI, which led to the investigation of novel biomarkers. Nevertheless, the use of these biomarkers is limited in clinical practice. The purpose of this review is to discuss the use of easily available parameters found in a complete blood count to detect patients at risk of developing AKI and to predict associated patient outcomes.

AKI definition and biomarkers

The definition of AKI has evolved significantly since 2004, when the Risk, Injury, Failure, Loss of kidney function, End-stage kidney disease (RIFLE) classification was first published [11]. The Acute Kidney Injury Network (AKIN) classification further revised these criteria, improving their diagnostic sensitivity and specificity [7]. The current definition of AKI was proposed by the Kidney Disease Improving Global Outcomes (KDIGO) work group in 2012 [12]. The KDIGO classification defines AKI as an increase in serum creatinine (SCr) of at least 0.3 mg/dl within 48 h, or an increase in SCr to more than 1.5 times baseline which is known or presumed to have occurred within the prior 7 days, or a urine output (UO) decrease to less than 0.5 ml/kg/h for 6 h. This classification also stratifies AKI into stages of severity which correlate with patient prognosis [12] (Table 2).

Table 2 Risk, Injury, Failure, Loss of kidney function, End-stage kidney disease (RIFLE) [11], Acute Kidney Injury Network (AKIN) [7], and Kidney Disease Improving Global Outcomes (KDIGO) [12] classifications

The definition of AKI depends on SCr and UO, which are imperfect markers of AKI [13,14,15]. SCr is an insensitive and delayed marker which is altered by factors affecting its production (age, gender, diet, and muscle mass), elimination (previous renal dysfunction), and secretion (medications).
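The KDIGO creatinine rules described above lend themselves to a simple decision procedure. The sketch below is illustrative only: the urine output criteria and the 7-day baseline timing window are deliberately omitted, and the function name is our own.

```python
def kdigo_stage(scr, baseline, rise_48h=None, on_rrt=False):
    """Simplified KDIGO creatinine-based AKI staging (illustrative sketch).

    scr      -- current serum creatinine (mg/dl)
    baseline -- baseline serum creatinine (mg/dl)
    rise_48h -- absolute SCr rise within 48 h (mg/dl), if known
    on_rrt   -- renal replacement therapy initiated
    Returns 0 (no AKI by these criteria) or stage 1-3.
    UO criteria and the 7-day timing window are omitted.
    """
    ratio = scr / baseline
    # AKI is present if SCr >= 1.5x baseline or rose >= 0.3 mg/dl in 48 h.
    has_aki = ratio >= 1.5 or (rise_48h is not None and rise_48h >= 0.3)
    if not has_aki and not on_rrt:
        return 0
    if on_rrt or ratio >= 3.0 or scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    return 1

# e.g. a doubling of creatinine from 1.0 to 2.0 mg/dl classifies as stage 2:
stage = kdigo_stage(scr=2.0, baseline=1.0)
```

A real implementation would also track the UO thresholds and the timing windows, which is exactly why hourly UO measurement matters, as discussed next.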
UO depends on the patient's volemic and haemodynamic status and on the use of diuretics; it is difficult to assess without a urinary catheter, and its usefulness relies on an hourly assessment, which is time-consuming [13,14,15,16,17]. Therefore, research has focused on the development of new biomarkers, including Cystatin-C, neutrophil gelatinase-associated lipocalin (NGAL), N-acetyl-glucosaminidase (NAG), kidney injury molecule 1 (KIM-1), interleukin-6 (IL-6), interleukin-8 (IL-8), interleukin-18 (IL-18), liver-type fatty acid-binding protein (L-FABP), calprotectin, urine angiotensinogen (AGT), urine microRNAs, insulin-like growth factor-binding protein 7 (IGFBP7), and tissue inhibitor of metalloproteinases-2 (TIMP-2) [18,19,20]. These have been evaluated and validated in multiple settings, though their use in clinical practice is limited due to the associated costs and the lack of evidence of better patient outcomes with their use in detecting AKI [20, 21]. A comprehensive understanding of the pathophysiology is critical to identify easily available predictors of AKI in order to prevent, diagnose, and treat this complication, ultimately improving patient outcomes.

Pathophysiology of AKI

AKI can be systematized clinically into three groups: prerenal, which accounts for up to 60% of cases and results from the functional adaptation to hypoperfusion of structurally normal kidneys; intrinsic renal, which results from structural damage to any component of the renal parenchyma and accounts for up to 40% of cases; and, less frequently, postrenal, resulting from urinary tract obstruction [22,23,24]. Most cases are multifactorial, and following the inciting event numerous pathophysiologic events occur, including hemodynamic instability, microcirculatory dysfunction, tubular cell injury, tubular obstruction, renal congestion, microvascular thrombi, endothelial dysfunction, and inflammation [23,24,25,26]. In recent years, the significant role of inflammation in the pathophysiology of AKI has been emphasized [27].
The immunopathogenesis of AKI involves a complex interaction of damage-associated molecular patterns, pathogen-associated molecular patterns, oxidative stress, hypoxia-inducible factor, the complement system, dendritic cells, neutrophils, lymphocytes, macrophages, platelets, and cytokines [28,29,30,31]. The initial damage to renal endothelial cells and proximal tubular epithelial cells produces cytokines, resulting in infiltration of inflammatory cells, such as neutrophils, lymphocytes, and macrophages, into the interstitium [24, 26,27,28]. These cells cause ischemic tubular epithelial and endothelial injury directly, and indirectly by releasing oxygen radicals, prostaglandins, leukotrienes, and thromboxanes [28, 29]. In addition, they produce pro- and anti-inflammatory cytokines that further increase or decrease inflammation in the kidney [28,29,30]. Local and systemic inflammation plays a significant part in the initiation and extension phases of AKI, in the multi-organ failure associated with AKI, and in the progression to chronic kidney disease that can result if these immune mechanisms persist [28, 30]. The role of immune cells in the pathogenesis of AKI has been highlighted in several animal models, in which direct or indirect inhibition of immune cells attenuated renal injury [31,32,33,34]. Neutrophils are important components of innate immunity and respond rapidly to injury. Their contribution to AKI pathophysiology results from adherence to the endothelium and the release of cytokines, reactive oxygen species, and proteases [30, 35]. Neutrophils regulate the acute phase of inflammation within the first 24 h, which has motivated their use as an early marker of the severity of AKI [36, 37]. Lymphocytes are the main components of adaptive immunity and also play a significant role in the development and maintenance of AKI, directly through cellular damage and indirectly by producing pro-inflammatory cytokines [30,31,32,33,34,35,36,37,38].
Natural killer T cells are presumed to induce renal injury, as they are directly cytotoxic, release pro-inflammatory cytokines, and activate macrophages and neutrophils [39]. CD4+ T cells also contribute to the early phase of AKI. In later phases of AKI, infiltration of lymphocytes and macrophages predominates over neutrophils [40, 41]. T cells appear to have a significant role in the repair phase of AKI and possibly contribute to the transition from AKI to chronic kidney disease. Regulatory T cells are significant in renoprotection and renal regeneration, through anti-inflammatory cytokine release and the promotion of tubular proliferation [42]. Understanding these inflammatory processes may provide insight into future interventions to prevent or attenuate the consequences of AKI. In addition, platelet and leucocyte interactions appear to be a critical step in inflammation, as both innate and adaptive immunity are mediated by the interaction of neutrophils, monocytes, lymphocytes, and platelets [43,44,45]. Platelets adhere to the endothelial wall, modulate vascular permeability, recruit and activate leucocytes, and activate the complement system, thus contributing significantly to the hemodynamic and inflammatory processes of AKI [45, 46]. Research has long focused on quantifying inflammation and determining its prognostic impact on AKI. Given the limitations of contemporary markers of AKI and the prognostic impact of this diagnosis, it is important to evaluate the role of parameters found in a complete blood count as early markers of AKI, disease severity, and prognosis. In the following section, we present a review of the studies focused on the correlation of complete blood count results, AKI, and outcomes (Table 3).
Table 3 Complete blood count parameters, AKI incidence, and outcomes

Complete blood count parameters and AKI

Several studies have demonstrated an association between anemia, which is frequent in hospitalized patients and associated with worse outcomes, and AKI [47]. The contributory effects of anemia on AKI are likely multifactorial. Lower hemoglobin predisposes patients to renal hypoxia and oxidative stress [48]. In addition, many anemic patients have subclinical renal disease, which increases their susceptibility to renal insults [49]. Shema-Didi et al. reported an independent association between preexisting anemia and in-hospital AKI occurrence (OR 1.5 (1.4–1.6), p < 0.001), and that the severity of anemia correlated with AKI requiring renal replacement therapy (RRT) and with mortality, in a retrospective study of 34,802 hospitalized patients [50]. In a retrospective cohort of 1248 post-acute myocardial infarction patients, admission hemoglobin levels and anemia were independently associated with AKI (OR 1.76 (1.02–3.02), p = 0.04) [51]. Abusaada et al. developed a score to predict AKI risk in patients with acute myocardial infarction based on clinical and laboratory data at admission, which included anemia as an independent risk predictor along with decompensated heart failure, baseline renal function, diabetes, and tachycardia [52]. Han et al. identified a hemoglobin threshold for detecting an increased risk of AKI, and an association of anemia with AKI and long-term mortality, in a retrospective study of critically ill patients [53]. Interestingly, a retrospective study by Powell-Tuck et al. of 210 critically ill patients with AKI stage I diagnosed by the AKI Network (AKIN) classification criteria reported that anemia did not increase the risk of progression to AKI stage III [54]. Malhotra et al.
recently developed a risk prediction score in a prospective study of critically ill patients, which included anemia as an independent predictor of AKI [OR 1.477 (0.891–2.449), p = 0.13] [55]. In cardiac surgery patients, Karkouti et al. reported that the combination of preoperative anemia, intraoperative anemia, and red blood cell transfusion on the day of surgery was associated with a 2.6-fold increase in the relative risk of AKI [56]. A Spanish multicenter retrospective cohort of cardiac surgery patients demonstrated that intraoperative anemia was independently associated with an increased risk of AKI [OR 1.32 (1.00–1.75), p = 0.05] [57]. Oprea et al. also demonstrated an association between preoperative and postoperative anemia and both AKI and AKI severity after coronary artery bypass grafting surgery (HR 1.15 (1.01–1.32), p = 0.04), as well as an association between anemia and long-term mortality (HR 1.50 (1.25–1.79), p < 0.001) [58]. Preoperative and postoperative anemia was also independently associated with AKI [OR 1.82 (1.45–2.29), p < 0.01] and 1-year mortality [HR 1.5 (1.29–1.73), p < 0.01] in a prospective study of patients undergoing transcatheter aortic valve implantation [59]. In patients undergoing thoracic endovascular aortic repair, Gorla et al. identified preoperative hemoglobin levels and the postoperative decrease in hemoglobin levels as risk factors for AKI and in-hospital mortality [60]. In a cohort of 2467 patients who underwent total hip replacement arthroplasty, postoperative anemia was associated with postoperative AKI [OR 2.036 (1.369–3.028), p < 0.001] [61]. Anemia was also an independent predictor of contrast-induced AKI in patients undergoing coronary angiography [62]. Anemia has thus long been a marker of poor patient prognosis and is important to take into account when evaluating a patient.

White blood count

The role of white blood cells (WBC) in the pathophysiology of AKI has been described above. Han et al.
demonstrated a U-shaped relationship between WBC count and the risk of AKI and mortality in a prospective cohort of critically ill patients [63]. They propose that the higher risk of AKI with leucocytosis could be related to neutrophilia and its associated pro-inflammatory function, whereas the higher risk of AKI with leucopenia could be related to lymphopenia and monocytopenia, which result in a lack of protective function [63]. Recently, Kim et al. reported the use of a calculated delta-neutrophil index (DNI), which reflects the fraction of immature WBC, to predict sepsis-associated AKI in the emergency department [64]. In their study of septic patients, a DNI ≥ 14.0% was an independent predictor of severe AKI (OR 7.238, p < 0.001), and severe AKI was an independent predictor of 30-day mortality (HR 25.2, p < 0.001) [64].

Neutrophil-to-lymphocyte ratio

The neutrophil–lymphocyte ratio (NLR) is an easily calculated marker of systemic inflammation that has recently been demonstrated to effectively predict AKI in multiple settings. In a retrospective cohort of 590 patients undergoing cardiovascular surgery, an elevated postoperative NLR was significantly associated with an increased risk of postoperative AKI and mortality [65]. Parlar et al. also demonstrated that an increased NLR was associated with postoperative AKI in patients undergoing cardiovascular surgery with cardiopulmonary bypass [66]. An elevated preoperative NLR was also documented as a predictor of AKI in burn surgery patients, in whom a cut-off value of 11.7 was optimal for postoperative AKI prediction (OR 1.094 (1.064–1.125), p < 0.001) [67]. The NLR was associated with an increased risk of contrast-induced AKI, defined as an increase in SCr of 0.5 mg/dl within 3 days, in 1162 patients who underwent an emergency percutaneous coronary intervention (OR 1.105 (1.044–1.169), p = 0.001) [68].
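As a minimal illustration (not taken from any of the reviewed studies), the NLR is simply the ratio of the absolute neutrophil count to the absolute lymphocyte count; the function name and example counts below are hypothetical, and the cut-offs reported above depend on each study's population and setting:

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts (same units, e.g. 10^9/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Illustrative counts (10^9/L): neutrophils 8.4, lymphocytes 1.2
print(round(nlr(8.4, 1.2), 2))  # 7.0
```

Because both counts come from the same routine blood analysis, the ratio is unit-free, which is part of what makes it attractive as a bedside marker.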
In patients with sepsis, the NLR at admission has been demonstrated to be an important predictor of AKI in two retrospective cohorts [69, 70]. Indeed, the systemic inflammation associated with septic AKI is vital in the development of multi-organ failure [71]. In both studies, there was no correlation between NLR and mortality [71, 72]. Alfeilat et al. reported that a cut-off value of NLR > 5.5 could be used for the early detection of AKI in a prospective study of 294 patients at emergency department admission (NLR > 5.5 OR 6.4 (2.7–16), p = 0.031) [72]. In a cohort of cirrhotic patients, we developed a risk score for AKI combining renal, liver, and inflammatory markers, which included the NLR [73]. In this retrospective study, a higher NLR was independently associated with AKI (13.9 ± 16.5 vs 5.5 ± 4.0, p < 0.001; unadjusted OR 1.2 (1.1–1.3), p < 0.001; adjusted OR 1.1 (1.0–1.1), p = 0.028; NLR > 6 OR 2.4 (1.0–5.8), p = 0.041) [73]. Furthermore, an increasing trajectory of NLR over the first 48 h of admission was associated with the development of organ failure in critically ill male trauma patients (OR 2.06 (1.04–4.06), p = 0.04) [74]. In a retrospective study of 13,678 critically ill AKI patients, Fan et al. demonstrated that an NLR higher than 12.14 was a predictor of all-cause mortality (HR 1.83 (1.66–2.02), p < 0.0001) [75]. Although a standardized cut-off value for the NLR has not been defined, this easily calculated marker could be promising for the early diagnosis of AKI and the prediction of worse outcomes.

Platelet volume

As described above, platelets have a significant role in the hemodynamic and inflammatory mechanisms of AKI. Thus, several studies have focused on platelet parameters as predictors of AKI and outcomes. Han et al.
demonstrated that a mean platelet volume (MPV) ≥ 10.2 fL was a significant prognostic risk factor for 28-day mortality in a retrospective analysis of 349 AKI patients requiring continuous renal replacement therapy (CRRT) (HR 1.08 (1.010–1.155), p = 0.023) [76]. An increased MPV reflects increased platelet activity and turnover, which could indicate more severe inflammation and is a risk factor for overall vascular mortality [77]. Li et al. developed the mean platelet volume/platelet count ratio (MPR) from a retrospective cohort of critically ill AKI patients undergoing CRRT [78]. In this study, an MPR > 0.099 was a significant predictor of mortality [AUROC 0.636 (0.563–0.708), p < 0.001] [78]. Thrombocytopenia has often been reported as an indicator of underlying disease severity and worse patient outcomes [79, 80]. Thrombocytopenia was an independent risk factor for postoperative AKI in patients undergoing minimally invasive transapical aortic valve implantation (OR 4.4 (1.6–12.2), p = 0.005) [81]. There was also a significant association between postoperative thrombocytopenia and both postoperative AKI (OR 1.14 (1.09–1.20), p < 0.0001) and mortality (HR 5.46 (3.79–7.89), p < 0.0001) in a retrospective study of patients undergoing coronary artery bypass grafting surgery [82]. In a retrospective study of patients with heat stroke, thrombocytopenia at admission was also a predictor of AKI (OR 37.92 (2.18–87.21), p < 0.01) [83]. In addition, thrombocytopenia was associated with a higher risk of AKI in elderly patients in a prospective study of patients admitted to the emergency department [84]. In patients with hemorrhagic shock, thrombocytopenia in the first 48 h was associated with higher AKI and mortality risk [85]. Moreover, the severity of the platelet count decrease correlated significantly with the severity of AKI and mortality [85]. Hence, it is important to consider thrombocytopenia as an indicator of disease severity when assessing patients with AKI.
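As a sketch of the ratio described by Li et al. [78], the MPR divides the MPV by the platelet count. The units (and therefore the absolute value of the 0.099 cut-off) are study-specific; the units and example values below are assumptions for illustration only:

```python
def mpr(mpv_fl: float, platelet_count: float) -> float:
    """Mean platelet volume / platelet count ratio (MPR).
    Assumed units (illustrative only): MPV in fL, platelet count in 10^3/uL."""
    if platelet_count <= 0:
        raise ValueError("platelet count must be positive")
    return mpv_fl / platelet_count

# Illustrative: MPV 10.2 fL, platelets 100 x 10^3/uL -> MPR 0.102
print(round(mpr(10.2, 100.0), 3))
```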
Neutrophil, lymphocyte, and platelet ratio

Koo et al. developed the neutrophil, lymphocyte, and platelet ratio (NLPR) in the setting of AKI after cardiovascular surgery [86]. The NLPR is calculated as follows: $$({\text{neutrophil count}} \times 100)/({\text{lymphocyte count}} \times {\text{platelet count}}).$$ In their retrospective study of 1099 patients, a higher perioperative NLPR was associated with postoperative AKI (NLPR ≥ 64 OR 2.18 (1.20–3.98), p = 0.011) and 5-year mortality (NLPR > 3 HR 3.54 (2.00–6.28), p < 0.001). This study demonstrated that adding the platelet count to the NLR improved predictive efficacy compared to the NLR or thrombocytopenia alone [86]. We conducted a retrospective analysis in a cohort of patients undergoing major abdominal surgery and confirmed the predictive ability of the NLPR in detecting AKI in this setting [OR 1.05 (1.00–1.10), p = 0.048]; however, in our study, the NLPR did not predict mortality [87]. Further studies should be conducted to validate the use of this ratio as a marker of the risk of AKI and mortality.

Platelet-to-lymphocyte ratio

In recent publications, the platelet-to-lymphocyte ratio (PLR) has been reported as a new marker of poor prognosis [88]. Hudzik et al. reported an association between a higher PLR and the risk of developing contrast-induced AKI in a retrospective analysis of diabetic patients with ST-elevation myocardial infarction (OR 1.22 (1.10–1.34), p < 0.0001) [89]. In this study, a PLR higher than 110 had 71% sensitivity and 63% specificity for predicting AKI [89]. The predictive ability of the PLR was also demonstrated in a study by Sun et al. that also included non-diabetic patients with ST-elevation myocardial infarction undergoing percutaneous coronary intervention (OR 1.432 (1.205–1.816), p = 0.031) [90]. Sun et al. demonstrated that a PLR of 127.5 or higher had 76.8% sensitivity and 69.2% specificity to predict AKI [90]. In the setting of cardiac surgery, Parlar et al.
also demonstrated that an elevated preoperative PLR was associated with early postoperative AKI [OR 1.06 (1.01–1.10), p = 0.01] [66]. The cut-off value to predict AKI determined in this study was 136.85, with a sensitivity of 71.0% and a specificity of 70.7% [66]. Interestingly, in a retrospective cohort of 10,859 critically ill AKI patients, Zheng et al. identified that both low and high PLRs were associated with an increased mortality risk [PLR < 90 HR 1.25 (1.12–1.39), p < 0.001; PLR > 311 HR 1.19 (1.08–1.31), p < 0.001] [91]. A lower PLR could result from thrombocytopenia, which is also correlated with a worse prognosis in critically ill patients [91]. While promising, the PLR needs to be analyzed in further studies to confirm its validity as a diagnostic and prognostic marker. The complete blood count could be a useful tool to estimate the risk of developing AKI and mortality. Several parameters and indirect inflammation markers have been studied over the years with encouraging results. Although some studies defined AKI with different criteria, the majority used the KDIGO classification. The availability, low cost, and efficacy of these markers are an important advantage. The evidence is significant, and the results have been replicated in large populations and many different settings, as depicted in Table 3. Nonetheless, some important limitations have to be addressed. First, the majority of these studies have a retrospective, single-center design, which limits the strength of the results. Second, research has established a correlation between these parameters, AKI, and mortality, which could reflect disease severity; causality between them has not been proven. Finally, standardized cut-off parameters remain to be determined.
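For reference, the NLPR formula quoted earlier and the PLR can be written directly in code. Note that the absolute values (and therefore cut-offs such as NLPR ≥ 64 or PLR > 311) depend on the units each study used for the underlying counts; the example counts below are purely illustrative:

```python
def nlpr(neutrophils: float, lymphocytes: float, platelets: float) -> float:
    """(neutrophil count x 100) / (lymphocyte count x platelet count),
    following the formula of Koo et al. [86]."""
    return (neutrophils * 100) / (lymphocytes * platelets)

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-to-lymphocyte ratio."""
    return platelets / lymphocytes

# Illustrative counts in one unit system: neutrophils 8.0, lymphocytes 1.0, platelets 200
print(nlpr(8.0, 1.0, 200.0))  # 4.0
print(plr(200.0, 1.0))        # 200.0
```

The NLPR falls as the platelet count rises, so thrombocytopenia pushes the ratio upward, which is consistent with the finding that adding platelets to the NLR improved prediction.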
Therefore, before generalizing the use of the complete blood count to predict AKI, further studies must be performed to confirm and validate these results, preferably with a prospective design, larger populations, and longer follow-up. AKI is a frequent complication in hospitalized patients, with a negative impact on short- and long-term outcomes. It is crucial to identify easily available predictors of AKI to prevent, diagnose, and treat this complication, and to minimize the associated morbidity and mortality. Research has developed new biomarkers, although their widespread use in clinical practice is still limited. The complete blood count could be useful to estimate the risk of developing AKI and mortality. Anemia, leucopenia, leucocytosis, and thrombocytopenia can indicate illness severity. The NLR, NLPR, and PLR are simple and straightforward markers calculated from routine blood analysis, which have proven effective in predicting AKI and outcomes in multiple settings.

References

Kellum JA, Prowle JR. Paradigms of acute kidney injury in the intensive care setting. Nat Rev Nephrol. 2018;14(4):217–30. Hoste EA, Bagshaw SM, Bellomo R, Cely CM, Colman R, Cruz DN, et al. Epidemiology of acute kidney injury in critically ill patients: the multinational AKI-EPI study. Intensive Care Med. 2015;41:1411–23. Bouchard J, Acharya A, Cerda J, Maccariello ER, Madarasu RC, Tolwani AJ, et al. A prospective international multicenter study of AKI in the intensive care unit. Clin J Am Soc Nephrol. 2015;10:1324–31. Chertow G, Burdick E, Honour M, Bonventre J, Bates D. Acute kidney injury, mortality, length of stay, and costs in hospitalized patients. J Am Soc Nephrol. 2005;16(11):3365–70. Hoste EAJ, Kellum JA, Selby NM, Zarbock A, Palevsky PM, Bagshaw SM, et al. Global epidemiology and outcomes of acute kidney injury. Nat Rev Nephrol. 2018;14(10):607–25. Wald R, Quinn RR, Adhikari NK, Burns KE, Friedrich JO, Garg AX, et al. Risk of chronic dialysis and death following acute kidney injury. Am J Med.
2012;125:585–93. Mehta RL, Kellum JA, Shah SV, Molitoris BA, Ronco C, Warnock DG, et al. Acute kidney injury network: report of an initiative to improve outcomes in acute kidney injury. Crit Care. 2007;11(2):R31. Cruz DN, Ronco C. Acute kidney injury in the intensive care unit: current trends in incidence and outcome. Crit Care. 2007;11:149. Ali T, Khan I, Simpson W, Prescott G, Townend J, Smith W, et al. Incidence and outcomes in acute kidney injury: a comprehensive population-based study. J Am Soc Nephrol. 2007;18:1292–8. Lameire NH, Bagga A, Cruz D, De Maeseneer J, Endre Z, Kellum JA, et al. Acute kidney injury: an increasing global concern. Lancet. 2013;382(9887):170–9. Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky P, Acute Dialysis Quality Initiative workgroup. Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care. 2004;8:R204–12. Kdigo AK. KDIGO clinical practice guideline for acute kidney injury. Kidney Int Suppl. 2012;2:S1–138. Thomas ME, Blaine C, Dawnay A, Devonald MA, Ftouh S, Laing C, et al. The definition of acute kidney injury and its use in practice. Kidney Int. 2015;87(1):62–73. Ostermann M. Diagnosis of acute kidney injury: kidney disease improving global outcomes criteria and beyond. Curr Opin Crit Care. 2014;20(6):581–7. Bellomo R, Ronco C, Mehta RL, Asfar P, Boisramé-Helms J, Darmon M, et al. Acute kidney injury in the ICU: from injury to recovery: reports from the 5th Paris International Conference. Ann Intensive Care. 2017;7(1):49. Waikar SS, Betensky RA, Emerson SC, Bonventre JV. Imperfect gold standards for kidney injury biomarker evaluation. J Am Soc Nephrol. 2012;23:13–21. Macedo E, Malhotra R, Claure-Del Granado R, Fedullo P, Mehta RL. Defining urine output criterion for acute kidney injury in critically ill patients. Nephrol Dial Transplant. 2011;26(2):509–15. 
Lima C, Macedo E. Urinary biochemistry in the diagnosis of acute kidney injury. Dis Markers. 2018;2018:4907024. Vanmassenhove J, Vanholder R, Nagler E, Van Biesen W. Urinary and serum biomarkers for the diagnosis of acute kidney injury: an in-depth review of the literature. Nephrol Dial Transplant. 2013;28(2):254–73. Ostermann M, Joannidis M. Biomarkers for AKI improve clinical practice: no. Intensive Care Med. 2015;41(4):618–22. Honore PM, Spapen HD. Oxidative stress markers and septic acute kidney injury: novel research avenue or road to nowhere? Ann Intensive Care. 2016;6(1):100. Ostermann M, Liu K. Pathophysiology of AKI. Best Pract Res Clin Anaesthesiol. 2017;31(3):305–14. Case J, Khan S, Khalid R, Khan A. Epidemiology of acute kidney injury in the intensive care unit. Crit Care Res Pract. 2013;2013:479730. Akcay A, Nguyen Q, Edelstein CL. Mediators of inflammation in acute kidney injury. Mediators Inflamm. 2010;2009:137072. Basile DP, Anderson MD, Sutton TA. Pathophysiology of acute kidney injury. Compr Physiol. 2012;2(2):1303–53. Devarajan P. Update on mechanisms of ischemic acute kidney injury. J Am Soc Nephrol. 2006;17(6):1503–20. Kinsey GR, Li L, Okusa MD. Inflammation in acute kidney injury. Nephron Exp Nephrol. 2008;109(4):e102–7. Radi ZA. Immunopathogenesis of Acute Kidney Injury. Toxicol Pathol. 2018;46(8):930–43. Lee DW, Faubel S, Edelstein CL. Cytokines in acute kidney injury (AKI). Clin Nephrol. 2011;76(3):165–73. Jang HR, Rabb H. Immune cells in experimental acute kidney injury. Nat Rev Nephrol. 2015;11(2):88–101. Hayama T, Hayama T, Funao K, Naganuma T, Kawahito Y, Sano H, et al. Benefical effect of neutrophil elastase inhibitor on renal warm ischemia reperfusion injury in the rat. Transplant Proc. 2006;38:2201–2. Sakr M, Zetti G, McClain C, Gavaler J, Nalesnik M, Todo S, et al. The protective effect of FK506 pretreatment against renal ischemia/reperfusion injury in rats. Transplantation. 1992;53:987–91. Jones EA, Shoskes DA. 
The effect of mycophenolate mofetil and polyphenolic bioflavonoids on renal ischemia reperfusion injury and repair. J Urol. 2000;163:999–1004. Costa NA, Gut AL, Azevedo PS, Tanni SE, Cunha NB, Magalhães ES, et al. Erythrocyte superoxide dismutase as a biomarker of septic acute kidney injury. Ann Intensive Care. 2016;6(1):95. Lee SA, Noel S, Sadasivam M, Hamad ARA, Rabb H. Role of immune cells in acute kidney injury and repair. Nephron. 2017;137(4):282–6. Zuk A, Bonventre JV. Acute kidney injury. Annu Rev Med. 2016;67:293–307. Friedewald JJ, Rabb H. Inflammatory cells in ischemic acute renal failure. Kidney Int. 2004;66:486–91. Weller S, Varrier M, Ostermann M. Lymphocyte function in human acute kidney injury. Nephron. 2017;137(4):287–93. Zhang ZX, Wang S, Huang X, Min WP, Sun H, Liu W, et al. NK cells induce apoptosis in tubular epithelial cells and contribute to renal ischemia-reperfusion injury. J Immunol. 2008;181:7489–98. Bonventre JV. Pathophysiology of acute kidney injury: roles of potential inhibitors of inflammation. Contrib Nephrol. 2007;156:39–46. Burne MJ, Daniels F, Ghandour A, Mauiyyedi S, Colvin RB, O'Donnell MP, et al. Identification of the CD4(+) T cell as a major pathogenic factor in ischemic acute renal failure. J Clin Invest. 2001;108:1283–90. Kinsey GR, Sharma R, Huang L, Li L, Vergis AL, Ye H, et al. Regulatory T cells suppress innate immunity in kidney ischemia-reperfusion injury. J Am Soc Nephrol. 2009;20:1744–53. Ioannou A, Kannan L, Tsokos GC. Platelets, complement and tissue inflammation. Autoimmunity. 2013;46(1):1–5. Engelmann B, Massberg S. Thrombosis as an intravascular effector of innate immunity. Nat Rev Immunol. 2013;13(1):34–45. Li Z, Yang F, Dunn S, Gross AK, Smyth SS. Platelets as immune mediators: their role in host defense responses and sepsis. Thromb Res. 2011;127:184–8. Jansen M, Florquin S, Roelofs J. The role of platelets in acute kidney injury. Nat Rev Nephrol. 2018;14(7):457–71. 
du Cheyron D, Parienti JJ, Fekih-Hassen M, Daubin C, Charbonneau P. Impact of anemia on outcome in critically ill patients with severe acute renal failure. Intensive Care Med. 2005;31:1529–36. Darby PJ, Kim N, Hare GM, Tsui A, Wang Z, Harrington A, et al. Anemia increases the risk of renal cortical and medullary hypoxia during cardiopulmonary bypass. Perfusion. 2013;28(6):504–11. Estrella MM, Astor BC, Kottgen A, Selvin E, Coresh J, Parekh RS. Prevalence of kidney disease in anaemia differs by GFR estimating method: the Third National Health and Nutrition Examination Survey (1988–94). Nephrol Dial Transplant. 2010;25:2542–8. Shema-Didi L, Ore L, Geron R, Kristal B. Is anemia at hospital admission associated with in-hospital acute kidney injury occurrence? Nephron Clin Pract. 2010;115(2):c168–76. Shacham Y, Gal-Oz A, Leshem-Rubinow E, Arbel Y, Flint N, Keren G, Roth A, et al. Association of admission hemoglobin levels and acute kidney injury among myocardial infarction patients treated with primary percutaneous intervention. Can J Cardiol. 2015;31(1):50–5. Abusaada K, Yuan C, Sabzwari R, Butt K, Maqsood A. Development of a novel score to predict the risk of acute kidney injury in patient with acute myocardial infarction. J Nephrol. 2017;30(3):419–25. Han SS, Baek SH, Ahn SY, Chin HJ, Na KY, Chae DW, et al. Anemia is a risk factor for acute kidney injury and long-term mortality in critically ill patients. Tohoku J Exp Med. 2015;237(4):287–95. Powell-Tuck J, Crichton S, Raimundo M, Camporota L, Wyncoll D, Ostermann M. Anaemia is not a risk factor for progression of acute kidney injury: a retrospective analysis. Crit Care. 2016;8(20):52. Malhotra R, Kashani KB, Macedo E, Kim J, Bouchard J, Wynn S, et al. A risk prediction score for acute kidney injury in the intensive care unit. Nephrol Dial Transplant. 2017;32(5):814–22. Karkouti K, Grocott HP, Hall R, Jessen ME, Kruger C, Lerner AB, et al. 
Interrelationship of preoperative anemia, intraoperative anemia, and red blood cell transfusion as potentially modifiable risk factors for acute kidney injury in cardiac surgery: a historical multicentre cohort study. Can J Anaesth. 2015;62(4):377–84. Duque-Sosa P, Martínez-Urbistondo D, Echarri G, Callejas R, Iribarren MJ, Rábago G, et al. Perioperative hemoglobin area under the curve is an independent predictor of renal failure after cardiac surgery. Results from a Spanish multicenter retrospective cohort study. PLoS ONE. 2017;12(2):e0172021. Oprea AD, Del Rio JM, Cooter M, Green CL, Karhausen JA, Nailer P, et al. Pre- and postoperative anemia, acute kidney injury, and mortality after coronary artery bypass grafting surgery: a retrospective observational study. Can J Anaesth. 2018;65(1):46–59. Arai T, Morice MC, O'Connor SA, Yamamoto M, Eltchaninoff H, Leguerrier A, FRANCE 2 Registry Investigators, et al. Impact of pre- and post-procedural anemia on the incidence of acute kidney injury and 1-year mortality in patients undergoing transcatheter aortic valve implantation (from the French Aortic National CoreValve and Edwards 2 [FRANCE 2] Registry). Catheter Cardiovasc Interv. 2015;85(7):1231–9. Gorla R, Tsagakis K, Horacek M, Mahabadi AA, Kahlert P, Jakob H, et al. Impact of preoperative anemia and postoperative hemoglobin drop on the incidence of acute kidney injury and in-hospital mortality in patients with type B acute aortic syndromes undergoing thoracic endovascular aortic repair. Vasc Endovasc Surg. 2017;51(3):131–8. Choi YJ, Kim SO, Sim JH, Hahm KD. Postoperative anemia is associated with acute kidney injury in patients undergoing total hip replacement arthroplasty: a retrospective study. Anesth Analg. 2016;122(6):1923–8. Sreenivasan J, Zhuo M, Khan MS, Li H, Fugar S, Desai P, et al. Anemia (hemoglobin ≤ 13 g/dL) as a risk factor for contrast-induced acute kidney injury following coronary angiography. Am J Cardiol. 2018;122(6):961–5. 
February 2019, 12(1): 1-26. doi: 10.3934/dcdss.2019001 Dirichlet-to-Neumann or Poincaré-Steklov operator on fractals described by d-sets Kevin Arfi and Anna Rozanova-Pierrat, Laboratoire de Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec, Université Paris-Saclay, Bâtiment Bouygues, 3 rue Joliot-Curie, 91190 Gif-sur-Yvette, France * Corresponding author: Anna Rozanova-Pierrat Received February 2017 Revised June 2017 Published July 2018 In the framework of Laplacian transport, described by a Robin boundary value problem in an exterior domain in $\mathbb{R}^n$, we generalize the definition of the Poincaré-Steklov operator to $d$-set boundaries, $n-2< d<n$, and give its spectral properties, comparing them to the spectra of the interior domain and of a truncated domain, the latter considered as an approximation of the exterior case. The well-posedness of the Robin boundary value problems for the truncated and exterior domains is established in the general framework of $n$-sets. The results are obtained thanks to a generalization of the continuity and compactness properties of the trace and extension operators in Sobolev, Lebesgue and Besov spaces, in particular by a generalization of the classical Rellich-Kondrachov theorem of compact embeddings for $n$- and $d$-sets. Keywords: Poincaré-Steklov operator, d-sets, Laplacian transport, fractal. Mathematics Subject Classification: Primary: 35J25, 46E35; Secondary: 47A10. Citation: Kevin Arfi, Anna Rozanova-Pierrat. Dirichlet-to-Neumann or Poincaré-Steklov operator on fractals described by d-sets. Discrete & Continuous Dynamical Systems - S, 2019, 12 (1) : 1-26. doi: 10.3934/dcdss.2019001
Figure 1. Example of the considered domains: $\Omega_0$ (the von Koch snowflake) is the bounded domain, bounded by a compact boundary $\Gamma$, which is a $d$-set (see Definition 2.3) with $d = \log 4/ \log 3>n-1 = 1$. The truncated domain $\Omega_S$ lies between the boundary $\Gamma$ and the boundary $S$ (given by the same von Koch fractal as $\Gamma$). The boundaries $\Gamma$ and $S$ do not intersect and are separated here by the boundary of a ball $B_r$ of radius $r>0$.
The domain, bounded by $S$, is called $\Omega_1 = \overline{\Omega}_0\cup \Omega_S$, and the exterior domain is $\Omega = \mathbb{R}^n\setminus \overline{\Omega}_0$.
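As an aside on the geometry in the Figure 1 caption: the value $d = \log 4/\log 3$ follows from the self-similarity of the von Koch curve, in which each segment is replaced by 4 copies scaled by 1/3. A minimal numerical check (illustrative only, not part of the original article):

```python
import math

def similarity_dimension(copies: int, scale: float) -> float:
    """Dimension d solving copies * scale**d = 1 for a self-similar set."""
    return math.log(copies) / math.log(1.0 / scale)

# Von Koch curve: each segment is replaced by 4 copies at 1/3 scale.
d = similarity_dimension(4, 1.0 / 3.0)
print(f"d = {d:.4f}")   # ~1.2619
print(1 < d < 2)        # so the boundary satisfies n-1 < d < n for n = 2
```

This confirms the caption's claim that the snowflake boundary has dimension strictly between 1 and 2.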
Super-strong magnetic field-dominated ion beam dynamics in focusing plasma devices A. Morace, Y. Abe, J. J. Honrubia, N. Iwata, Y. Arikawa, Y. Nakata, T. Johzaki, A. Yogo, Y. Sentoku, K. Mima, T. Ma, D. Mariscal, H. Sakagami, T. Norimatsu, K. Tsubakimoto, J. Kawanaka, S. Tokita, N. Miyanaga, H. Shiraga, Y. Sakawa, M. Nakai, H. Azechi, S. Fujioka & R. Kodama. Scientific Reports volume 12, Article number: 6876 (2022). High energy density physics is the field of physics dedicated to the study of matter and plasmas in extreme conditions of temperature, density and pressure. It encompasses multiple disciplines such as material science, planetary science, and laboratory and astrophysical plasma science.
For the latter, high energy density states can be accompanied by extreme radiation environments and super-strong magnetic fields. Creating high energy density states in the laboratory consists of concentrating or depositing large amounts of energy in a small mass, typically a solid sample or dense plasma, over a time shorter than the typical timescales of heat conduction and hydrodynamic expansion. Laser-generated, high current-density ion beams constitute an important tool for the creation of high energy density states in the laboratory. Focusing plasma devices, such as cone targets, are necessary to focus and direct these intense beams towards the sample or dense plasma to be heated, while protecting the proton-generation foil from the harsh environment typical of an integrated high-power laser experiment. A full understanding of the ion beam dynamics in focusing devices is therefore necessary to properly design and interpret the numerous experiments in the field. In this work, we report a detailed investigation of the dynamics of large-scale, kilojoule-class laser-generated ion beams in focusing devices, and we demonstrate that high-brilliance ion beams compress magnetic fields to amplitudes exceeding tens of kilo-Tesla, which in turn play a dominant role in the focusing process, resulting either in a worsening or an enhancement of the focusing capabilities depending on the target geometry. A significant fraction of the visible Universe is composed of matter in extreme conditions of temperature, density and pressure. When the pressure in a physical system exceeds 1 Mbar, it is defined as a high energy density (HED) state; this corresponds to the pressure required to deform the water molecule, in other words the pressure at which water becomes compressible, corresponding to an energy density exceeding 10^11 J/m^3 [1].
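The 1 Mbar threshold and the 10^11 J/m^3 energy density quoted above are the same statement in different units, since 1 Pa is dimensionally 1 J/m^3. A one-line sanity check (illustrative sketch):

```python
# 1 Mbar expressed as an energy density; 1 Pa is dimensionally 1 J/m^3.
BAR_IN_PA = 1.0e5                    # pascals per bar
p_hed = 1.0e6 * BAR_IN_PA            # 1 Mbar in Pa
u_hed = p_hed                        # equivalent energy density in J/m^3
print(f"HED threshold ~ {u_hed:.0e} J/m^3")  # 1e+11 J/m^3
```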
In nature we find numerous examples of HED states: from the cores of gas-giant planets, where extreme pressures modify the fundamental properties of hydrogen and water-ice, leading to highly conductive interiors at the origin of the large magnetic fields characteristic of these planets [2,3,4], to the interiors of brown dwarfs [5], stars and more exotic objects like white dwarfs and neutron stars, where super-strong magnetic fields of amplitudes of 10^6–10^7 Gauss and 10^11 Gauss, respectively, are often associated with these objects and determine the plasma physics of their surroundings [6,7]. These fields are inferred through UV spectroscopy showing extreme Zeeman splitting in the line emission of plasma ions in the exotic atmospheres of these objects [8]. High-power lasers have made it possible to recreate HED conditions in the laboratory and to begin a more experimental, less purely observational study, allowing, for example, matter to be compressed to densities and pressures like those found in the interiors of giant planets [9], and measurements to be conducted directly that would otherwise be impossible, offering insights into worlds once accessible only theoretically. One of the ways to create HED conditions in matter is through ultra-fast heating of material samples and compressed plasmas using high-brilliance laser-generated ion and proton beams. Since their discovery over two decades ago [10], high-intensity laser-generated proton and ion beams have been intensively investigated by the scientific community due to their remarkable properties, which make them ideal ion sources for application to HED physics. The acceleration method treated in this work is called "target normal sheath acceleration" (TNSA): ions from the surface of a thin foil are accelerated by a charge-separation electric field set up by relativistic electrons which are generated in the intense laser-plasma interaction, propagate through the target volume and expand into vacuum [11,12].
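The charge-separation sheath field described above can be estimated, to order of magnitude, as E ~ sqrt(n_e k_B T_hot / eps0), with n_e and T_hot the hot-electron density and temperature. The numbers below are illustrative assumptions, not values measured in this work:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EV = 1.602e-19     # joules per eV

def sheath_field_v_per_m(n_e_m3: float, t_hot_ev: float) -> float:
    """Order-of-magnitude TNSA sheath field, E ~ sqrt(n_e * k_B * T_hot / eps0)."""
    return math.sqrt(n_e_m3 * t_hot_ev * EV / EPS0)

# Assumed hot-electron parameters: ~1e20 cm^-3 density, ~1 MeV temperature.
e_sheath = sheath_field_v_per_m(1e26, 1e6)
print(f"sheath field ~ {e_sheath:.1e} V/m")
```

For these assumed parameters the field is of order TV/m, which is why protons reach multi-MeV energies over micrometre-scale sheaths.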
TNSA proton beams are accelerated to multi-MeV energies in a few picoseconds. They carry a significant fraction of the laser energy (0.5 to 10%, depending on the laser system and targets), and they are highly directional and characterized by a quasi-laminar flow that guarantees their focusability [13,14]. These properties make them suitable sources for ultra-fast heating of solid samples or dense plasmas to tens or hundreds of eV, attaining plasma pressures typical of the interiors of giant planets and thus allowing the study of the properties and equations of state of matter in these extreme conditions [15,16]. Here we focus our attention on the application of intense proton beams to the creation of HED states, and on their control and focusing by plasma devices such as cone targets. This type of device was first envisioned for application to relativistic electron-based Fast Ignition research [17], where the function of the cone is simply to keep a path for the ultra-intense laser clear of the ablation plasma originating from the implosion of the fusion fuel capsule. Cone targets were subsequently re-proposed for ion/proton-based Fast Ignition [18,19], with the addition of a hemispherical shell (hemi) to generate the ion beam, mounted on the inner cone wall at about 300 µm from the cone tip. In this case the cone has two functions: to keep the path clear for the intense laser, as mentioned before, and to focus the proton beam down towards the tip of the cone by means of charge-separation electric fields arising from the fast-electron propagation along the cone wall.
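The 0.5–10% laser-to-proton conversion range quoted above sets the overall beam energy budget. The sketch below turns that range into a beam energy and an approximate proton number, assuming (purely for illustration) a kJ-class driver and a ~5 MeV mean proton energy:

```python
MEV_IN_J = 1.602e-13  # joules per MeV

def tnsa_beam_budget(laser_energy_j, efficiency, mean_energy_mev=5.0):
    """Beam energy and proton number for a given conversion efficiency."""
    e_beam = laser_energy_j * efficiency
    n_protons = e_beam / (mean_energy_mev * MEV_IN_J)
    return e_beam, n_protons

for eta in (0.005, 0.10):  # the 0.5-10% range cited in the text
    e_beam, n_p = tnsa_beam_budget(600.0, eta)
    print(f"eta = {eta:.1%}: {e_beam:.1f} J, ~{n_p:.1e} protons")
```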
Several works have been dedicated to the physics of cone targets for ion focusing [20,21,22]. In particular, a seminal work by Bartal et al. [23] investigated proton focusing with tip-less cone targets and determined that electric fields play a dominant role both inside the cone, as a focusing field, and outside the cone, where the focused proton beam is subject to hot-electron pressure that results in a defocusing, radial electric field surrounding the ion beam waist, ultimately driving its expansion. In this work we show that these previous results represent only a partial picture of the proton beam dynamics in focusing devices, limited to relatively low drive laser energies and rather large-aperture, short-length cone targets. For higher laser energies and cone geometries more appropriate to high-temperature plasma heating and inertial fusion experiments, we demonstrate that return-current-generated magnetic fields play a dominant role in the proton beam dynamics and constitute the root cause of proton beam focusing to the cone tip. Depending on the cone geometry, these fields either worsen the focusing, resulting in a ring-like proton emission, or strongly enhance the collimation of the exiting proton beam, even overcoming the hot-electron pressure and allowing proton beam divergences lower than 10°. This has important implications both for the understanding of the physics of focusing plasma devices and for applications of these ion sources to HED physics. Circular proton beams could be used to drive or enhance hydrodynamic implosions, while highly collimated proton beams can be used for efficient heating of samples and precision irradiation of material and biological samples at a distance.
An extremely relevant corollary of this work, which could have major implications for extreme-field science and laboratory astrophysics, is the generation of macroscopic magnetic fields with amplitudes exceeding 10 kilo-Tesla, obtained by compression of relatively mild, 0.5 kilo-Tesla magnetic fields by the highly energetic TNSA plasma inside the cone cavity. By studying the dynamics of TNSA proton beams in focusing devices, we also performed the first successful proton radiography of > 10 kT magnetic fields. This provides a method to generate, and study under controlled conditions, the physics of highly magnetized plasmas. We investigate two types of cone targets, as shown in Fig. 1A,B. The first is a classic ion fast-ignition cone target, consisting of a free-standing gold cone with a 50 µm tip and a hemi attached to the inner cone walls at 300 µm from the cone tip. The second is a tip-less buried cone target, consisting of a 300 µm long tip-less gold cone embedded in a cylinder of epoxy resin 800 µm in diameter, with a hemi directly attached to the cylinder, coaxial with the cone at 300 µm from the cone tip. Schematic of the experimental setup. (A) Schematic of the classic free-standing cone as first proposed for proton fast ignition. The LFEX laser is focused at normal incidence due to the cone geometry. The RCF stack is positioned in the top-left quadrant in order to avoid the LFEX 0th-order light (direct reflections from the compressor gratings). (B) Schematic of the tip-less buried cone target. The cone geometry allows for a 45° laser incidence angle. In this way it is possible to avoid the risk of 0th-order light irradiating the RCF stack and to collect the majority of the proton beam. The aperture at the cone tip is 50 µm in diameter and the radius of the hemi is 350 µm for both types of cones. The latter target is rather similar to the cones investigated in the above-referenced works, but with a smaller tip aperture.
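The amplification from a ~0.5 kT seed to more than 10 kT described above is consistent with a simple frozen-in-flux estimate: in an ideal-MHD compression, B scales inversely with the area threaded by the field. The sketch below (an idealized estimate of my own, not the simulation model used in this work) gives the required compression ratio:

```python
import math

b_seed = 0.5e3       # ~0.5 kT seed field inside the cone cavity (Tesla)
b_final = 10.0e3     # >10 kT field reached after compression (Tesla)

# Frozen-in flux: B * A = const, so the area compression ratio is B_final/B_seed.
area_ratio = b_final / b_seed
linear_ratio = math.sqrt(area_ratio)   # corresponding radial compression
print(f"area compression   ~ x{area_ratio:.0f}")
print(f"linear compression ~ x{linear_ratio:.1f}")
```

A factor of ~20 in area, i.e. less than a factor of 5 radially, is thus sufficient under this idealization.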
As the main diagnostic we used radiochromic film (RCF) stacks [24,25], as they provide both spatially and energy-resolved information on the generated proton beams. Experiment on LFEX laser. The experiment was performed on the LFEX laser at the Institute of Laser Engineering, Osaka University. In this experiment, LFEX delivered 600 J of laser energy on target in 1.5 ps with four beamlets combined, for a nominal intensity of approximately 1 × 10^19 W/cm^2. In the case of the classic free-standing cone, LFEX was focused at normal incidence on the hemi, along the cone axis direction. The RCF stack was aligned normal to the cone axis, 25 mm from the cone tip, with its top-left corner adjacent to the cone axis. This peculiar setup is required to prevent the RCF stack from being hit by the so-called "0th-order light", constituted by the direct reflections of the uncompressed (2 ns) and partially compressed (10 ps) LFEX beams off the compressor gratings (due to the diamond compressor design), which are focused by the parabola about 8 mm to the side of the LFEX focus (see Fig. 1A). The tip-less buried cone target geometry instead provides more flexibility and allows for large laser incidence angles as well. For this target we opted for 45 degrees incidence, which allows the entire proton beam to be collected instead of a single quadrant as in the free-standing cone case, as schematized in Fig. 1B. Simple free-standing hemi targets, identical to those attached to the cone targets and aligned at 45 degrees incidence to the LFEX laser, were also investigated for comparison. Experimental results for free-standing cone targets. Protons accelerated from free-standing cone targets have a maximum energy of 12 MeV and display a clearly diverging annular pattern. The average divergence angle lies between 10 and 30°. For middle to high energies (E > 6 MeV), nearly all of the protons are deflected at angles greater than or equal to 10 degrees, while at lower energies we also observe protons at smaller angles.
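The nominal intensity quoted above fixes the implied focal-spot size once the pulse energy and duration are known. Assuming (for illustration only) a flat-top spot, I = E/(τ·A):

```python
import math

def focal_spot_diameter_um(energy_j, duration_s, intensity_w_cm2):
    """Flat-top spot diameter implied by pulse energy, duration and intensity."""
    power_w = energy_j / duration_s
    area_cm2 = power_w / intensity_w_cm2
    return 2.0 * math.sqrt(area_cm2 / math.pi) * 1.0e4  # cm -> um

# LFEX shot parameters quoted above: 600 J, 1.5 ps, ~1e19 W/cm^2 nominal.
d = focal_spot_diameter_um(600.0, 1.5e-12, 1.0e19)
print(f"implied spot diameter ~ {d:.0f} um")
```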
The original raw data, post-processed data and analysis are displayed in Fig. 2. The apparent high density of protons at small angles in the lower-energy films is due to the logarithmic response of the RCF (ionizing radiation-induced optical density). Post-processing of the image, however, reveals that the majority of the protons are emitted at angles greater than 10° even at low energy. Example of experimental and post-processed data for the free-standing cone target. (A) Raw RCF data showing the proton beam distribution for the free-standing cone target. The RCF films corresponding to the post-processed data in (B) are framed in red. (B) Post-processed data with 3-dimensional spectral unfolding showing the angularly resolved proton energy spectrum. The post-processed results are displayed in a different color scale to make them clear to the reader. The unit of solid angle \(d\Omega\) is fixed at 7 × 10^−6 sr. A clear ring-like pattern with large divergence is observed, with a complete absence of protons within a 0.225 sr solid angle for energies > 6 MeV. At lower energies protons are also emitted at smaller angles; however, the post-processed data show that even at low energies the distribution presents the ring-like pattern with an average divergence angle > 10°. (C) Proton energy spectrum obtained from the spectral unfolding of the RCF data. (D) Angular distribution of the proton beam for all energies; the inset image shows the angular distribution for protons with energy ≥ 9.2 MeV, which are not clearly visible in the large-format image. Experimental results for tip-less buried cone targets. Tip-less buried cone targets yielded very different results. From the RCF stack data we observe that the proton beam presents a highly collimated component that extends from low energies to about 13.5 MeV, followed at higher energies by a component with larger divergence, the latter containing only about 0.1% of the proton beam energy (see Fig. 3A–C).
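The 0.225 sr proton-free region quoted above can be translated into an equivalent cone half-angle through Ω = 2π(1 − cos θ); a quick check (illustrative sketch):

```python
import math

def solid_angle_sr(half_angle_deg: float) -> float:
    """Solid angle of a cone of given half-angle: 2*pi*(1 - cos(theta))."""
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(half_angle_deg)))

def half_angle_deg(omega_sr: float) -> float:
    """Inverse of the above: cone half-angle for a given solid angle."""
    return math.degrees(math.acos(1.0 - omega_sr / (2.0 * math.pi)))

print(f"0.225 sr  -> half-angle ~ {half_angle_deg(0.225):.1f} deg")
print(f"10 deg    -> {solid_angle_sr(10.0):.3f} sr")
```

So the proton-free zone corresponds to a cone of roughly 15° half-angle, consistent with the quasi-totality of protons being deflected to ≥ 10°.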
The highly collimated component has divergence angles between 11 degrees at low energy and 6 degrees at high energy, as shown in Fig. 3D. These values are far lower than typical for TNSA-accelerated protons, where the lowest divergence, corresponding to the highest energy protons, is about 10 degrees half-angle for maximum proton energies of 15–20 MeV10,26,27, while the majority of the beam is distributed over much wider angles, up to 25–30 degrees half-angle for the lower-energy component.

Example of experimental and post-processed data for the tip-less buried cone target. (A) Raw RCF data showing the proton beam distribution for the tip-less buried cone target. The RCF films corresponding to the post-processed data in (B) are framed in red. (B) Selection of post-processed data from the measurement in (A). The presence of a highly collimated proton beam component up to 13 MeV is evident, characterized by divergence angles between 11 degrees at low energy and 6.5 degrees at higher energy. Measurement on the first two RCF foils, corresponding to energies of 3.4 and 4.8 MeV, is more difficult due to RCF saturation in the green channel at the beam center. For energies exceeding 13 MeV the collimated component fades and gives way to a broader distribution with significantly higher divergence, starting at about 25 degrees and decreasing at higher energies, similarly to typical TNSA proton beams. (C) Proton energy spectrum calculated from the spectral unfolding of the RCF data. (D) Proton beam angular distribution for all energies, clearly showing a divergence ≤ 11 degrees for proton energies between 4.8 and 9.8 MeV. The enclosed plots represent the angular distribution for (top) the mid-energy component, showing that the collimation is maintained up to 12.7 MeV, and (bottom) the high-energy component, with the higher divergence typical of classic TNSA proton beams.
From the RCF post-processing analysis we find that a total of 6.47 × 10¹² protons were accelerated, for a total beam energy of 5.3 J (about 1% laser-to-proton energy conversion efficiency), resulting in an average current density of 2 × 10⁹ A/cm², which can be efficiently used for material sample and plasma heating in HED physics experiments. In the previous section we have shown that classic free-standing cone targets and buried cone targets yield very different results in terms of proton beam dynamics and exiting beam divergence. The first produces proton beams with a ring-like spatial profile and divergence between 10 and 30 degrees, while the second produces highly collimated proton beams with divergence between 6 and 11 degrees for the majority of the spectrum. In this section we discuss in detail the physics that leads to these two apparently antithetic results, and we show that both behaviors descend directly from the complex interaction between the self-generated magnetic field inside the cone and the TNSA plasma, which is summarized in the two-dimensional particle-in-cell (PIC) simulation results shown in Figs. 4 and 5.

Bi-dimensional PIC simulations of proton beam generation and dynamics in free-standing cone targets. (A) Map of the Bz component of the magnetic field for three simulation times: 1.2, 2.2 and 3.6 ps. At 1.2 ps, the surface-current-generated magnetic field is initially distributed in the inner volume of the cone target. At later times it is compressed against the cone walls and tip by the expanding TNSA plasma, with magnetic field amplitude exceeding 10 kT and sharp gradients. (B) Map of the Ey component of the electric field for simulation times 1.2, 2.2 and 3.6 ps. At early times the Ey component is produced mainly by fast electrons propagating along the cone walls. At intermediate times (not represented here) the electric field fades as the fast electron flow reduces.
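The quoted conversion efficiency follows directly from the proton number and beam energy; a quick check is sketched below (the average current density additionally requires the bunch duration and source area, which are not given here, so only the charge, efficiency and mean proton energy are computed).

```python
e = 1.602e-19  # elementary charge, C

n_protons = 6.47e12      # total accelerated protons (from the RCF analysis)
beam_energy_j = 5.3      # total proton beam energy
laser_energy_j = 600.0   # LFEX energy on target

charge_c = n_protons * e                          # total accelerated charge
efficiency = beam_energy_j / laser_energy_j       # laser-to-proton conversion
mean_energy_mev = beam_energy_j / charge_c / 1e6  # J/C = eV per elementary charge

print(f"charge: {charge_c*1e6:.2f} uC, "
      f"efficiency: {efficiency*100:.2f}%, "
      f"<E>: {mean_energy_mev:.1f} MeV")  # ~0.9%, i.e. "about 1%"
```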
At 2.2 ps, when the TNSA plasma compresses the magnetic field inside the cone, a sudden strengthening of the electric field occurs due to the charge density gradient induced by the hot electron confinement by the magnetic field. Following the magnetic field gradients, the Ey component presents a de-collimating configuration that helps, together with the B-field, to deflect the proton beam exiting the cone. (C) Hot electron density map at 1.2, 2.2 and 3.6 ps simulation times. Only electrons coming from the hemi-shell are visualized in these maps. Initially the electron beam propagates and refluxes within the hemi-shell and the cone structure, accelerating the proton beam via TNSA and generating focusing electric fields along the cone walls. At 2.2 ps, as the TNSA plasma compresses the B-field, we can clearly observe the enhanced electron density in the area corresponding to the B-field gradients and the reduced electron density inside the B-field region. This is at the origin of the charge density gradient leading to the enhanced electric field in (B). This configuration is still clearly visible at 3.6 ps simulation time, with electrons accumulating at the edges of the magnetic field both at the cone walls and at the tip. (D) Hemi-shell proton density map at 1.2, 2.2 and 3.6 ps simulation times. At 1.2 ps we observe the protons being accelerated at the front and rear sides of the hemi-shell. At later times, as the proton beam propagates inside the cone, we can clearly appreciate the focusing effect of the Ey component at 2.2 ps. At 3.6 ps, the proton beam is deflected at the cone tip by the combined action of the Ey and Bz components, ultimately resulting in a ring-like spatial distribution with large divergence angle, as observed in the experimental results.

Bi-dimensional PIC simulations of proton beam generation and dynamics in tip-less buried cone targets. (A) Map of the Bz component of the magnetic field for simulation times of 2.8, 3.2 and 3.6 ps.
At 2.8 ps we can observe the magnetic field compressed against the cone walls as in the free-standing cone case, with part of it flowing through the tip and along the outer rear surface of the target as the TNSA beam exits the cone. At 3.2 ps, as higher-density TNSA plasma crosses the cone tip, part of the magnetic field is carried with the beam and forms a channel around it, which is maintained throughout the rest of the simulation. This B-field configuration prevents the hot electrons from radially expanding and defocusing the proton beam. (B) Map of the Ey component of the electric field for simulation times of 2.8, 3.2 and 3.6 ps. As the highest energy protons start exiting the tip at 2.8 ps, a strong de-focusing electric field surrounds the beam waist as a result of the hot electron pressure. At the later times of 3.2 and 3.6 ps we observe the disappearance of this de-focusing field and the appearance of a focusing electric field structure at the B-field gradient, preventing lateral expansion and allowing a highly collimated beam to ensue. (C) Hot electron density map at 2.8, 3.2 and 3.6 ps. Despite the thick bulk material surrounding the cone, fast electrons are mostly confined in the cone cavity by the compressed magnetic field. At 2.8 ps we can see the hot electrons exiting the tip in a narrowly focused beam, following the proton density distribution. These electrons are responsible for the de-focusing electric field observed in (B). At later simulation times we observe the electrons being radially confined by the magnetic field structure, giving rise to the collimating electric field discussed in (B). (D) Proton beam density map at 2.8, 3.2 and 3.6 ps. At 2.8 ps we observe a tightly focused proton beam exiting the tip of the cone. These higher energy protons are subsequently defocused by the hot electron pressure-generated electric field discussed above.
At later simulation times, as the magnetic and electric field focusing structure develops, we observe that the majority of the proton beam is emitted with very low divergence, in full agreement with the experimental results. Since this is a very complex and dynamic process, we provide movies of the free-standing cone simulations as Supplementary material. The reader may want to view the movies entitled "Supplementary_Bz_movie", "Supplementary_Ex_movie", "Supplementary_Ey_movie", "Supplementary_Eden_movie" and "Supplementary_Pden_movie", which refer respectively to the evolution of the z-component of the B-field Bz, the x-component of the electric field Ex, the y-component of the electric field Ey, the electron density map ne and the proton density map np. When a solid target is irradiated by a high intensity laser pulse, about half of the absorbed laser energy goes into a bright relativistic electron beam that propagates inside the target and along its surface. In order to propagate, the relativistic electron current needs to be counterbalanced by a so-called return current composed of electrons from the target material. When the target in question is a cone-attached hemi, a surface current of electrons propagates from the cone walls to the hemi, replenishing the electron charge that left the hemi during the laser-plasma interaction. This surface current manifests itself via the generation of macroscopic magnetic fields, both inside the target and along the target surface, filling the inner volume of the cone with a relatively low amplitude (0.5–1 kT) magnetic field (Fig. 4A, 1.2 ps). At the same time, the fast electron flow from the hemi to the cone walls produces charge-separation electric fields (Fig. 4B, 1.2 ps) that are considered responsible for the enhanced proton beam focusing in cone targets, but as we will see in the following paragraphs, this only occurs in the initial stage of ion acceleration.
As the energetic TNSA plasma expands inside the cone, the magnetic field is compressed against the cone walls and the cone tip, resulting in sharp gradients with magnetic field amplitudes largely exceeding 10 kT (Fig. 4A, 2.2 ps). In these conditions, the hot electrons present in the TNSA plasma (although we talk about proton/ion beams, TNSA plasmas are on average charge neutral) cannot penetrate the B-field, as their average Larmor radius in > 10 kT magnetic fields is sub-µm (Fig. 4C, 2.2 ps). This produces a charge density gradient, resulting in the enhancement of the electric field at the B-field/TNSA plasma interface, which is ultimately responsible for focusing the proton beam down to the cone tip, given that the original charge-separation electric field has been drastically reduced by the plasma filling the cavity. By comparing the magnetic field (Bz), electric field (Ey), electron density (ne) and proton density (np) maps reported in the simulation results of Fig. 4A–D at a simulation time of 2.2 ps, it appears clear that the TNSA plasma electrons are prevented from penetrating the magnetic field, which results in higher electron density at the Bz gradient and lower electron density in the region of high magnetic field. This electron density gradient enhances the Ey component of the electric field, which is maintained despite the plasma filling of the cone cavity. The proton beam reacts to the electric field and is further guided towards the tip of the cone. At the later simulation time of 3.6 ps, the magnetic field is fully compressed at the cone walls and tip. The Ey component of the electric field follows the Bz gradient, with two distinct effects on the proton beam. On one hand, the beam is directed and focused towards the cone tip.
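The sub-µm Larmor radius claim can be checked directly. The sketch below assumes a representative hot electron kinetic energy of ~1 MeV (a typical value for these intensities, not stated in the text) and takes all momentum perpendicular to B, so it is an upper bound.

```python
import math

m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # elementary charge, C

def larmor_radius_m(kinetic_energy_mev, b_field_t):
    """Relativistic Larmor radius r = p / (e B) for an electron,
    assuming all momentum is perpendicular to B (upper bound)."""
    e_rest_mev = m_e * c**2 / e / 1e6                  # ~0.511 MeV
    e_total = kinetic_energy_mev + e_rest_mev
    p_mev_c = math.sqrt(e_total**2 - e_rest_mev**2)    # momentum in MeV/c
    p_si = p_mev_c * 1e6 * e / c                       # convert to kg m/s
    return p_si / (e * b_field_t)

# 1 MeV electron in a 10 kT field
r = larmor_radius_m(1.0, 1e4)
print(f"{r*1e6:.2f} um")   # ~0.47 um: sub-micron, as stated in the text
```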
On the other hand, the electric and magnetic field configuration at the cone tip is de-collimating, and their combined action on the protons crossing the cone tip deflects the proton trajectories into a ring-like shape with large divergence angle, as observed in the experimental data. The electric field generated by the interaction of the TNSA plasma with the surface-current-generated B-field not only influences the proton beam in terms of focusing and subsequent deflection at the cone tip, but also slows the protons down as they exit the tip. This is clearly visible (Fig. 1 in the Supplementary Material) in the negative Ex component, which increases when the TNSA plasma reaches the cone tip. This also explains the lower maximum proton energy observed for the free-standing cone target compared to the tip-less buried cone case. The physics of tip-less buried cone targets is in many ways similar to that of the free-standing cone, being determined by the interaction and interplay between the magnetic field and the TNSA plasma. From the simulation results in Fig. 5, the extension and maximum amplitude of the B-field are lower compared to the free-standing cone, as part of it flows freely outside the tip and is not confined in the cone. Also, for the tip-less buried cone the fast electron current along the cone walls is reduced, as a large fraction of the electrons flows in the bulk plasma surrounding the cone. This also contributes to reducing the amplitude of the surface-current-generated magnetic field. Nevertheless, we clearly observe the stages of TNSA plasma expansion in the cone and the consequent B-field compression at the cone walls with the enhancement of the Ey component, providing efficient focusing up to the cone tip. However, the dynamics radically changes at the cone tip region, where the absence of a tip allows the protons to flow freely out of the cone, without B-field accumulation and compression at the tip.
In order to better explain the physics at the cone tip, we display the simulation results in Fig. 5 at the later times of 2.8, 3.2 and 3.6 picoseconds, where the relevant dynamics occurs. At first, the highest energy tail of the proton beam exits the tip, maintaining the trajectory it had inside the cone, resulting in a narrowly focused beam (Fig. 5D at 2.8 ps). However, the hot electron pressure gives rise to a de-focusing electric field (Fig. 5B–D at 2.8 ps), which leads to the proton beam expanding as it propagates further into vacuum. This result agrees with the experimental data showing broader proton emission at very high energies, from 13.7 to 17.3 MeV, as in more classic TNSA data, and it is also in agreement with the above-referenced work by Bartal et al., where the proton trajectories exiting the cone are affected by the hot electron pressure, acting as a de-focusing agent that drives the proton beam's lateral expansion as it propagates in vacuum. At later times, however, the dynamics significantly changes as the TNSA plasma carries part of the magnetic field outside the cone. As soon as the B-field follows the TNSA beam outside the cone tip, the de-focusing electric field at the proton beam waist disappears and is replaced by a focusing electric field structure matching the magnetic field distribution outside the cone (Fig. 5A,B at 3.2 and 3.6 ps). The electric field originates from the hot electron confinement provided by the magnetic field structure outside the cone (Fig. 5C at 3.2 and 3.6 ps). This mechanism quenches the effect of the hot electron pressure and collimates a large fraction of the proton beam as it propagates into vacuum, giving rise to the high-flux component observed in the experimental data. The proton beam dynamics at the cone tip region for the two types of targets is more easily understood by looking at the proton energy flux maps shown in Fig. 6.
Map of the proton energy flux for free-standing and tip-less buried cone targets. (A) Proton energy flux superposed on the Bz map at 3.6 picoseconds in the simulation. It appears clear that the deflection of the proton beam occurs prior to reaching the tip, due to the combined action of the electric and magnetic fields. (B) Proton energy flux superposed on the Bz map at 3.6 picoseconds in the simulation. It is evident that the majority of the protons exiting the cone tip are collimated by the azimuthal magnetic field structure and the consequent focusing electric field. For the free-standing cone target we can clearly observe the proton beam being guided to the cone tip, where it is then split by the magnetic and electric fields into two diverging directions (Fig. 6A), which in 3 dimensions would correspond to a ring-like emission. It is important to note that the splitting of the proton beam occurs inside the cone, coinciding with the magnetic and electric fields, and is not a post-emission effect related to the cone geometry itself. For the tip-less buried cone target (Fig. 6B), we observe the proton beam being focused at the tip and then further collimated outside the cone by the azimuthal magnetic field and the radial electric field distribution, preventing the beam from expanding due to the hot electron pressure, and resulting in a high-energy-flux beam. In summary, the mechanisms of proton beam focusing and emission from cone targets are determined by the self-generated magnetic field and its interaction with the expanding TNSA plasma. For the focusing stage, the B-field helps maintain a focusing electric field at the cone walls, even when the entire cone cavity is filled with plasma, by preventing the electrons from free-flowing to the cone walls and thus inducing hot electron density gradients that preserve and enhance the electric field.
This is very different from the explanations provided in previous works, where the original TNSA-type electric field is considered solely responsible for the proton beam focusing. For the emission stage, the magnetic field and the associated electric field determine the spatial profile and divergence of the proton beam either by accumulating at the cone tip and deflecting the incoming protons, as in the free-standing cone target, or by flowing outside the cone, preventing the hot electron pressure from exerting a radial pull on the proton beam and providing instead a collimating structure that allows the proton beam to propagate with minimum divergence. In the work by Bartal et al., the cone is of the tip-less buried type with a very large tip aperture (127 µm), a short distance between the hemispherical shell and the tip (150 µm) and a larger aperture angle (60 degrees). This cone geometry allows a large fraction of the proton beam to propagate unperturbed in the forward direction (especially the high-energy, low-divergence protons), while only the higher-divergence protons would be focused by the fields at the cone walls. In addition, the experiment was performed on a laser of much smaller scale compared to LFEX (about 10% of the LFEX energy on target) with much reduced hot electron current density, lower amplitude self-generated magnetic field and much lower energy carried by the TNSA beam. All these factors lead to a much reduced energy density inside the cone; consequently, the dynamics described in our publication does not manifest in the work of Bartal and co-authors. On the other hand, the effect of hot electron pressure-driven de-focusing of the proton beam, first described in Bartal's work, is confirmed in our measurements and simulations as well (see Figs. 3D and 5B at 1.2 ps), with the highest energy part of the proton beam experiencing the radial pull exerted by the hot electron pressure, resulting in a broader angular distribution at very high energies.
The results presented in our work have implications that go beyond the generation of high energy density states with ion and proton beams. Control and collimation of laser-generated ion beams is being actively pursued by several research teams around the world. Tip-less buried cones represent a rather simple way to obtain collimated beams without requiring complex experimental setups. For smaller-scale laser facilities, a key aspect would be to scale down the cone target size while maintaining the aspect ratio, so as to guarantee sufficient energy density inside the cone for the magnetic field-driven collimation to occur. Moreover, this approach provides a way to generate super-strong magnetic fields with amplitudes exceeding 10 kT, by taking advantage of the TNSA plasma pressure and treating it as a piston to compress the magnetic field, which is naturally generated as a result of the fast electron return current. This allows for relatively simple experimental setups with targets characterized by a partially enclosed volume, to allow for magnetic field compression, and some side windows/apertures to allow diagnostics to peek in and observe the physics of plasmas in super-strong magnetic fields, recreating conditions close to those in the atmospheres of highly magnetized white dwarfs.

Experimental setup

The experiment was conducted on the LFEX laser at the Institute of Laser Engineering, Osaka University. The LFEX laser is composed of four beamlets and delivers up to 1 kJ of laser energy on target in 1.5 picoseconds, over a spot diameter of approximately 60 µm, resulting in an average intensity on target of 1 × 10¹⁹ W/cm². In this experiment the energy on target was limited to 600 J due to a limitation of the LFEX amplifier output. The cone targets are made of 10 µm thick gold, with an aperture angle of 45 degrees and a tip size of 50 µm.
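The quoted on-target intensity can be sanity-checked from the pulse energy, duration and spot diameter given above (an average over the spot, ignoring the encircled-energy fraction, so it is only an order-of-magnitude estimate):

```python
import math

def laser_intensity_w_cm2(energy_j, duration_s, spot_diameter_m):
    """Average intensity = pulse power / focal-spot area."""
    power_w = energy_j / duration_s
    radius_cm = spot_diameter_m * 100 / 2        # diameter (m) -> radius (cm)
    area_cm2 = math.pi * radius_cm ** 2
    return power_w / area_cm2

# This experiment's shot: 600 J in 1.5 ps over a ~60 um diameter spot
intensity = laser_intensity_w_cm2(600, 1.5e-12, 60e-6)
print(f"{intensity:.1e} W/cm^2")   # ~1.4e19, consistent with the nominal 1e19
```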
For the tip-less buried cone, a thick epoxy resin wall is added, giving this target the aspect of a cylinder with 800 µm base diameter and 300 µm height. The hemispherical shell, made of CH plastic, has a radius of curvature of 350 µm and a cross-sectional diameter of 300 µm. CH plastic was chosen because of its hydrogen-rich bulk material, as the LFEX laser is capable of fully depleting the contaminant layer of hydrocarbons that would constitute the proton source in metallic targets. The diagnostic used was an RCF stack composed of 15 HD-V2 films followed by 20 EBT3 films. The stack was positioned at 2.5 cm distance from the target and was shielded with a 105 µm Al foil to protect the films from target debris. The LFEX incidence angle on target was either normal incidence for the free-standing cone targets or 45 degrees for the tip-less buried cone targets, as shown in Fig. 1A and B.

Data and statistical analysis

For data analysis and statistics, we refer to our recent publication in Review of Scientific Instruments25, describing the dosimetry calibration of Gafchromic HD-V2, MD-V3 and EBT3 films, which we briefly summarize in this section. Dosimetry calibration was performed by irradiating the RCF films with a 130 TBq Co-60 γ-ray source with different exposure times, corresponding to radiation doses ranging from 1 Gy to 100 kGy. The data were scanned using a response-calibrated Epson GT-X980 flatbed film scanner, which allows us to calculate the optical density associated with the dose in each RCF and to obtain the optical density-to-dose calibration curves in the red, green and blue channels. For RCFs, the highest optical density is recorded in the red channel; however, for high-dose exposures the red channel is not the best option, given its lower saturation threshold together with the solarization effect that occurs at extremely high doses and that could lead to underestimation of the dose in the film.
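As an illustration of the optical density-to-dose step, a minimal sketch is shown below. The calibration points are hypothetical placeholders (not the published HD-V2/EBT3 curves), and the piecewise-linear lookup stands in for whatever fit the actual calibration uses.

```python
import math

def optical_density(pixel_value, white_value=65535):
    """Net optical density from a transmission scan: OD = -log10(T)."""
    return -math.log10(pixel_value / white_value)

def dose_from_od(od, calibration):
    """Piecewise-linear interpolation of an OD -> dose (Gy) calibration curve.
    `calibration` is a sorted list of (od, dose) points."""
    od0, d0 = calibration[0]
    for od1, d1 in calibration[1:]:
        if od <= od1:
            frac = (od - od0) / (od1 - od0)
            return d0 + frac * (d1 - d0)
        od0, d0 = od1, d1
    return d1  # beyond the last calibration point: clamp

# illustrative (hypothetical) green-channel calibration points
cal = [(0.0, 0.0), (0.2, 50.0), (0.6, 300.0), (1.2, 2000.0)]
print(f"{dose_from_od(optical_density(26000), cal):.0f} Gy")
```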
Therefore, the experimental data presented in this work are analyzed in the green channel, with an error associated with the calculated dose of 7.1% for HD-V2 and 5.1% for EBT3. To this error must be added the one associated with the batch-to-batch variation in RCF response as declared by Ashland-Gafchromic, corresponding to 20%. Once the dose per RCF is obtained, data post-processing is performed via a three-dimensional spectral unfolding procedure, entirely based on a method developed by Schollmeier and co-authors28. The post-processing code accounts for low energy transfer as well as straggling during transport in the RCF stack, providing as a result the proton beam energy spectrum and angular distribution.

Particle-in-cell simulations

Particle-in-cell simulations have been performed with the Epoch2d code29 using two different simulation setups according to the different cone geometries and laser-plasma interaction conditions. The simulation box was 230 µm in the longitudinal dimension and 170 µm in the transverse dimension, with cell size λ/30 in both dimensions. The cone walls have been modeled as Au18+ with a density of 60 nc and the hemi as pure hydrogen with a density of 40 nc and a sharp, 2 µm scale-length pre-formed plasma. A thin, 0.25 µm contaminant layer of hydrogen is also set on all the inner and outer cone surfaces. The choice of pure hydrogen instead of CH plasma as the hemi-shell material is motivated by the fact that in relativistic laser-plasma interaction the laser energy absorption occurs through collisionless mechanisms; therefore no significant difference is expected between the two materials in terms of proton generation. Moreover, our experimental data are only related to protons, as the stopping power for heavier ions is much higher compared to hydrogen and they are entirely stopped within the aluminium filter in front of the RCF pack. All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.
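For a rough sense of the grid size implied by the λ/30 cell resolution of the PIC setup described above (the ~1.053 µm Nd:glass LFEX wavelength is an assumption; the text does not state it):

```python
# Assumed fundamental wavelength of the Nd:glass LFEX laser (not given in the text)
wavelength_um = 1.053
cell_um = wavelength_um / 30          # ~0.035 um cell size in both dimensions

box_x_um, box_y_um = 230.0, 170.0     # simulation box from the text
nx = round(box_x_um / cell_um)        # longitudinal cells
ny = round(box_y_um / cell_um)        # transverse cells
print(nx, ny)                         # roughly 6550 x 4840 cells
```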
Data are stored at ILE and can be made available upon reasonable request.

References

Drake, R. P. High-Energy-Density Physics (Springer, 2010). Millot, M. et al. Experimental evidence for superionic water ice using shock compression. Nat. Phys. 14, 297–302 (2018). Cheng, B., Mazzola, G., Pickard, C. J. & Ceriotti, M. Evidence for supercritical behaviour of high-pressure liquid hydrogen. Nature 585, 217–220 (2020). Celliers, P. M. et al. Insulator-to-conducting transition in dense fluid helium. Phys. Rev. Lett. 104, 1–4 (2010). Hayes, A. C. et al. Plasma stopping-power measurements reveal transition from non-degenerate to degenerate plasmas. Nat. Phys. 16, 432–437 (2020). Wickramasinghe, D. T. & Ferrario, L. The origin of the magnetic fields in white dwarfs. Mon. Not. R. Astron. Soc. 356, 1576–1582 (2005). Cardall, C. Y., Prakash, M. & Lattimer, J. M. Effects of strong magnetic fields on neutron star structure. Astrophys. J. 554, 322–339 (2001). Murdin, B. N. et al. Si:P as a laboratory analogue for hydrogen on high magnetic field white dwarf stars. Nat. Commun. 4, 1–7 (2013). Kraus, D. et al. Formation of diamonds in laser-compressed hydrocarbons at planetary interior conditions. Nat. Astron. 1, 606–611 (2017). Snavely, R. A. et al. Intense high-energy proton beams from petawatt-laser irradiation of solids. Phys. Rev. Lett. 85, 2945–2948 (2000). Mora, P. Plasma expansion into vacuum. Phys. Rev. Lett. 90, 185002 (2003). Passoni, M., Bertagna, L. & Zani, A. Target normal sheath acceleration: Theory, comparison with experiments and future perspectives. New J. Phys. 12, 045012 (2010). Cowan, T. E. et al. Ultralow emittance, multi-MeV proton beams from a laser virtual-cathode plasma accelerator. Phys. Rev. Lett. 92, 20 (2004). Toncian, T. et al. Ultrafast laser-driven microlens to focus and energy-select mega-electron volt protons. Science 312, 410 (2006). Foord, M. E., Reisman, D. B. & Springer, P. T. Determining the equation-of-state isentrope in an isochoric heated plasma.
Rev. Sci. Instrum. 75, 2586–2589 (2004). Patel, P. K. et al. Isochoric heating of solid-density matter with an ultrafast proton beam. Phys. Rev. Lett. 91, 125004 (2003). Kodama, R. et al. Fast heating of ultrahigh-density plasma as a step towards laser fusion ignition. Nature 412, 798–802 (2001). Fernández, J. C. et al. Fast ignition with laser-driven proton and ion beams. Nucl. Fusion 54, 054006 (2014). Roth, M. et al. Fast ignition by intense laser-accelerated proton beams. Phys. Rev. Lett. 86, 436–439 (2001). Qiao, B. et al. Dynamics of high-energy proton beam acceleration and focusing from hemisphere-cone targets by high-intensity lasers. Phys. Rev. E 87, 013108 (2013). Honrubia, J. J., Morace, A. & Murakami, M. On intense proton beam generation and transport in hollow cones. Matter Radiat. Extrem. 2, 28 (2017). Zhou, D. B. et al. Control of target-normal-sheath accelerated protons from a guiding cone. Phys. Plasmas 22, 063103 (2015). Bartal, T. et al. Focusing of short-pulse high-intensity laser-accelerated proton beams. Nat. Phys. 8, 139–142 (2011). Bin, J. H. et al. Absolute calibration of GafChromic film for very high flux laser driven ion beams. Rev. Sci. Instrum. 90, 1–6 (2019). Abe, Y. et al. Dosimetric calibration of GafChromic HD-V2, MD-V3, and EBT3 films for dose ranges up to 100 kGy. Rev. Sci. Instrum. 92, 3–8 (2021). Fuchs, J. et al. Spatial uniformity of laser-accelerated ultra-high current MeV electron propagation in metals and insulators. Phys. Rev. Lett. 91, 255002 (2003). Schollmeier, M. et al. Laser beam-profile impression and target thickness impact on laser-accelerated protons. Phys. Plasmas 15, 053101 (2008). Schollmeier, M., Geissel, M., Sefkow, A. B. & Flippo, K. A. Improved spectral data unfolding for radiochromic film imaging spectroscopy of laser-accelerated proton beams. Rev. Sci. Instrum. 85, 043305. https://doi.org/10.1063/1.4870895 (2014). Arber, T. D. et al. Contemporary particle-in-cell approach to laser-plasma modelling.
Plasma Phys. Control. Fusion 57, 113001 (2015).

The authors thank the technical support staff of ILE and the Cyber Media Center at Osaka University for assistance with the laser operation, target fabrication, plasma diagnostics, and computer simulations. This work was supported by the Collaboration Research Program between the National Institute for Fusion Science and the Institute of Laser Engineering at Osaka University under contract NIFS21KUGK140 and by the Japanese Ministry of Education, Science, Sports, and Culture through Grants-in-Aid, KAKENHI Grant No. 70724326. This work was also partially supported by the EUROfusion research Grant ENR-IFE19.CCFE-01-T001-D001 and the grant RTI2018-098801-B-100 of the Spanish Ministry of Science and Innovation.

Institute of Laser Engineering, Osaka University, Suita, Japan: A. Morace, Y. Abe, N. Iwata, Y. Arikawa, Y. Nakata, A. Yogo, Y. Sentoku, K. Mima, T. Norimatsu, K. Tsubakimoto, J. Kawanaka, S. Tokita, N. Miyanaga, H. Shiraga, Y. Sakawa, M. Nakai, H. Azechi, S. Fujioka & R. Kodama. ETSI Aeronautica y del Espacio, Universidad Politecnica de Madrid, Madrid, Spain: J. J. Honrubia. Hiroshima University, Hiroshima, Japan: T. Johzaki. Lawrence Livermore National Laboratory, Livermore, USA: T. Ma & D. Mariscal. National Institute of Fusion Science, Toki, Japan: H. Sakagami.

A.M. designed and led the experiment, conducted the data analysis and post-processing, performed the totality of the simulations, including the simulation analysis routines, and led the interpretation of the experimental results. He also contributed to the RCF-scanner calibration. Y.A. led the RCF and scanner calibration that allowed for the data analysis for this work. J.J.H.
provided valuable insights and early discussions that led to the conception of this experiment. Y.A. and A.Y. provided support during the experiment. Y.N., T.J., N.I., Y.S., K.M., T.M., D.M., H.S., Y.S., M.N., H.A., S.F. and R.K. contributed to the discussion of the results. T.N. prepared the targets used in the experiment; K.T., J.K., S.T. and N.M. provided LFEX operation support. The manuscript was written by A.M.; Y.A. prepared Fig. 1 of the manuscript. Correspondence to A. Morace.

Morace, A., Abe, Y., Honrubia, J.J. et al. Super-strong magnetic field-dominated ion beam dynamics in focusing plasma devices. Sci Rep 12, 6876 (2022). https://doi.org/10.1038/s41598-022-10829-1
Polarization contrast optical diffraction tomography

Jos van Rooij and Jeroen Kalkman*

Department of Imaging Physics, Lorentzweg 1, 2628 CJ, Delft, The Netherlands

*Corresponding author: [email protected]

Jos van Rooij and Jeroen Kalkman, "Polarization contrast optical diffraction tomography," Biomed. Opt. Express 11, 2109-2121 (2020). https://doi.org/10.1364/BOE.381992

Original Manuscript: November 20, 2019; Revised Manuscript: March 3, 2020; Manuscript Accepted: March 4, 2020

We demonstrate large scale polarization contrast optical diffraction tomography (ODT). In cross-polarized sample arm detection configuration we determine, from the amplitude of the optical wavefield, a relative measure of the birefringence projection. In parallel-polarized sample arm detection configuration we image the conventional phase projection. For off-axis sample placement we observe for polarization contrast ODT, similar as for phase contrast ODT, a strongly reduced noise contribution. In the limit of small birefringence phase shift δ we demonstrate tomographic reconstruction of polarization contrast images into a full 3D image of an optically cleared zebrafish.
The polarization contrast ODT reconstruction shows muscular zebrafish tissue, which cannot be visualized in conventional phase contrast ODT. Polarization contrast ODT images of the zebrafish show a much higher signal to noise ratio (SNR) than the corresponding phase contrast images, SNR=73 and SNR=15, respectively. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement 1. Introduction 3D imaging in the life sciences is of great importance for studying fundamental biology and performing (pre-) clinical studies. For these studies, label free optical imaging methods play an important role. There are various label-free contrast mechanisms such as scattering, absorption, or refractive index (RI). However, in some cases these contrast mechanisms are not sufficiently sensitive to observe the relevant information, hence, there is a need for imaging with alternative types of intrinsic contrast. Optical diffraction tomography (ODT) has been shown to be an effective tool for 3D imaging of RI contrast on the scale of cells [1] or small organisms [2]. More recently, phase contrast ODT was applied on a millimeter scale, where different structural features of a zebrafish larva and a cryo-injured heart could be distinguished in 3D using RI contrast [3]. However, some types of tissue are not visible in conventional phase contrast ODT. An alternative form of contrast is given by the polarization change of the optical wavefield caused by tissue birefringence. Birefringent samples are not described by a single scalar RI value per voxel that contributes to the optical path length; instead, the RI value experienced by the wavefield depends on its polarization state. Polarization contrast has been widely applied in microscopy [4,5], digital holography [6], optical coherence tomography [7], and optical projection tomography [8]. Birefringence provides a high-contrast label-free mechanism for imaging fibrous structures such as muscle (collagen) or brain (myelin) tissue.
Muscle tissue has been imaged in 3D using polarization sensitive optical projection tomography (OPT), as an extension of brightfield OPT using a white light source [8]. However, with OPT phase information is lost and refractive index contrast cannot be determined. In this work we show that in addition to phase contrast also polarization contrast is compatible with large scale ODT and offers a significantly higher signal to noise ratio (SNR) compared to conventional phase contrast ODT. We determine under what conditions a birefringent sample can be properly reconstructed using conventional filtered backprojection (FBP). Furthermore, we show that off-axis sample placement, which has been used in conventional ODT [9] for noise reduction, also offers significant noise reduction for polarization ODT, and that the same steps of numerical refocusing to correct for defocus can be applied. Finally, we demonstrate 3D multi-contrast imaging of a zebrafish larva using two orthogonal components of the transmitted wavefield, from which a conventional phase contrast and polarization contrast ODT image are reconstructed. 2. Polarization contrast imaging In conventional ODT, refractive index differences in the sample cause a change in optical path length of the transmitted light wave. Assuming an isotropic medium, each voxel in the sample gives a fixed contribution to the optical path length of a ray traveling through it regardless of its polarization. However, when a sample is birefringent this contribution generally depends on the orientation of the polarization of the wave with respect to the medium. Here we use Jones calculus to calculate light interactions. We assume that the birefringent tissue locally can be described as uniaxial, where the optical axis corresponds to the predominant fiber direction.
The birefringent tissue is modeled as a wave retarder that introduces a relative phase shift $\delta$ along the fast axis with respect to the slow axis, and introduces a common phase shift $\epsilon$ (i.e. the average phase of the two components) for both polarization components. The relative phase shift $\delta$ between the two components is then defined as [10] (1)$$\delta=k \Delta \cos^2(\alpha(\beta)) \, , \textrm{with} \, \Delta = \int [n_{e}(s)-n_{o}(s)]~\textrm{d}s \, ,$$ where $\alpha$ is the fiber inclination angle relative to the $x$-$y$ plane of the polarizers as indicated in Fig. 1(a-b). The wavenumber $k$ is given by $k=\frac {2\pi }{\lambda }$ and $\Delta$ is the optical path difference integrated over the sample. As indicated in Fig. 1(a-b), the angle $\varphi$ indicates the angle of rotation of the optic axis of the uniaxial sample with respect to the $x$-axis projected onto the $x-y$ plane. The rotation angle of the polarizers is given by $\rho$, which is the angle of the cross/parallel polarizers to the $x$-axis. The birefringent object is assumed to rotate around the $x$-axis for tomographic measurement with angle $\beta$, which is shown in Fig. 1(c). We define the tilt angle of the object with respect to the $x$ axis as $\gamma$ as shown in Fig. 1(a-b). During tomographic measurements, the tilt angle $\gamma$ stays constant. The tomographic rotation causes $\alpha$ and $\varphi$ to change for each projection according to (2)$$\alpha=\gamma \sin{\beta} \hspace{7mm} \textrm{and} \hspace{7mm} \varphi=\gamma \cos{\beta}$$ Fig. 1. Schematic of the ODT sample arm geometry. (a) The orientation of the uniaxial sample is defined by the inclination angle $\alpha$ and the direction angle $\varphi $. The sample is rotated around the $x$-axis for tomographic measurement.
The input polarization state is along the $x$-axis, after which the parallel polarized $x$-component (a) or the cross-polarized $y$-component (b) of the complex wave is measured for each projection angle. The tomographic angle $\beta$ is defined with respect to the fiber orientation in the $y$-$z$ plane. The angle of the polarizers $\rho$ is defined with respect to the $x$ axis. We assume an incoming beam polarized along the $x$-axis that travels through the sample in the $z$-direction. Both the $x$ and $y$ components are extracted by placing an analyzer in the sample arm that can be rotated to align with the parallel $x$ or cross-polarized $y$-axis. The complex wavefield of an incoming wave polarized along the $x$-axis after transmission through the birefringent medium is (3)$$U=\begin{pmatrix} \textrm{e}^{-\frac{1}{2} i (\delta -2 \epsilon )} \left(\sin ^2(\rho -\varphi )+\textrm{e}^{i \delta } \cos ^2(\rho -\varphi )\right)\\ -i \textrm{e}^{i \epsilon} \sin \left(\frac{\delta }{2}\right) \sin (2\rho -2\varphi ) \end{pmatrix} \, ,$$ with $\epsilon$ defined as the average phase (4)$$\epsilon = \frac{2\pi}{\lambda}\int \frac{n_{e}(s)+n_{o}(s)}{2}~\textrm{d}s \, .$$ 2.1 Parallel-polarization output The first component in Eq. (3) is the $x$-component of the transmitted field with a polarization parallel to that of the input field. It can be extracted by placing a polarizer aligned along the $x$-axis after the sample. The $x$-component in Eq. (3) contains phase contributions of both the conventional phase contrast $\epsilon$ and the birefringence contrast $\delta$.
The phase of this component is defined as the inverse tangent of the imaginary part divided by the real part (5)$$\phi_{U_{x}}=\tan ^{{-}1}\left(\frac{\cot \left(\frac{\delta }{2}\right) \sin (\epsilon ) \sec (2\rho -2\varphi )+\cos (\epsilon )}{\cot \left(\frac{\delta }{2}\right) \cos (\epsilon ) \sec (2\rho -2\varphi )-\sin (\epsilon )}\right).$$ The derivative of $\phi _{U_x}$ with respect to $\epsilon$ is equal to unity and thus the measured phase of the $x$-component is a linear function of the phase contrast projection $\epsilon$. There is however also a contribution to the phase of the birefringence $\delta$, which is in general non-linear. This can be seen by taking the derivative of Eq. (5) with respect to $\delta$, i.e., (6)$$\frac{\partial \phi_{U_{x}} }{\partial \delta} = \frac{\csc ^2\left(\frac{\delta }{2}\right) \sec (2\rho -2\varphi )}{2 \cot ^2\left(\frac{\delta }{2}\right) \sec^2(2\rho - 2\varphi )+2} \, ,$$ where csc is the cosecant or the reciprocal of the sine function. For small values of $\delta$, Eq. (5) can be expanded (in zeroth and first order) as (7)$$\phi_{U_{x}}\approx \tan ^{{-}1}(\tan (\epsilon )) + \frac{1}{2} \delta \cos (2\rho -2\varphi ) \, .$$ For small values of $\delta$, the measured phase of the $x$-component will thus be dominated by the average phase $\epsilon$, where $\tan ^{-1}(\tan (\epsilon ))$ is the wrapped average phase. 2.2 Cross-polarization output The vertical $y$-component is the second component of the field in Eq. (3) and is perpendicular to the input polarization. The amplitude of this component is given by (8)$$|U_{y}|=\left| \sin \left(\frac{\delta }{2}\right)\right| \left| \sin (2\rho - 2\varphi )\right| \, .$$ Similar to what is done in polarimetry it can be measured using crossed polarizers. The presence of birefringence causes modulation in the amplitude of the wavefield as $\delta$ appears in the $y$-component as $\sin \left (\frac {\delta }{2}\right )$ in the amplitude. 
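The Jones-calculus result in Eq. (3) and the cross-polarized amplitude of Eq. (8) can be checked numerically with a rotated-retarder model. This is a minimal sketch with illustrative angles; the fast/slow sign convention differs from Eq. (3) by an overall phase, so the amplitudes and relative phases are the quantities to compare:

```python
import numpy as np

def rot(t):
    # 2D rotation matrix by angle t
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def transmitted(delta, eps, phi, rho):
    # Uniaxial retarder with optic axis at angle phi: relative retardance delta
    # between fast and slow axes, common phase eps (sign convention of the
    # fast/slow split is opposite to Eq. (3); amplitudes are identical).
    J = rot(phi) @ np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)]) @ rot(-phi)
    e_in = np.array([np.cos(rho), np.sin(rho)])            # input polarizer at rho
    E = np.exp(1j * eps) * (J @ e_in)
    e_cross = np.array([-np.sin(rho), np.cos(rho)])        # crossed analyzer
    return e_in @ E, e_cross @ E                           # parallel, crossed

# Illustrative (assumed) values, not taken from the experiment
delta, eps, phi, rho = 0.8, 1.3, np.deg2rad(25), np.deg2rad(70)
U_par, U_cross = transmitted(delta, eps, phi, rho)
```

The crossed-analyzer amplitude reproduces $|\sin(\delta/2)||\sin(2\rho-2\varphi)|$ of Eq. (8), and $|U_x|^2+|U_y|^2=1$ holds, as expected for a lossless retarder; for small $\delta$ the phase of the parallel component reduces to Eq. (7).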
The amplitude modulation is utilized to generate qualitative birefringence contrast projections in 2D. However, this is problematic for 3D tomographic reconstruction as tomographic reconstruction algorithms usually assume a linear relation between contrast and projection. The projection function $\delta$ is thus not measured directly and must be retrieved. Taking the inverse sine of the modulation term we obtain (9)$$\sin^{{-}1}\left ( \left | \sin \left(\frac{\delta }{2}\right )\right | \right )=\left\{\begin{array}{l} \frac{\delta}{2} - m\pi \: \: \textrm{if} \: \: 0\leq \frac{\delta}{2}< \frac{\pi}{2} \: \: \textrm{mod}\: \: \pi \\-\frac{\delta}{2}+m\pi \: \: \textrm{if} \: \: \frac{\pi}{2}\leq \frac{\delta}{2}< \pi \: \: \textrm{mod}\: \: \pi \end{array}\right. \, ,$$ with $m$ an integer. In Eq. (9) the absolute value in the inverse sine is taken since the amplitude is the square root of the intensity and is thus always positive. The inverse sine changes the sign of the original $\frac {\delta }{2}$ function for values $\frac {\pi }{2} \leq \frac {\delta }{2}<\pi$ mod $\pi$, making the inverse sine of the signal not directly suitable as a linear input projection for FBP reconstruction. Moreover, to reconstruct for arbitrarily large $\delta$, the signal needs to be unwrapped using phase unwrapping. However, from Eq. (9) it follows that in case the maximum value of $\delta$ in the projection does not exceed $\pi$, the signal can be directly retrieved by taking the inverse sine and no further processing is necessary. Furthermore, if $\delta$ is small, the amplitude of the $y$-component of Eq.
(3) can be approximated as a linear function of $\delta$, since for small values of $\delta$ it holds that (10)$$|U_{y}|\approx\frac{1}{2} \delta \left| \sin (2 \rho -2 \varphi )\right| \, .$$ To demonstrate the general approach of birefringence tomography, a polarization contrast calculation for the case of a uniaxial birefringent cylinder of 10 mm radius with a maximum projected phase shift of $\delta =18$ radians is shown in Fig. 2. The blue line indicates the original phase shift as a function of position after a plane wavefront travels through the cylinder and this is the signal that has to be retrieved. Fig. 2. Phase shift $\delta$ between the two orthogonal polarizations in the case of a uniaxial birefringent cylinder with maximum projected phase shift $\delta =18$ radians as quantified with polarization contrast imaging. The red line shows two times the inverse sine of the measured $|$sin($\delta /2$)$|$ term. The green line is obtained by flipping the inverse sine function in the appropriate domains and adding $\pi$ according to Eq. (9). The function $\delta$ can then be retrieved with standard phase unwrapping and is plotted in magenta and corresponds with the original birefringence distribution. Thus, in theory the projection function $\delta$ can be retrieved. However, in practice this may not be possible, for example, when the data is noisy or the jumps in the sinusoidal signal of the transmitted field $U_{y}$ are not properly sampled due to a large increase of $\delta$. 2.3 Polarization tomography In 3D polarization sensitive tomographic imaging, the sample is rotated and the $x$ (parallel) and $y$ (cross) components of the wave are recorded for each angle for phase and polarization contrast respectively. Due to the small contribution of the birefringence contrast in the $x$-component phase it can be used for conventional ODT.
However, it should be noted that in order to preserve the linear relationship between the projection and the $y$-component, it can be seen from Eq. (10) that not the intensity (amplitude squared) of the wavefield should be taken as the projection, but the square root of the intensity (amplitude). Moreover, in general $\delta$ itself depends on the tomographic rotation angle $\beta$ through $\alpha$ in Eq. (1) and Eq. (2). Furthermore, the angle $\varphi$ in Eq. (10) depends on $\beta$ as well through Eq. (2). Using these dependencies we find that for small $\delta$ the $y$-component of the field is (11)$$|U_{y}|(\beta)\approx\frac{1}{2} k \Delta \cos ^2(\gamma \sin (\beta )) \left| \sin (2 \rho -2 \gamma \cos (\beta ))\right| \, .$$ Thus, even though the amplitude of $U_{y}$ is linear with respect to $\Delta$, the signal is non-linear with respect to the rotation angle $\beta$. The first non-linearity occurs due to the $\cos ^2(\gamma \sin (\beta ))$ term in Eq. (11). In Appendix A we show that this term causes an angular modulation across the projections in the Radon transform, which translates to a slowly varying angular background modulation in the tomographic reconstruction, that leaves the object contrast intact. The second term $\left | \sin (2 \rho -2 \gamma \cos (\beta ))\right |$ in Eq. (11) modulates the amplitude as a function of the tomographic angle $\beta$. This can be compensated for by taking the cross-polarization angle $\rho$ such that $\left | \sin (2 \rho -2 \gamma \cos (\beta ))\right |$ is maximum. Experimentally, this implies that tomographic image acquisition should be done for a sufficient number of cross-polarizer angles $\rho$, and for each projection angle $\beta$ the maximum amplitude projection is subsequently selected [8]. Thus, despite the angular dependency of the phase shift $\delta$, a linear reconstruction algorithm can be used for polarization contrast tomography.
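The angular modulation of Eq. (11) and the maximum-amplitude selection over the polarizer angles $\rho$ can be illustrated numerically. This sketch uses illustrative values for $\gamma$ and $k\Delta$, not the experimental settings:

```python
import numpy as np

def u_y_amp(beta, rho, gamma, k_delta):
    # Small-delta amplitude of the cross-polarized field, Eq. (11):
    # |U_y| ~ (1/2) k*Delta cos^2(gamma sin beta) |sin(2 rho - 2 gamma cos beta)|
    return 0.5 * k_delta * np.cos(gamma * np.sin(beta))**2 \
               * np.abs(np.sin(2 * rho - 2 * gamma * np.cos(beta)))

beta = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)   # tomographic angles
gamma = np.deg2rad(10.0)                                  # assumed fiber tilt
k_delta = 0.1                                             # assumed k*Delta (small)
rhos = np.deg2rad([0.0, 30.0, 60.0])                      # three polarizer settings
amps = np.stack([u_y_amp(beta, rho, gamma, k_delta) for rho in rhos])
max_proj = amps.max(axis=0)   # per-angle maximum over the polarizer angles
```

With polarizer settings spaced 60° apart, at every $\beta$ at least one setting keeps $|\sin(2\rho-2\gamma\cos\beta)|$ close to unity, so the maximum projection never drops near zero, while each amplitude remains linear in $\Delta$.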
The question arises whether the phase of the crossed-polarizer component can be used to do the conventional phase reconstruction, so that capturing of $U_{x}$ is not necessary. In cross polarization, the phase $\epsilon$ of the transmitted $y$-component is only defined for paths through the birefringent sample where the field amplitude is not zero. Hence, this component cannot be used to reconstruct the conventional RI contrast $\epsilon$ across the whole sample. However, the phase of the $y$-component can be used in order to propagate the wavefield. This can be used to numerically refocus the wavefield if necessary, for example in the case of off-axis placement of the sample for noise suppression [3,9], or to extend the depth of field of the imaging system [2]. 3.1 Acquisition of projections In ODT, the scattered field is recorded from multiple angles using digital holography. The digital holography setup is shown in Fig. 3 and consists of a Mach-Zehnder interferometer operated in transmission. The light source is a HeNe laser with a wavelength of 633 nm and an output power of 3 mW. Two lenses (Thorlabs, LD2568 and LA1979) are used to expand and collimate the illuminating laser beam to a full width at half maximum (FWHM) of approximately 15 mm. Fig. 3. Experimental setup for acquiring the digital holograms. HeNe: Helium Neon laser, BE: Beam expander, BS: Beam splitter, IML: Index matching liquid, S: Sample rotated around the z-axis, MO: Microscope objective, M: Mirror, TL: Tube lens, PZT: Mirror mounted on piezo stage, C: Camera, P: Polarizer, WP: Half-wave plate. In the object arm a 10X objective lens (NA=0.3) is used in combination with a 200 mm focal length tube lens (Thorlabs) to image the sample in close proximity to the detector of a CMOS camera (Basler beA4000-62kc) with $4096 \times 3072$ pixels and a pixel pitch of 5.5 μm. A rotation mount (Thorlabs CR1) rotates the sample stepwise over 360$^{\circ }$.
One polarizer is placed in front of the sample (P1), and a second one is placed behind the sample (P2). For acquisition of the regular phase contrast projections, the optical axes of the polarizers are made parallel and an acquisition of 720 projections over 360$^{\circ }$ is performed. For the polarization contrast projections, the relative angle between both polarizers is kept constant at 90$^{\circ }$. The complete tomographic measurement is then carried out as before. The polarization contrast measurement is then repeated after simultaneous rotation of both the polarizers by 30$^{\circ }$ and 60$^{\circ }$, respectively. In the reference arm, a polarizer (P3) is placed in order to maximize the fringe contrast at the detector; this polarizer is rotated simultaneously with the polarizers in the object arm. A half-wave plate is placed behind the beam expander in order to maximize the signal at the detector. In the reference arm, a 10X Olympus microscope objective partly compensates for the object wave curvature to avoid the presence of too high spatial frequencies on the camera. The mirror in the reference arm is mounted onto a piezoelectric transducer (Thorlabs, KPZ 101) controlled by a computer for phase-shifting the digital hologram. We capture four holograms with reference arm phase shift increments of $\pi /2$ between each subsequent hologram. From a linear combination of these holograms a complex hologram is formed where the zeroth and out of focus conjugate orders are removed [11]. In this way we maximize the lateral resolution in the reconstructed image. This is specifically important for large scale ODT where magnification is low but the highest possible NA is desired. 3.2 Phase and polarization projections Autofocus correction is applied on the digital hologram in order to obtain the wavefield in the object region. The object position is determined by calculating a focus metric (grayscale variance) as a function of the reconstruction distance.
For transparent objects the gray scale variance has a minimum value when the reconstruction distance is located at the object. For polarization contrast projections, the gray scale variance has a maximum value when reconstructed in focus. For both cases separately the minimum/maximum is determined for ten samples of a full rotation acquisition (i.e. 0$^{\circ }$, 36$^{\circ }$, 72$^{\circ }$, etc.). A sinusoidal function is then fitted to the minimum/maximum as a function of the projection angle to determine the object distance as a function of projection angle. For every angle the hologram is reconstructed for both the phase and polarization contrast data, with the object in focus by propagating the field to the object plane using the angular spectrum method for diffraction calculation, which is exact and valid for small propagation distances. In case of the phase projections, the phase is then calculated by taking the argument of the reconstructed wavefield. The phase projections are unwrapped using a least squares phase unwrapping algorithm [12]. For the polarization contrast projections, the amplitude of the cross-polarized component is calculated. This amplitude then gives a direct, but scaled measure for the birefringence: scaled $n_e-n_o$. For the different (cross) polarizations, the projections are misaligned horizontally by a few pixels. This is corrected by determining the center of rotation from the maximum variance of the tomographic reconstruction as a function of the shift for each polarization contrast sinogram individually. The projections are then shifted to the correct location using the circular shift function of MATLAB. The wavefield amplitude of the projections for the three angles are stacked, and the maximum value for each camera coordinate is extracted to form a single maximum birefringence projection sinogram. 
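The four-step phase-shifting combination of Section 3.1 and the angular spectrum propagation used for refocusing can be sketched as follows. This is a minimal sketch with an assumed test field and grid; the combination $(I_0-I_2)+i(I_1-I_3)$ isolates $4\,O R^*$, removing the zeroth order and the conjugate term:

```python
import numpy as np

def complex_hologram(O, R):
    # Four-step phase shifting: I_k = |O + R exp(i k pi/2)|^2, k = 0..3.
    # The linear combination below removes the zeroth order and the conjugate
    # term, leaving 4 * O * conj(R).
    I = [np.abs(O + R * np.exp(1j * k * np.pi / 2))**2 for k in range(4)]
    return (I[0] - I[2]) + 1j * (I[1] - I[3])

def angular_spectrum(field, wavelength, dx, z):
    # Propagate a sampled complex field over distance z with the angular
    # spectrum method; evanescent components are suppressed.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Assumed numbers: smooth test field on a 64x64 grid, 633 nm, 1 um pixel pitch
n, dx, wl = 64, 1e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
O = np.exp(-(X**2 + Y**2) / (10e-6)**2) * np.exp(1j * 0.3)
H = complex_hologram(O, 1.0)                      # unit-amplitude plane reference
U = angular_spectrum(H / 4.0, wl, dx, 100e-6)     # defocused field
```

Because the transfer function is unitary on the propagating band, propagating back over $-z$ recovers the original field, which is what makes the numerical refocusing step exact for band-limited fields.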
Tomographic imaging is performed with 720 projections over 360$^{\circ }$ (steps of 0.5$^{\circ }$) with four phase steps per projection. At every projection angle and phase step, four measurements are taken (one for phase, three for polarization contrast) in total. The net acquisition time for a full 3D measurement is approximately 7 minutes with the total acquired data around 160 GB. 3.3 Tomographic image reconstruction and visualization For reconstruction of the phase contrast, assuming that RI variation in the sample is sufficiently small so that refraction does not occur, a phase projection is a scaled integral over the RI variation with respect to the background medium along the illumination direction. The average refractive index difference $\Delta n_{avg}$ is calculated from the phase by using the system magnification and the pixel pitch [9]. Subsequently, the $\Delta n_{avg}$ object is reconstructed using the FBP algorithm on a slice by slice basis. For polarization contrast, the maximum birefringence projection sinogram $\delta$ is reconstructed using the FBP algorithm as $n_e - n_o$. We used the Drishti software package [13] to visualize and merge the phase and polarization contrast reconstructions with a non-linear transfer function. 3.4 Noise suppression in polarization sensitive ODT The sample is displaced from the center of rotation by approximately 0.5 mm. Figure 4 shows the noise distribution, in standard deviation $\sigma$, in a tomographic ODT reconstruction of both the polarization contrast (a) and (b) and the phase contrast (c) and (d). The polarization contrast ODT reconstruction suffers from increased noise in the region of the center of rotation, similar to what has been shown to be the case with phase contrast ODT. The noise at the center of rotation is approximately a factor 7 higher than outside of the center. 
This also shows that the noise reduction from off-axis placement is even more significant in polarization contrast ODT than in phase contrast ODT, where it was found to be in the order of a factor 2 for 720 projections [3]. Fig. 4. (a) Logarithm of the standard deviation $\sigma (n_e - n_o)$ of a single polarization contrast reconstructed slice. (b) Cross-section along the dashed line in figure (a) and the average standard deviation over all slices of the stack (red). (c) Logarithm of the standard deviation $\sigma (\Delta n_{avg})$ of the phase contrast reconstructed slice. (d) Cross-section along the dashed line in figure (c) and the average standard deviation over all slices of the stack (red). 3.5 Zebrafish sample preparation The sample is a 3 day old zebrafish embryo (wild type). The eggs are grown on a petri dish and subsequently placed in PTU (1-phenyl 2-thiourea) to prevent pigment formation. At 72 hours, the eggs are dechorionated and fixated in 4% paraformaldehyde. Then, the eggs are washed with phosphate-buffered saline three times, after which it is replaced with 100% MeOH in two cycles for dehydration. The embryos are placed in small cylinders (4 mm diameter) and mixed with agarose (2% mass-percentage). After the agarose is dry, the agarose containing the embryos is removed from the cylinders and placed as a whole in BABB, a mixture of benzyl alcohol (Sigma B-1042) and benzyl benzoate (Sigma B-6630) in a 1:2 ratio, which makes the sample completely transparent [14]. During this process, the RI of the sample becomes almost that of the BABB clearing solution. We used a clearing time of 3 hours (similar to [3]) that ensures that the sample is transparent enough for optical phase tomography, while at the same time maximizing remaining RI contrast in order to keep a good signal (RI contrast in the reconstruction) to noise (background) ratio in the final reconstruction.
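As an aside, the slice-by-slice filtered backprojection step described in Section 3.3 reduces to ramp filtering plus backprojection. Below is a minimal sketch with an assumed uniform-disc phantom; in practice a library FBP routine would be used:

```python
import numpy as np

def fbp(sinogram, angles):
    # Minimal parallel-beam FBP: ramp-filter each projection in the Fourier
    # domain, then backproject onto the reconstruction grid.
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    ramp[0] = 1.0 / (4 * n_det)          # DC weight of the continuous ramp bin
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    t = np.arange(n_det) - n_det / 2 + 0.5          # detector coordinates
    X, Y = np.meshgrid(t, t)
    recon = np.zeros((n_det, n_det))
    for a, proj in zip(angles, filtered):
        p = X * np.cos(a) + Y * np.sin(a)           # projection coordinate
        recon += np.interp(p, t, proj)
    return recon * np.pi / len(angles)

# Assumed phantom: a uniform disc, whose parallel projections are chord lengths
n, radius = 64, 20.0
angles = np.linspace(0, np.pi, 180, endpoint=False)
t = np.arange(n) - n / 2 + 0.5
sino = 2 * np.sqrt(np.maximum(radius**2 - t**2, 0.0))[None, :] * np.ones((len(angles), 1))
recon = fbp(sino, angles)
```

For the disc of unit density, the reconstruction recovers a value close to one inside the support and close to zero outside, illustrating why a linear projection function (Section 2.3) is required for this step.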
4. Results The polarization and phase contrast projections of a 3 day old zebrafish tail are shown in Fig. 5(a)-(b) and (d)-(e), respectively. The phase contrast projections are similar to our earlier work on ODT applied to zebrafish larvae [3]. In the polarization contrast projections most of the larva appears dark, due to the absence of birefringent tissue, except in the tail where the developing highly birefringent muscle tissue (myotome) is located. The polarization contrast results are similar to the 2D polarization contrast measurements of Jacoby et al. [15]. The histograms of the 3D polarization and phase contrast reconstructions are shown in Fig. 5(c) and (f) respectively. The polarization contrast histogram of the scaled birefringence shows two components, namely the background and the myotome tissue. In the phase contrast histogram of the polarization-averaged refractive index, multiple peaks corresponding to different organs are visible [3]. Fig. 5. Reconstructed amplitude (a) and (b) and phase projections (d) and (e) from two different angles of a 3 day old optically cleared zebrafish larva, illustrating the different contrasts obtained through polarization and phase contrast respectively. In (c) and (f) the histograms of the full 3D data set are plotted for the polarization and phase contrasts, respectively. The background contribution is indicated in both histograms, and the myotome and interstitial tissue for the polarization and phase contrast respectively. A 3D visualization of the phase contrast, the polarization contrast, the merged datasets and transverse cross-sections after tomographic reconstruction using FBP are shown in Fig. 6. It can be clearly seen from the visibility of the developing muscle tissue (myotome) that the phase and polarization contrast offer complementary contrasts, even though they spatially overlap. The anatomical structures are annotated based on reference data from microscopy [15] and OPT [16].
A striking result is the high contrast obtained in the polarization contrast projections compared to the phase projections. We quantify this by calculating the standard deviation of a background region outside of the center (since the level of noise is lower there), and estimate the mean of the signal in the tail at the same location for both the polarization and phase contrast reconstructions. For polarization contrast ODT, this yields an SNR of approximately SNR=$73$, and for the phase contrast ODT we obtain an SNR of approximately SNR=$15$. Polarization contrast ODT thus yields significantly higher SNR than phase contrast ODT for imaging the zebrafish tail. Fig. 6. 3D visualization of the phase (a) and polarization (b) contrast, and combined (c) ODT reconstructions of a 3 day old zebrafish larva tail. In phase contrast, the tail (in red) and the spinal cord (in purple) appear, but not the developing muscle tissue (myotome), which is birefringent. In the polarization contrast reconstruction the structure of the myotome can be clearly discerned. Insets show transverse cross sections in linear intensity scale taken at the dashed line. Scalebar for 3D reconstruction corresponds to 200 µm. 5. Discussion and conclusion We demonstrate 3D polarization contrast ODT, which has previously been achieved only with OPT. Applying it within the framework of ODT makes it possible to image both phase and polarization contrast and make use of the benefits of ODT such as numerical refocusing and extended depth of field, due to the fact that both phase and amplitude of the polarization contrast field are measured. 5.1 Polarization ODT contrast Coherent speckle causes increased noise levels close to the center of rotation in polarization contrast ODT, similar to conventional phase contrast ODT, and the same strategy of off-axis placement and numerical refocusing can be applied to reduce the noise level up to a factor of 7.
The polarization contrast ODT reconstruction yields a significantly higher signal to noise ratio compared to the phase contrast reconstruction. We attribute this to the fact that in phase contrast ODT the refractive index differences decrease during clearing, leading to a reduction of the signal to noise ratio in the reconstructed images. For polarization contrast ODT, the background is zero (no transmission in the absence of birefringence) and consequently leads to a relatively high contrast when birefringent tissue is present. Besides this qualitative argument, also quantitatively, the value of the average refractive index, which is proportional to $n_e + n_o$, and the birefringence $n_e - n_o$ may vary during the clearing process [17] and thus influence the image contrast in both ODT modes. 5.2 Limit on maximum projected $\delta$ Straightforward tomographic reconstruction only yields valid results for polarization contrast ODT in case $\delta$ is small. In polarization contrast projections of highly birefringent materials, such as a FEP (fluorinated ethylene propylene) tube, wrapping of $\delta$ is clearly visible as a dense amplitude modulation. For cleared biological samples we have not observed dense amplitude modulation and, for all practical purposes, the wrapping problem is absent. Even for uncleared samples with 0.5 mm of birefringent tissue, phase wrapping is absent for birefringence lower than $n_e - n_o=6\cdot 10^{-4}$, which is larger than the typical birefringence of uncleared tissue [7]. For applications outside of biomedicine, the wrapping of $\delta$ places a practical limitation on the amount of birefringence and/or the maximum sample thickness that can be imaged using conventional reconstruction. In principle, the correct projection and reconstruction can be retrieved in case the linearity requirement is violated using a modified unwrapping procedure based on the forward model.
However, further research is needed for application of this procedure to experimental data. 5.3 Absolute quantification of birefringence A limitation of the current method is that the polarization contrast is qualitative. Absolute quantification of the birefringence is challenging as the magnitude of the signal is dependent on the incident field distribution, sample optical absorption, the light-to-electron conversion, and the fiber orientation. In principle the first three factors can be divided out using a reference measurement, e.g., from the amplitude of the parallel polarization state projection. A further complication comes from the tomographic angle dependence of $\delta$ that causes a modulation outside of a continuous region of birefringence. In case the macroscopic assumption of uniform birefringence across a region does not apply, but the fiber orientation changes significantly on small length scales, this may cause reconstruction artifacts. The case of quantitative birefringence tomography (quantification of optic axis, $n_e$, and $n_o$) is more complicated as it requires more information per projection angle and a non-linear inversion scheme. This is outside of the scope of the current work. 5.4 Applicability of the uniaxial model The analysis and simulations in this paper are based on the assumption of uniaxial birefringence. The uniaxial model is a simple and widely used model in polarization microscopy, and applicable to fibrous structures such as myelin, elastin, and collagen. It would be valuable to extract the well-defined fiber orientation present in the uniaxial model from the data. Further research is needed to determine whether fiber orientation can be retrieved in 3D, for example by performing more measurements under different input polarizations and using a full vectorial reconstruction [18]. Although the uniaxial model works for a large class of tissues, some types of tissues exhibit biaxial birefringence [19].
In addition, some voxels may contain overlapping tissue fibers. Incorporating this in the tomographic reconstruction requires a more elaborate birefringence model.

We demonstrated 3D polarization contrast ODT. The developing muscle tissue in the tail of the zebrafish larva is known to be birefringent and cannot be discerned in the conventional phase contrast ODT reconstruction. By illuminating the sample with a single polarization input state and measuring both the parallel component (for the phase) and the orthogonal component (for the polarization contrast) with digital holography, a conventional and a polarization contrast ODT reconstruction of the same object can be obtained.

A. Appendix

Here we demonstrate the effect of the angular dependency of the amplitude projection on the tomographic reconstruction. The object we consider is a cylinder oriented as outlined in the theory section of this paper. The cylinder has radius $R$ and birefringence $n_{e}-n_{o}=\delta n$. For a plane wave traveling along the $z$-axis, linearly polarized along the $x$-axis, the polarization contrast is retrieved from the cross-polarized component transmitted through the sample. This component $U_{y}$ is given by the $y$ component of Eq. (3).
Using the relations $\delta =k \Delta \cos ^2(\alpha (\beta ))$, $\alpha =\gamma \sin (\beta )$ and $\varphi =\gamma \cos (\beta )$, the full dependence of $U_{y}$ on $\beta$ becomes

(A1)$$U_{y}(\beta)={-}i\textrm{e}^{i \epsilon} \sin (2\rho - 2 \gamma \cos (\beta )) \sin \left(\frac{1}{2} k \Delta \cos ^2(\gamma \sin (\beta ))\right) \, ,$$

and the amplitude of $U_{y}(\beta )$ is

(A2)$$|U_{y}(\beta)|=\left| \sin (2\rho - 2\gamma \cos (\beta )) \right| \left| \sin \left(\frac{1}{2} k \Delta \cos ^2(\gamma \sin (\beta ))\right)\right| \, .$$

For a cylinder located at the origin with a tilt $\gamma$ with respect to the $x$-axis of tomographic rotation, the cross-section seen by a wave traveling along the $z$-axis is an ellipse $f(y,z)$ with semi-major and semi-minor axes $a=R \sec (\gamma )$ and $b=R$, respectively. The Radon transform $\Re (f)$ for a 2D slice of the ellipse gives the path length experienced by the probing wave per projection angle $\beta$ and is given by [20]

(A3)$$\Re(f)= \begin{cases} \frac{2 R^2 \sec (\gamma ) \sqrt{A-p^2}}{A} & p^2\leq A \\ 0 & \textrm{otherwise} \end{cases}$$

with

(A4)$$A=R^2 \cos ^2(\beta ) \sec^2(\gamma )+R^2 \sin^2(\beta ) \, ,$$

where $p$ is the transverse coordinate along the projection. Replacing $\Delta$ in Eq. (A2) with $\Re (f)\, \delta n$, the effective amplitude projection function measured at the detector becomes

(A5)$$|U_{y}(p,\beta)|=\begin{cases} \left| \sin (2 \rho -2 \gamma \cos (\beta )) \sin \left(\frac{R^{2}\,\delta n\,k \sec (\gamma ) \cos ^2(\gamma \sin (\beta )) \sqrt{A-p^2}}{A}\right)\right| & p^2\leq A \\ 0 & \textrm{otherwise} \end{cases}$$

The projection functions $|U_{y}(p,\beta )|$, along with the resulting tomographic reconstructions, are plotted in Fig. 7 for tilt angles $\gamma =0^{\circ }$ (a-b) and $\gamma =54^{\circ }$ (c-d). The simulation parameters are cross-polarizer angle $\rho =27^{\circ }$, $\delta n=1\cdot 10^{-5}$, $R=1$ mm and $\lambda =633 \cdot 10^{-9}$ m.
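Equations (A3)-(A5) are straightforward to evaluate numerically. The helper below is a hypothetical sketch (not part of the published simulation code) using the simulation parameters above; it also confirms the property used in the discussion that the amplitude is exactly zero outside the projection of the cylinder, i.e., for $p^2 > A$.

```python
import numpy as np

def projection_amplitude(p, beta, R=1e-3, gamma=np.deg2rad(54.0),
                         rho=np.deg2rad(27.0), delta_n=1e-5, wavelength=633e-9):
    """Amplitude projection |U_y(p, beta)| of a tilted birefringent cylinder,
    assembled from Eqs. (A3)-(A5)."""
    k = 2.0 * np.pi / wavelength
    A = R**2 * np.cos(beta)**2 / np.cos(gamma)**2 + R**2 * np.sin(beta)**2   # Eq. (A4)
    # Radon transform of the elliptical cross-section, Eq. (A3):
    path = 2.0 * R**2 / np.cos(gamma) * np.sqrt(np.clip(A - p**2, 0.0, None)) / A
    amp = np.abs(np.sin(2.0 * rho - 2.0 * gamma * np.cos(beta))
                 * np.sin(0.5 * k * path * delta_n * np.cos(gamma * np.sin(beta))**2))
    return np.where(p**2 <= A, amp, 0.0)                                     # Eq. (A5)

print(projection_amplitude(0.0, 0.0))    # center of the projection: nonzero
print(projection_amplitude(2e-3, 0.0))   # outside the ellipse: exactly 0
```

Sweeping `p` and `beta` over a grid reproduces the sinograms of Fig. 7.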
For comparison, the case of a non-birefringent cylinder at $\gamma =54^{\circ }$ is shown (e-f). It can be seen that the angular dependency of the amplitude projections results in a modulation along the horizontal projection angle axis in Fig. 7(c). Since this does not cause a modulation along the transverse coordinate axis, and the amplitude is zero outside of the projection of the birefringent object, the contrast inside the birefringent sample is not modulated (Fig. 7(d)). Instead, it gives a slowly varying angular modulation in the background.

Fig. 7. Plot of the projection functions $|U_{y}(p,\beta )|$ along with the resulting tomographic reconstructions for tilt angles $\gamma =0^{\circ }$ (a-b) and $\gamma =54^{\circ }$ (c-d). The simulation parameters are cross-polarizer angle $\rho =27^{\circ }$, $\delta n=1 \cdot 10^{-5}$, $R=1$ mm and $\lambda =633 \cdot 10^{-9}$ m. For comparison, the case for $\gamma =54^{\circ }$ for a non-birefringent cylinder is shown (e-f).

We would like to thank Sonja Chocron and Jeroen Bakkers from the Hubrecht Institute for providing zebrafish larvae. We thank Dr. Miriam Menzel for useful discussions. The authors declare that there are no conflicts of interest related to this article.

1. K. Kim, K. S. Kim, H. Park, J. C. Ye, and Y. Park, "Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography," Opt. Express 21(26), 32269 (2013). [CrossRef]
2. W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, "Extended depth of focus in tomographic phase microscopy using a propagation algorithm," Opt. Lett. 33(2), 171 (2008). [CrossRef]
3. J. van Rooij and J. Kalkman, "Large-scale high-sensitivity optical diffraction tomography of zebrafish," Biomed. Opt. Express 10(4), 1782 (2019). [CrossRef]
4. R. Oldenbourg, "A new view on polarization microscopy," Nature 381(6585), 811–812 (1996). [CrossRef]
5. F. Massoumia, R. Juškaitis, M. A. A. Neil, and T.
Wilson, "Quantitative polarized light microscopy," J. Microsc. 209(1), 13–22 (2003). [CrossRef]
6. Z. Wang, L. J. Millet, M. U. Gillette, and G. Popescu, "Jones phase microscopy of transparent and anisotropic samples," Opt. Lett. 33(11), 1270 (2008). [CrossRef]
7. M. J. Everett, K. Schoenenberger, B. W. Colston, and L. B. D. Silva, "Birefringence characterization of biological tissue by use of optical coherence tomography," Opt. Lett. 23(3), 228 (1998). [CrossRef]
8. M. Fang, D. Dong, C. Zeng, X. Liang, X. Yang, A. Arranz, J. Ripoll, H. Hui, and J. Tian, "Polarization-sensitive optical projection tomography for muscle fiber imaging," Sci. Rep. 6(1), 19241 (2016). [CrossRef]
9. J. Kostencka, T. Kozacki, M. Dudek, and M. Kujawińska, "Noise suppressed optical diffraction tomography with autofocus correction," Opt. Express 22(5), 5731–5745 (2014). [CrossRef]
10. M. Menzel, K. Michielsen, H. D. Raedt, J. Reckfort, K. Amunts, and M. Axer, "A Jones matrix formalism for simulating three-dimensional polarized light imaging of brain tissue," J. R. Soc., Interface 12(111), 20150734 (2015). [CrossRef]
11. I. Yamaguchi and T. Zhang, "Phase-shifting digital holography," Opt. Lett. 22(16), 1268 (1997). [CrossRef]
12. D. C. Ghiglia and L. A. Romero, "Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods," J. Opt. Soc. Am. A 11(1), 107 (1994). [CrossRef]
13. A. Limaye, "Drishti: a volume exploration and presentation tool," Proc. SPIE 8506, 85060X (2012). [CrossRef]
14. A. d'Esposito, D. Nikitichev, A. Desjardins, S. Walker-Samuel, and M. F. Lythgoea, "Quantification of light attenuation in optically cleared mouse brains," J. Biomed. Opt. 20(8), 080503 (2015). [CrossRef]
15. A. S. Jacoby, E. Busch-Nentwich, R. J. Bryson-Richardson, T. E. Hall, J. Berger, S. Berger, C. Sonntag, C. Sachs, R. Geisler, D. L. Stemple, and P. D.
Currie, "The zebrafish dystrophic mutant softy maintains muscle fibre viability despite basement membrane rupture and muscle detachment," Development 136(19), 3367–3376 (2009). [CrossRef]
16. R. J. Bryson-Richardson, S. Berger, T. F. Schilling, T. E. Hall, N. J. Cole, A. J. Gibson, J. Sharpe, and P. D. Currie, "Fishnet: an online database of zebrafish anatomy," BMC Biol. 5(1), 34 (2007). [CrossRef]
17. D. Chen, N. Zeng, Q. Xie, H. He, V. V. Tuchin, and H. Ma, "Mueller matrix polarimetry for characterizing microstructural variation of nude mouse skin during tissue optical clearing," Biomed. Opt. Express 8(8), 3559 (2017). [CrossRef]
18. V. Lauer, "New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope," J. Microsc. 205(2), 165–176 (2002). [CrossRef]
19. M. Ravanfar and G. Yao, "Measurement of biaxial optical birefringence in articular cartilage," Appl. Opt. 58(8), 2021 (2019). [CrossRef]
20. A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE Press, 2001).
A unified approach to fractal Hilbert-type inequalities

Tserendorj Batbold, Mario Krnić & Predrag Vuković

Journal of Inequalities and Applications, volume 2019, Article number: 117 (2019)

In the present study we provide a unified treatment of fractal Hilbert-type inequalities. Our main result is a pair of equivalent fractal Hilbert-type inequalities including a general kernel and weight functions. A particular emphasis is devoted to a class of homogeneous kernels. In addition, we impose appropriate conditions under which the constants appearing on the right-hand sides of the established inequalities are the best possible. As an application, our results are compared with some previously known ones from the literature.

The celebrated Hilbert inequality (see [8]) in its integral form asserts that
$$ \int _{0}^{\infty } \int _{0}^{\infty }\frac{f(x)g(y)}{x+y}\,dx\,dy\leq \frac{\pi }{\sin \frac{\pi }{p}} \biggl[ \int _{0}^{\infty }f^{p}(x) \,dx \biggr]^{\frac{1}{p}} \biggl[ \int _{0}^{\infty }g^{q}(y) \,dy \biggr]^{\frac{1}{q}}, $$
where \(f,g:(0,\infty )\rightarrow \mathbb{R}\) are non-negative integrable functions and p, q is a pair of conjugate exponents, i.e., \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\). In addition, the constant \(\frac{\pi }{\sin \frac{\pi }{p}}\) is the best possible in the sense that it cannot be replaced by a smaller positive constant so that the inequality remains valid. Hardy et al. [8] noticed that one can assign to (1) its equivalent form
$$ \int _{0}^{\infty } \biggl[ \int _{0}^{\infty }\frac{f(x)}{x+y}\,dx \biggr]^{p}\,dy \leq \biggl( \frac{\pi }{\sin \frac{\pi }{p}} \biggr)^{p} \int _{0}^{\infty }f^{p}(x) \,dx, $$
in the sense that (1) implies (2) and vice versa. Over the decades, Hilbert-type inequalities (1) and (2) have been extensively studied by numerous authors.
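Inequality (1) is easy to probe numerically. The sketch below is our own illustration, not from the paper: for $p=q=2$ the constant is $\pi/\sin(\pi/2)=\pi$, and with the hypothetical test functions $f=g=$ indicator of $[1,2]$ (so $\|f\|_2=\|g\|_2=1$) a midpoint rule shows the inequality holds with room to spare.

```python
import numpy as np

n = 2000
x = np.linspace(1.0, 2.0, n, endpoint=False) + 0.5 / n   # midpoints of [1, 2]
dx = 1.0 / n
X, Y = np.meshgrid(x, x)
lhs = np.sum(1.0 / (X + Y)) * dx * dx    # double integral of f(x) g(y) / (x + y)
nf = np.sqrt(np.sum(np.ones_like(x)) * dx)   # ||f||_2 = ||g||_2 = 1
rhs = np.pi * nf * nf
print(lhs, rhs)   # roughly 0.34 versus 3.14
```

The exact left-hand side here is $10\ln 2 - 6\ln 3 \approx 0.3398$; the gap to the constant $\pi$ closes only for near-extremal families of functions.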
A rich variety of extensions included inequalities with more general kernels, weight functions, and integration domains, as well as refinements of the initial Hilbert-type inequalities (1) and (2). It is important to point out that these inequalities are still of interest to numerous authors. For an initial development of the Hilbert-type inequalities, the reader is referred to [8], while some recent results are collected in the monographs [3] and [10]. Nowadays, an interesting topic in connection with classical inequalities is their extension to certain fractal spaces via the local fractional calculus. The local fractional calculus is primarily utilized to handle various non-differentiable problems that appear in complex systems of real-world phenomena. In particular, the non-differentiability occurring in science and engineering has been modeled by local fractional ordinary or partial differential equations. On the other hand, local fractional calculus is also an important tool in pure mathematics. Recently, by virtue of the local fractional calculus, a whole series of classical real inequalities have been extended to hold on certain fractal spaces. For the reader's convenience, denote by \({}_{a}I_{b}^{\alpha }f(x)\) and \({}_{a}I_{b}^{\alpha }[{}_{a}I_{b}^{\alpha }h(x,y)]\) the local fractional integrals
$$ {}_{a}I_{b}^{\alpha }f(x)= \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b}f(x) (dx)^{\alpha } $$
and
$$ {}_{a}I_{b}^{\alpha }\bigl[{}_{a}I_{b}^{\alpha }h(x,y) \bigr]=\frac{1}{\varGamma ^{2}(1+\alpha )} \int _{a}^{b} \int _{a}^{b}h(x,y) (dx)^{\alpha }(dy)^{\alpha }, $$
where \(0<\alpha \leq 1\) and Γ stands for the usual gamma function defined by \(\varGamma (a)=\int _{0}^{\infty }t^{a-1}e^{-t}\,dt\), \(a>0\). Further, let \(C_{\alpha }(a,b)\) stand for the set of local fractional continuous functions on the interval \((a,b)\).
Recently, Liu and Sun [13] established a pair of equivalent fractal Hilbert-type inequalities expressed in terms of the above fractional integrals. Namely, they showed that if \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), \(0<\alpha \leq 1\), and if \(f,g\in C_{\alpha }(0,\infty )\) are non-negative functions, then the following inequalities hold:
$$\begin{aligned} &{}_{0}I_{\infty }^{\alpha } \biggl[{}_{0}I_{\infty }^{\alpha } \frac{f(x)g(y)}{\max \{x^{\alpha },y^{\alpha }\}} \biggr] \\ &\quad \leq \eta (\alpha ) \bigl[{}_{0}I_{\infty }^{\alpha } \bigl(x^{\frac{\alpha }{2}(p-2)}f^{p}(x) \bigr) \bigr]^{\frac{1}{p}} \bigl[{}_{0}I_{\infty }^{\alpha } \bigl(y^{\frac{\alpha }{2}(q-2)}g^{q}(y) \bigr) \bigr]^{\frac{1}{q}} \end{aligned}$$
and
$$ {}_{0}I_{\infty }^{\alpha } \biggl[ y^{\frac{\alpha (2-q)}{2(q-1)}} \biggl[{}_{0}I_{\infty }^{\alpha } \frac{f(x)}{\max \{x^{\alpha },y^{\alpha }\}} \biggr]^{p} \biggr] < \eta ^{p}(\alpha )\, {}_{0}I_{\infty }^{\alpha } \bigl(x^{\frac{\alpha }{2}(p-2)}f^{p}(x) \bigr), $$
where \(\eta (\alpha )=\frac{2^{\alpha +1}}{\varGamma (1+\alpha )}\) and provided that the integrals on the right-hand sides of (3) and (4) are convergent. In addition, it has also been shown that the constants \(\eta (\alpha )\) and \(\eta ^{p}(\alpha )\) are the best possible. For some related extensions of classical inequalities to fractal spaces the reader is referred to recent papers [4,5,6, 9, 11,12,13, 16, 17]. In addition, for some recent results closely connected to this topic the reader is referred to [1, 2, 7, 15] and the references therein. The main objective of the present paper is a unified treatment of fractal Hilbert-type inequalities. In other words, we will establish a pair of fractal Hilbert-type inequalities with a general kernel and general weight functions that cover the above-presented Hilbert-type inequalities. The paper is divided into four sections as follows: After this introductory part, in Sect.
2 we give a brief overview of basic definitions and properties of the local fractional calculus that will be the main tools in establishing our results. In Sect. 3, we derive our main result, i.e., a pair of equivalent fractal Hilbert-type inequalities with a general kernel and weight functions. A particular emphasis is devoted to a class of homogeneous kernels. In addition, we impose conditions for which the constants appearing on the right-hand sides of the corresponding Hilbert-type inequalities are the best possible. As an application, in Sect. 4 we discuss some particular choices of homogeneous kernels and power weight functions. In such a way, we show that the fractal Hilbert-type inequalities presented in this introduction are consequences of our general results.

Preliminaries on local fractional calculus

For the reader's convenience, in this section we give a brief overview of the local fractional calculus. More precisely, we give basic definitions and properties of the local fractional derivative and integral developed in [19] (see also [18]). Let \(\mathbb{R}^{\alpha }\), \(0<\alpha \leq 1\), be an α-type fractal set of real numbers. For \(a^{\alpha }, b^{\alpha } \in \mathbb{R}^{\alpha }\), we define addition and multiplication by
$$ a^{\alpha }+b^{\alpha }:=(a+b)^{\alpha }, \qquad a^{\alpha } \cdot b^{\alpha }=a^{\alpha }b^{\alpha }:=(ab)^{\alpha }. $$
With these two binary operations, \(\mathbb{R}^{\alpha }\) becomes a field with an additive identity \(0^{\alpha }\) and a multiplicative identity \(1^{\alpha }\). The starting point in introducing the local fractional calculus on \(\mathbb{R}^{\alpha }\) is the concept of local fractional continuity. A non-differentiable function \(f:\mathbb{R}\to \mathbb{R}^{\alpha }\) is said to be local fractional continuous at \(x_{0}\) if, for any \(\varepsilon >0\), there exists \(\delta >0\) such that \(|x-x_{0}|< \delta \) implies that $$ \bigl\vert f(x)-f(x_{0}) \bigr\vert < \varepsilon ^{\alpha }.
$$ The set of local fractional continuous functions on interval I is denoted by \(C_{\alpha }(I)\). The local fractional derivative of f of order α at \(x=x_{0}\) is defined by $$ f^{(\alpha )}(x_{0})=\frac{d^{\alpha }f(x)}{dx^{\alpha }}\bigg| _{x=x_{0}}= \lim_{x\to x_{0}}\frac{\varGamma (1+\alpha )(f(x)-f(x_{0}))}{(x-x _{0})^{\alpha }}, $$ where Γ stands for a usual gamma function. Now, let \(f^{(\alpha )}(x)=D_{x}^{\alpha }f(x)\). If there exists \(f^{(k+1) \alpha }(x)=\overbrace{D_{x}^{\alpha }\cdots D_{x}^{\alpha }}^{k+1}f(x)\) for every \(x\in I\), then we denote \(f\in D_{(k+1)\alpha }(I)\), where \(k=0,1,2,\ldots \) . The local fractional integral is defined for a class of local fractional continuous functions. Let \(f\in C_{\alpha }[a,b]\) and let \(P=\{t_{0},t _{1}, \ldots , t_{N}\}\), \(N\in \mathbb{N}\), be a partition of interval \([a,b]\) such that \(a=t_{0}< t_{1}<\cdots <t_{N-1}<t_{N}=b\). Further, for this partition P, let \(\Delta t_{j}=t_{j+1}-t_{j}\), \(j=0,\ldots ,N-1\), and \(\Delta t=\max \{\Delta t_{1},\Delta t_{2},\ldots , \Delta t_{N-1} \}\). Then the local fractional integral of f on the interval \([a,b]\) of order α (denoted by \({}_{a}I_{b}^{\alpha }f(x)\)) is defined by $$ {}_{a}I_{b}^{\alpha }f(x)=\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b}f(t) (dt)^{ \alpha }= \frac{1}{\varGamma (1+\alpha )}\lim_{\Delta t\to 0}\sum_{j=0} ^{N-1}f(t_{j}) (\Delta t_{j})^{\alpha }. $$ The above definition implies that \({}_{a}I_{b}^{\alpha }f(x)=0\) if \(a=b\) and \({}_{a}I_{b}^{\alpha }f(x)=-{_{b}I_{a}^{\alpha }}f(x)\) if \(a< b\). Similar to the Riemann integral, we have the following analogue of the Newton–Leibnitz formula on the fractal space. Namely, if \(f=g^{( \alpha )}\in C_{\alpha }[a,b]\), then $$ {}_{a}I_{b}^{\alpha }f(x)=g(b)-g(a). 
$$ In particular, if \(f(x)=x^{k\alpha }\), \(k\in \mathbb{R}\), then $$ \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} x^{k\alpha }(dx)^{\alpha }= \frac{ \varGamma (1+k\alpha )}{\varGamma (1+(k+1)\alpha )}\bigl(b^{(k+1)\alpha }-a^{(k+1) \alpha }\bigr). $$ In order to conclude our discussion regarding fractional integrals, we give a variant of the change of variables theorem in the present setting. Namely, if \(g\in D_{\alpha }[a,b ]\) and \((f\circ g)\in C_{ \alpha }[g(a),g(b)]\), then the following relation holds: $$ {}_{a}I_{b}^{\alpha }(f\circ g) (s) \bigl[g'(s)\bigr]^{\alpha }={{}_{g(a)}I_{g(b)} ^{\alpha }}f(x). $$ It should be noticed here that if \(\alpha =1\), then the local fractional calculus reduces to the classical real calculus. For more details about the above presented concept of fractional differentiability and integrability, the reader is referred to [19] and the references therein. The crucial step in establishing Hilbert-type inequalities is the well-known Hölder inequality. A fractal version of the Hölder inequality asserts that if \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), then the inequality $$\begin{aligned} \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b}f(x)g(x) (dx)^{\alpha } \leq & \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} f^{p}(x) (dx)^{\alpha } \biggr]^{\frac{1}{p}} \\ &{}\times \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} g^{q}(x) (dx)^{ \alpha } \biggr]^{\frac{1}{q}} \end{aligned}$$ holds for all \(f,g\in C_{\alpha }(a,b)\). 
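The power-rule formula (5) admits a simple sanity check in the classical limit: for $\alpha = 1$ it must reduce to $\int_a^b x^k\,dx = (b^{k+1}-a^{k+1})/(k+1)$, since $\Gamma(1+k)/\Gamma(2+k)=1/(k+1)$. A minimal sketch (our own illustration, using the standard gamma function):

```python
import math

def frac_power_integral(k, a, b, alpha):
    """Right-hand side of formula (5):
    (1/Gamma(1+alpha)) * int_a^b x^{k alpha} (dx)^alpha."""
    g = math.gamma
    return (g(1.0 + k * alpha) / g(1.0 + (k + 1) * alpha)
            * (b**((k + 1) * alpha) - a**((k + 1) * alpha)))

# Classical limit alpha = 1: the usual power rule, here int_0^2 x^3 dx = 4.
k, a, b = 3, 0.0, 2.0
print(frac_power_integral(k, a, b, 1.0), (b**(k + 1) - a**(k + 1)) / (k + 1))
```

For $0<\alpha<1$ the formula cannot be checked against a Riemann sum (the defining sums live on a fractal set), but the $\alpha=1$ reduction confirms the normalization of the gamma factors.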
However, we will utilize a two-variable version of the fractal Hölder inequality which claims that
$$\begin{aligned}& \frac{1}{\varGamma ^{2}(1+\alpha )} \iint _{S^{(\beta )}}h(x,y)F(x,y)G(x, y) (dx)^{\alpha }(dy)^{\alpha } \\& \quad \leq \biggl[\frac{1}{\varGamma ^{2}(1+\alpha )} \iint _{S^{(\beta )}}h(x,y)F^{p}(x,y) (dx)^{\alpha }(dy)^{\alpha } \biggr]^{\frac{1}{p}} \\& \qquad {} \times \biggl[\frac{1}{\varGamma ^{2}(1+\alpha )} \iint _{S^{(\beta )}}h(x,y)G^{q}(x, y) (dx)^{\alpha }(dy)^{\alpha } \biggr]^{\frac{1}{q}} \end{aligned}$$
holds for all \(F,G,h\in C_{\alpha }(S^{(\beta )})\), where \(S^{(\beta )}\) is a fractal surface. For the proofs of the above inequalities, the reader is also referred to [19]. Finally, to conclude this section we give a definition of a fractal beta function. Recall that the usual beta function is defined by \(B(a,b)=\int _{0}^{1} t^{a-1}(1-t)^{b-1}\,dt\), \(a,b>0\). On the other hand, the fractal beta function (see [9]) is defined by
$$ B_{\alpha }(a,b)=\frac{1}{\varGamma (1+\alpha )} \int _{0}^{1} t^{\alpha (a-1)}(1-t)^{\alpha (b-1)}(dt)^{\alpha }. $$
Utilizing the substitution \(t=1/(x+1)\), the above formula can be rewritten as
$$ B_{\alpha }(a,b)=\frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }\frac{x^{\alpha (b-1)}}{(1^{\alpha }+x^{\alpha })^{a+b}}(dx)^{\alpha }, $$
which will be a more suitable form for our further investigation.

In this section, we develop a unified treatment of fractal Hilbert-type inequalities. In other words, we will establish a pair of general Hilbert-type inequalities that cover the particular fractal inequalities presented in the introduction. Our main result refers to a general kernel which is local fractional continuous on the fractal surface \((a,b)^{2}:=(a,b)\times (a,b)\).

Theorem 1

Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), \(0<\alpha \leq 1\), and let \(K\in C_{\alpha }(a,b)^{2}\), \(\varphi ,\psi \in C_{\alpha }(a,b)\) be non-negative functions.
If the functions F and G are defined by $$ F^{p} (x)= {}_{a}I_{b}^{\alpha } \bigl(K(x,y)\psi ^{-p}(y)\bigr), \qquad G^{q} (y)= {}_{a}I_{b}^{\alpha }\bigl(K(x,y)\phi ^{-q}(x)\bigr), $$ then, for all non-negative functions \(f,g\in C_{\alpha }(a,b)\), the inequalities $$ {}_{a}I_{b}^{\alpha } \bigl({}_{a}I_{b}^{\alpha }\bigl(K(x,y)f(x)g(y)\bigr) \bigr)\leq \bigl[ {}_{a}I_{b}^{\alpha }({\varphi }Ff)^{p} (x) \bigr]^{\frac{1}{p}} \bigl[{}_{a}I_{b}^{\alpha } \bigl( ({\psi }Gg)^{q} (y)\bigr) \bigr]^{\frac{1}{q}} $$ $$ {}_{a}I_{b}^{\alpha } \bigl( (G {\psi })^{-p}(y) \bigl[{}_{a}I_{b}^{ \alpha } \bigl(K(x,y)f(x)\bigr) \bigr]^{p} \bigr)\leq {{}_{a}I_{b}^{\alpha }} ( {\varphi }Ff)^{p} (x) $$ hold and are equivalent. The left-hand side of inequality (10) can be rewritten as $$\begin{aligned} &\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)f(x)g(y) (dx)^{ \alpha }(dy)^{\alpha } \\ &\quad =\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)f(x)\frac{ \varphi (x)}{\psi (y)}g(y) \frac{\psi (y)}{\varphi (x)}(dx)^{\alpha }(dy)^{ \alpha }, \end{aligned}$$ which is now suitable for the application of the Hölder inequality (7). Hence, we obtain $$\begin{aligned} &\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)f(x)g(y) (dx)^{ \alpha }(dy)^{\alpha } \\ &\quad \leq \biggl[\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a} ^{b} K(x,y)f^{p}(x) \frac{\varphi ^{p} (x)}{\psi ^{p} (y)}(dx)^{\alpha }(dy)^{ \alpha } \biggr]^{\frac{1}{p}} \\ &\qquad {}\times \biggl[\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)g^{q}(y) \frac{\psi ^{q} (y)}{\varphi ^{q} (x)}(dx)^{ \alpha }(dy)^{\alpha } \biggr]^{\frac{1}{q}}. \end{aligned}$$ Now, by virtue of the Fubini theorem (see, e.g., [14]), we can switch the order of integration in the double integral, so by taking into account the definitions of functions F and G, we obtain (10), as claimed. 
Our next step is to show the equivalence of inequalities (10) and (11). Therefore, suppose that inequality (10) holds and define the function g by $$ g(y)= G^{-p}(y){\psi }^{-p}(y) \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} K(x,y)f(x) (dx)^{\alpha } \biggr]^{p-1}. $$ Now, since \(\frac{1}{p}+\frac{1}{q}=1\), relation (10) implies the inequality $$\begin{aligned}& \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} G^{-p}(y){\psi }^{-p}(y) \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} K(x,y)f(x) (dx)^{ \alpha } \biggr]^{p} (dy)^{\alpha } \\& \quad = \frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)f(x)g(y) (dx)^{ \alpha }(dy)^{\alpha } \\& \quad \leq \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} ({\varphi }Ff)^{p} (x) (dx)^{\alpha } \biggr]^{\frac{1}{p}} \biggl[ \frac{1}{ \varGamma (1+\alpha )} \int _{a}^{b} ({\psi }Gg)^{q} (y) (dy)^{\alpha } \biggr] ^{\frac{1}{q}} \\& \quad = \biggl[ \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} ({\varphi }Ff)^{p} (x) (dx)^{\alpha } \biggr]^{\frac{1}{p}} \\& \qquad {}\times \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} (G \psi )^{q-pq}(y) \biggl[ \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} K(x,y)f(x) (dx)^{ \alpha } \biggr]^{q(p-1)} (dy)^{\alpha } \biggr]^{\frac{1}{q}}, \end{aligned}$$ which reduces to (11). On the other hand, suppose that inequality (11) holds. 
Then yet another application of the Hölder inequality yields
$$\begin{aligned}& \frac{1}{\varGamma ^{2} (1+\alpha )} \int _{a}^{b} \int _{a}^{b} K(x,y)f(x)g(y) (dx)^{\alpha }(dy)^{\alpha } \\& \quad = \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} \biggl[{\psi }^{-1}(y)G^{-1}(y) \int _{a}^{b} K(x,y)f(x) (dx)^{\alpha } \biggr] \psi (y)G(y)g(y) (dy)^{\alpha } \\& \quad \leq \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} {\psi }^{-p}(y) G^{-p}(y) \biggl(\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} K(x,y)f(x) (dx)^{\alpha } \biggr)^{p} (dy)^{\alpha } \biggr]^{\frac{1}{p}} \\& \qquad {} \times \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} (\psi Gg)^{q}(y) (dy)^{\alpha } \biggr]^{\frac{1}{q}} \\& \quad \leq \biggl[\frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} ({\varphi }Ff)^{p} (x) (dx)^{\alpha } \biggr]^{\frac{1}{p}} \biggl[ \frac{1}{\varGamma (1+\alpha )} \int _{a}^{b} ({\psi }Gg)^{q} (y) (dy)^{\alpha } \biggr]^{\frac{1}{q}}, \end{aligned}$$
which provides (10). Consequently, inequalities (10) and (11) are equivalent. □

It is not hard to see that our Theorem 1 covers the fractal Hilbert-type inequalities (3) and (4) presented in the introduction. This follows by choosing suitable power functions φ, ψ in relations (10) and (11). However, this will not be done at this moment. Namely, the kernel \(K_{1}(x,y)=1/\max \{x^{\alpha }, y^{\alpha }\}\) appearing in (3) and (4) possesses an important property: it is a homogeneous function. Therefore, our next step is to derive a consequence of Theorem 1 which refers to homogeneous kernels. The fractal Hilbert-type inequalities presented in the introduction will then follow as a simple consequence of this result. Recall that the function \(K\in C_{\alpha }(0,\infty )^{2}\) is said to be homogeneous of degree \(-\alpha \lambda \), \(\lambda >0\), if \(K(tx,ty)=t^{-\alpha \lambda }K(x,y)\) for all \(t>0\).
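For instance, the kernel $K_1(x,y)=1/\max\{x^{\alpha },y^{\alpha }\}$ from (3) and (4) is homogeneous of degree $-\alpha$, i.e., $\lambda = 1$, since $K_1(tx,ty)=t^{-\alpha}K_1(x,y)$. A quick numerical spot-check (our own illustration) confirms this:

```python
import random

alpha, lam = 0.7, 1.0   # K_1 has degree -alpha * lam with lam = 1
K = lambda x, y: 1.0 / max(x**alpha, y**alpha)

random.seed(0)
for _ in range(1000):
    x, y, t = (random.uniform(0.1, 10.0) for _ in range(3))
    # homogeneity: K(t x, t y) = t^{-alpha*lam} K(x, y), up to rounding
    assert abs(K(t * x, t * y) - t**(-alpha * lam) * K(x, y)) <= 1e-12 * K(x, y)
print("K_1 is homogeneous of degree", -alpha * lam)
```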
In order to formulate and prove the corresponding result, we need the following definition. For a non-negative function \(K\in C_{\alpha }(0,\infty )^{2}\), we define $$ k_{\alpha }(\eta )= {}_{0}I_{\infty }^{\alpha } K(1,t)t^{-\alpha \eta }. $$ If nothing else is explicitly stated, we assume that the integral \(k_{\alpha }(\eta )\) converges for considered values of η. Now, we are ready to establish a pair of fractal Hilbert-type inequalities that correspond to a class of homogeneous kernels. Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), and let \(f,g\in C_{\alpha }(0, \infty )\) be non-negative functions. If \(K\in C_{\alpha }(0,\infty )^{2}\) is a non-negative homogeneous function of degree \(-\alpha \lambda \), \(\lambda >0\), then the following inequalities hold: $$\begin{aligned}& {{}_{0}I_{\infty }^{\alpha }} \bigl( {{}_{0}I_{\infty }^{\alpha }} \bigl(K(x,y)f(x)g(y)\bigr)\bigr) \\& \quad \leq L \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha -\alpha \lambda +\alpha p(A_{1}-A_{2})} f^{p} (x) \bigr) \bigr]^{ \frac{1}{p}} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( y^{\alpha - \alpha \lambda +\alpha q(A_{2}-A_{1})} g^{q}(y) \bigr) \bigr]^{ \frac{1}{q}} \end{aligned}$$ $$\begin{aligned}& {{}_{0}I_{\infty }^{\alpha }} \bigl[ y^{\alpha (p-1)(\lambda -1)+ \alpha p(A_{1}-A_{2})} \bigl({{}_{0}I_{\infty }^{\alpha }}\bigl(K(x,y)f(x)\bigr) \bigr) ^{p} \bigr] \\& \quad \leq L^{p} \bigl[ {{}_{0}I_{\infty }^{\alpha }} x^{\alpha -\alpha \lambda +\alpha p(A_{1}-A_{2})} f^{p} (x) \bigr], \end{aligned}$$ where \(L=k_{\alpha }^{1/p}(pA_{2})k_{\alpha }^{1/q}(2-\lambda -qA_{1})\). In addition, relations (13) and (14) are equivalent. We employ inequalities (10) and (11) with power functions \(\varphi (x)=x^{\alpha A_{1}}\) and \(\psi (y)=y^{\alpha A _{2}}\). 
Furthermore, making use of (9), it follows that $$ F^{p} (x)=\frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }K(x,y)y^{- \alpha pA_{2}}(dy)^{\alpha } $$ $$ G^{q} (x)=\frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }K(x,y)x^{- \alpha qA_{1}}(dx)^{\alpha }. $$ In addition, since K is a homogeneous function of degree \(-\alpha \lambda \), \(\lambda >0\), a change of variables \(t=y/x\) provides $$\begin{aligned} F^{p} (x)&=x^{\alpha -\alpha \lambda -\alpha pA_{2}} \frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }K \biggl(1,\frac{y}{x} \biggr) \biggl(\frac{y}{x} \biggr) ^{-\alpha pA_{2}}\frac{1}{x^{\alpha }} (dy)^{\alpha } \\ &= x^{\alpha -\alpha \lambda -\alpha pA_{2}} \frac{1}{\varGamma (1+ \alpha )} \int _{0}^{\infty }K(1,t)t^{-\alpha pA_{2}}(dt)^{\alpha } \\ &=x^{\alpha -\alpha \lambda -\alpha pA_{2}}k_{\alpha }(pA_{2}) \end{aligned}$$ due to (6). Following the lines as in the previous step, we also obtain $$ G^{q} (y)=y^{\alpha -\alpha \lambda -\alpha qA_{1}} \frac{1}{\varGamma (1+ \alpha )} \int _{0}^{\infty }K (t,1 )t^{-\alpha qA_{1}}(dt)^{ \alpha }. $$ Now, yet another application of the change of variables rule (6) with \(u=t^{-1}\), gives $$\begin{aligned} G^{q} (y)&=-y^{\alpha -\alpha \lambda -\alpha qA_{1}}\frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }K \biggl( 1,\frac{1}{t} \biggr) \biggl(\frac{1}{t} \biggr) ^{\alpha \lambda +\alpha qA_{1}-2\alpha } \bigl(-t^{-2\alpha }\bigr) (dt)^{ \alpha } \\ &=y^{\alpha -\alpha \lambda -\alpha qA_{1}}\frac{1}{\varGamma (1+\alpha )} \int _{0}^{\infty }K( 1,u) u^{\alpha \lambda +\alpha qA_{1}-2\alpha }(du)^{\alpha } \\ &= y^{\alpha -\alpha \lambda -\alpha qA_{1}}k_{\alpha }(2-\lambda -qA _{1}). \end{aligned}$$ Finally, inequalities (13) and (14) follow from relations (10), (11), (15), and (16). □ It should be noticed here that Theorem 2 holds for arbitrary parameters \(A_{1}\) and \(A_{2}\) such that the constant L and the integrals on the right-hand sides of (13) and (14) are convergent. 
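To make the remark above concrete, the constant k_α(η) can be evaluated numerically in the classical case α = 1. For the kernel K(x, y) = 1/max{x, y}^λ one has K(1, t) = 1/max{1, t}^λ, and at η = 1 − λ/2 the closed form derived further below gives k(1 − λ/2) = 4/λ, the α = 1 value of 2^{α+1}/(λ^α Γ(1+α)). The midpoint quadrature with endpoint substitutions is purely an illustrative choice, not the paper's method:

```python
# Numerical check (alpha = 1) of k(eta) = \int_0^infty K(1, t) t**(-eta) dt
# for K(1, t) = 1 / max(1, t)**lam.  The substitutions remove the endpoint
# singularities so a plain midpoint rule converges quickly.

def k_max_kernel(eta, lam, n=50000):
    h = 1.0 / n
    # On (0, 1): K(1, t) = 1; substitute t = s**2, dt = 2 s ds.
    left = h * sum(2.0 * ((i + 0.5) * h) ** (1.0 - 2.0 * eta) for i in range(n))
    # On (1, inf): K(1, t) = t**(-lam); substitute t = s**(-2), dt = -2 s**(-3) ds.
    right = h * sum(2.0 * ((i + 0.5) * h) ** (2.0 * (eta + lam) - 3.0) for i in range(n))
    return left + right

for lam in (1.0, 1.5, 3.0):
    approx = k_max_kernel(1.0 - lam / 2.0, lam)
    print(abs(approx - 4.0 / lam) < 1e-6)  # True for each lam
```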
Generally speaking, we are not able to prove whether or not the constants L and \(L^{p}\) appearing on the right-hand sides of (13) and (14) are the best possible. However, it turns out that these constants are the best possible for a wide set of parameters \(A_{1}\), \(A_{2}\) and a weak condition on the kernel K. In order to establish the corresponding result, we first need the following lemma. Lemma 1 Let \(\lambda >0\), and let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\). If \(K\in C_{\alpha }(0,\infty )^{2}\) is a non-negative function such that \(K(1,t)\) is bounded on \((0,1)\), then the following relation holds: $$ {{}_{1}I_{\infty }^{\alpha }} \bigl[ x^{-\alpha (1+\varepsilon )} {{{}_{0}I_{1/x}^{\alpha }}} \bigl(t^{-\alpha pA_{2}-\frac{\varepsilon \alpha }{q}}K(1,t) \bigr) \bigr]\leq O(1),\quad \varepsilon \rightarrow 0^{+}, $$ where \(A_{2}\leq \frac{1}{2p}\). From the hypotheses, we have \(K(1,t)\leq C\) for some \(C>0\) and every \(t\in (0,1)\). Then it follows that $$\begin{aligned} &{{}_{1}I_{\infty }^{\alpha }} \bigl[x^{-\alpha (1+\varepsilon )} {{{}_{0}I_{1/x}^{\alpha }}} \bigl(t^{-\alpha pA_{2}-\frac{\varepsilon \alpha }{q}}K(1,t) \bigr) \bigr] \\ &\quad \leq C {{}_{1}I_{\infty }^{\alpha }} \bigl[x^{-\alpha } {{{}_{0}I _{1/x}^{\alpha }} } \bigl(t^{-\frac{\alpha }{2}-\frac{\varepsilon \alpha }{q}} \bigr) \bigr]. 
\end{aligned}$$ Furthermore, utilizing the change of variables rule (6) with \(g(t)=t^{\frac{1}{2}-\frac{\varepsilon }{q}}\), \([g'(t)]^{\alpha }= (\frac{1}{2}-\frac{\varepsilon }{q} )^{\alpha }t^{-\frac{ \alpha }{2}-\frac{\varepsilon \alpha }{q}}\), we obtain $$\begin{aligned} &{{}_{1}I_{\infty }^{\alpha }} \bigl[x^{-\alpha } \bigl({{{}_{0}I_{1/x} ^{\alpha }}} \bigl(t^{-\frac{\alpha }{2}- \frac{\varepsilon \alpha }{q}} \bigr) \bigr) \bigr] \\ &\quad =\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{1}^{\infty }x^{-\alpha } \biggl( \int _{0}^{1/x}t^{-\frac{\alpha }{2}- \frac{\varepsilon \alpha }{q}} (dt)^{\alpha } \biggr) (dx)^{\alpha } \\ &\quad =\frac{1}{\varGamma ^{2} (1+\alpha )} \int _{1}^{\infty }x^{-\alpha } \biggl( \frac{1}{ (\frac{1}{2}-\frac{\varepsilon }{q} ) ^{\alpha }} \int _{0}^{x^{-\frac{1}{2}+\frac{\varepsilon }{q}}}(du)^{ \alpha } \biggr) (dx)^{\alpha } \\ &\quad =\frac{1}{\varGamma ^{2} (1+\alpha ) (\frac{1}{2}-\frac{\varepsilon }{q} )^{\alpha }} \int _{1}^{\infty }x^{-\frac{3\alpha }{2}+\frac{ \varepsilon \alpha }{q}}(dx)^{\alpha }. \end{aligned}$$ Now, if \(g(x)=x^{-\frac{1}{2}+\frac{\varepsilon }{q}}\), then \([g'(x)]^{\alpha }=- (\frac{1}{2}-\frac{\varepsilon }{q} ) ^{-\alpha } x^{-\frac{3\alpha }{2}+\frac{\varepsilon \alpha }{q}}\), so by (6), we have $$ {{}_{1}I_{\infty }^{\alpha }} \bigl[x^{-\alpha } {{{}_{0}I_{1/x}^{\alpha }} } \bigl(t^{-\frac{\alpha }{2}-\frac{\varepsilon \alpha }{q}} \bigr) \bigr] =\frac{1}{ \varGamma ^{2} (1+\alpha ) (\frac{1}{2}-\frac{\varepsilon }{q} ) ^{2\alpha }}. $$ Finally, combining (18) and (19), we obtain (17), as claimed. □ Our next intention is to impose the condition on parameters \(A_{1}\) and \(A_{2}\) for which the constants appearing on the right-hand sides of inequalities (13) and (14) are the best possible. It should be noticed here that if $$ pA_{2}+qA_{1}=2-\lambda , $$ then the constant L from Theorem 2 reduces to the form without exponents, i.e., $$ L^{*}=k_{\alpha }(pA_{2}). 
$$ We will show that if the parameters \(A_{1}\) and \(A_{2}\) are related by (20), then the constants appearing on the right-hand sides of (13) and (14) are the best possible. In fact, if (20) holds, inequalities (13) and (14) reduce to $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \bigl( {{}_{0}I_{\infty }^{\alpha }} \bigl(K(x,y)f(x)g(y)\bigr)\bigr) \\ &\quad \leq L^{*} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha pqA_{1}-\alpha } f^{p} (x) \bigr) \bigr]^{\frac{1}{p}} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( y^{\alpha pqA_{2}-\alpha } g^{q}(y) \bigr) \bigr]^{\frac{1}{q}} \end{aligned}$$ $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \bigl[ y^{\alpha p(\lambda -1)+\alpha pqA_{1}-\alpha } \bigl({{}_{0}I_{\infty }^{\alpha }} \bigl(K(x,y)f(x)\bigr) \bigr)^{p} \bigr] \\ &\quad \leq \bigl(L^{*}\bigr)^{p} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl(x^{\alpha pqA_{1}-\alpha } f^{p} (x)\bigr) \bigr], \end{aligned}$$ where \(L^{*}\) is defined by (21). Suppose that \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), and let \(f,g\in C_{\alpha }(0,\infty )\) be non-negative functions. Further, let \(K\in C_{\alpha }(0,\infty )^{2}\) be a non-negative homogeneous function of degree \(-\alpha \lambda \), \(\lambda >0\), such that \(K(1,t)\) is bounded on \((0,1)\). If the parameters \(A_{1}\) and \(A_{2}\) satisfy the relation \(pA_{2}+qA_{1}=2-\lambda \), then the constants \(L^{*}\) and \((L^{*})^{p}\) appearing on the right-hand sides of (22) and (23) are the best possible. Let \(f(x)=x^{-\alpha qA_{1}-\frac{\varepsilon \alpha }{p}} \chi _{[1,\infty )}(x)\) and \(g(y)=y^{-\alpha pA_{2}-\frac{\varepsilon \alpha }{q}}\chi _{[1,\infty )}(y)\), where \(\chi _{A}\) stands for the characteristic function of a set A. Now, let us suppose that there exists a smaller constant \(0< M< L^{\ast }\) such that inequality (22) holds. Denote by J the right-hand side of inequality (22).
Then, with the above-defined functions f and g, we have $$\begin{aligned} J&=M \biggl(\frac{1}{\varGamma (1+\alpha )} \int _{1}^{\infty }x^{-\alpha \varepsilon -\alpha }(dx)^{\alpha } \biggr)^{\frac{1}{p}} \biggl(\frac{1}{\varGamma (1+\alpha )} \int _{1}^{\infty }y^{-\alpha \varepsilon -\alpha }(dy)^{\alpha } \biggr)^{\frac{1}{q}} \\ &=\frac{M}{\varepsilon ^{\alpha }\varGamma (1+\alpha )}. \end{aligned}$$ Further, utilizing the substitution \(t=\frac{y}{x}\) and taking into account Lemma 1, we obtain the following estimate: $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \bigl( {{}_{0}I_{\infty }^{\alpha }} \bigl(K(x,y)f(x)g(y)\bigr)\bigr) \\ &\quad = {{}_{1}I_{\infty }^{\alpha }} \bigl[ x^{-\alpha qA_{1}-\frac{\alpha \varepsilon }{p}} {}_{1}I_{\infty }^{\alpha } \bigl(y^{-\alpha pA_{2}-\frac{\alpha \varepsilon }{q}}K(x,y)\bigr) \bigr] \\ &\quad = {{}_{1}I_{\infty }^{\alpha }} \bigl[ x^{-\alpha (1+\varepsilon )} \bigl( {{}_{0}I_{\infty }^{\alpha }} \bigl(t^{-\alpha pA_{2}-\frac{\alpha \varepsilon }{q}}K(1,t)\bigr) - {}_{0}I_{1/x}^{\alpha } \bigl(t^{-\alpha pA_{2}-\frac{\alpha \varepsilon }{q}}K(1,t)\bigr) \bigr) \bigr] \\ &\quad \geq \frac{1}{\varepsilon ^{\alpha }\varGamma (1+\alpha )} \biggl( k_{\alpha } \biggl(pA_{2}+ \frac{\varepsilon }{q} \biggr)+o(1) \biggr). \end{aligned}$$ Moreover, from (22), (24), and (25), we get $$ k_{\alpha } \biggl(pA_{2}+\frac{\varepsilon }{q} \biggr)+o(1)\leq M. $$ Now, by letting \(\varepsilon \rightarrow 0^{+}\), it follows that relation (26) contradicts our assumption \(M< L^{*}=k_{\alpha }(pA_{2})\). Finally, the equivalence of inequalities (22) and (23) means that the constant \((L^{\ast })^{p} = [k_{\alpha }(pA_{2}) ]^{p}\) is also the best possible in (23). The proof is now completed. □ In this section, we apply our Theorems 2 and 3 to some particular settings.
More precisely, we will consider derived fractal Hilbert-type inequalities for some particular choices of homogeneous kernels and parameters \(A_{1}\) and \(A_{2}\) related by (20). Our first example refers to the kernel \(K_{1}(x,y)=1/\max \{x^{\alpha \lambda }, y^{\alpha \lambda }\}\), \(\lambda >0\), and the parameters \(A_{1}=\frac{2-\lambda }{2q}\), \(A_{2}=\frac{2-\lambda }{2p}\). Obviously, \(K_{1}\) is a homogeneous function of degree \(-\alpha \lambda \) and \(K_{1}(1,t)\) is bounded on \((0,1)\). Moreover, the parameters \(A_{1}\) and \(A_{2}\) satisfy condition (20), so the hypotheses of Theorem 3 are satisfied. In addition, the constant in inequality (22), denoted here by \(L_{1}^{*}\), reduces to $$ L_{1}^{*}=k_{\alpha } \biggl(1-\frac{\lambda }{2} \biggr)=\frac{2^{ \alpha +1}}{\lambda ^{\alpha }\varGamma (1+\alpha )}. $$ This follows by virtue of (5), after a straightforward calculation. The corresponding result covers fractal Hilbert-type inequalities presented in the introduction. Corollary 1 Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), \(\lambda >0\), and let \(f,g\in C_{\alpha }(0,\infty )\) be non-negative functions. 
Then the inequalities $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( {{}_{0}I_{\infty }^{\alpha }} \biggl(\frac{f(x)g(y)}{\max \{x^{\alpha \lambda }, y^{\alpha \lambda }\}} \biggr) \biggr) \\ &\quad \leq \frac{2^{\alpha +1}}{\lambda ^{\alpha }\varGamma (1+\alpha )} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha (p-1)-\frac{\alpha \lambda p}{2}} f^{p} (x) \bigr) \bigr]^{\frac{1}{p}} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( y^{\alpha (q-1)-\frac{\alpha \lambda q}{2}} g^{q}(y) \bigr) \bigr]^{\frac{1}{q}} \end{aligned}$$ $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( y^{\frac{\alpha p\lambda }{2}-\alpha } \biggl[{{}_{0}I_{\infty }^{\alpha }} \biggl(\frac{f(x)}{\max \{x^{\alpha \lambda }, y^{\alpha \lambda }\}} \biggr) \biggr]^{p} \biggr) \\ &\quad \leq \biggl(\frac{2^{\alpha +1}}{\lambda ^{\alpha }\varGamma (1+\alpha )} \biggr)^{p} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha (p-1)-\frac{\alpha \lambda p}{2}} f^{p} (x) \bigr) \bigr] \end{aligned}$$ hold and the constants appearing on their right-hand sides are the best possible. It should be noticed here that our Corollary 1 is an extension of the fractal Hilbert-type inequalities discussed in the introduction. More precisely, by substituting \(\lambda =1\) in inequalities (27) and (28), we obtain relations (3) and (4). Our next example deals with the homogeneous kernel \(K_{2}(x,y)=1/(x^{\alpha }+y^{\alpha })^{\lambda }\), \(\lambda >0\), and the parameters \(A_{1}=\frac{2-\lambda }{2q}\), \(A_{2}=\frac{2-\lambda }{2p}\). Clearly, \(K_{2}\) is a homogeneous function of degree \(-\alpha \lambda \) and \(K_{2}(1,t)\) is bounded on \((0,1)\). Now, if the parameters \(A_{1}\) and \(A_{2}\) are related by (20), then the constant in inequality (22), denoted here by \(L_{2}^{*}\), can be expressed in terms of the fractal beta function \(B_{\alpha }\).
More precisely, utilizing the integral representation (8), it follows that $$ L_{2}^{*}=B_{\alpha } \biggl(\frac{\lambda }{2}, \frac{\lambda }{2} \biggr), $$ so we have the following consequence. Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), and let \(f,g\in C_{\alpha }(0, \infty )\) be non-negative functions. Then the inequalities $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( {{}_{0}I_{\infty }^{\alpha }} \biggl(\frac{f(x)g(y)}{(x^{\alpha }+y^{\alpha })^{\lambda }} \biggr) \biggr) \\ &\quad \leq B_{\alpha } \biggl(\frac{\lambda }{2},\frac{\lambda }{2} \biggr) \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha (p-1)-\frac{\alpha \lambda p}{2}} f^{p} (x) \bigr) \bigr]^{\frac{1}{p}} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( y^{\alpha (q-1)-\frac{\alpha \lambda q}{2}} g^{q}(y) \bigr) \bigr]^{\frac{1}{q}} \end{aligned}$$ $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( y^{\frac{\alpha p\lambda }{2}-\alpha } \biggl[{{}_{0}I_{\infty }^{\alpha }} \biggl(\frac{f(x)}{(x^{\alpha }+y^{\alpha })^{\lambda }} \biggr) \biggr]^{p} \biggr) \\ &\quad \leq \biggl(B_{\alpha } \biggl(\frac{\lambda }{2}, \frac{\lambda }{2} \biggr) \biggr)^{p} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha (p-1)-\frac{\alpha \lambda p}{2}} f^{p} (x) \bigr) \bigr] \end{aligned}$$ hold and the constants appearing on their right-hand sides are the best possible. At the end of the paper, we discuss the case of the kernel \(K_{3}(x,y)=1/(x^{\alpha }+y^{\alpha })\). In this setting, the constant in inequality (22), denoted here by \(L_{3}^{*}\), becomes $$ L_{3}^{*}=B_{\alpha } (pA_{2},1-pA_{2} ), $$ provided that \(pA_{2}+qA_{1}=1\). The resulting pair of relations is given in the following result. Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p>1\), and let \(f,g\in C_{\alpha }(0, \infty )\) be non-negative functions.
If \(A_{1}\) and \(A_{2}\) are real parameters such that \(pA_{2}+qA_{1}=1\), then the inequalities $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( {{}_{0}I_{\infty }^{\alpha }} \biggl(\frac{f(x)g(y)}{x^{\alpha }+y^{\alpha }} \biggr) \biggr) \\ &\quad \leq B_{\alpha } (pA_{2},1-pA_{2} ) \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha pqA_{1}-\alpha } f^{p} (x) \bigr) \bigr]^{\frac{1}{p}} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( y^{\alpha pqA_{2}-\alpha } g^{q}(y) \bigr) \bigr]^{\frac{1}{q}} \end{aligned}$$ $$\begin{aligned} &{{}_{0}I_{\infty }^{\alpha }} \biggl( y^{\alpha pqA_{1}-\alpha } \biggl[{{}_{0}I_{\infty }^{\alpha }} \biggl( \frac{f(x)}{x^{\alpha }+y^{\alpha }} \biggr) \biggr]^{p} \biggr) \\ &\quad \leq \bigl(B_{\alpha } (pA_{2},1-pA_{2} ) \bigr)^{p} \bigl[ {{}_{0}I_{\infty }^{\alpha }} \bigl( x^{\alpha pqA_{1}-\alpha } f^{p} (x) \bigr) \bigr] \end{aligned}$$ hold and the constants appearing on their right-hand sides are the best possible. In particular, if \(A_{1}=A_{2}=\frac{1}{pq}\), then the constant \(L_{3}^{*}\) from Corollary 3 reduces to the form \(L_{3}^{*}=B_{\alpha } (\frac{1}{q}, \frac{1}{p} )\). In this case, the weight functions \(x\rightarrow x^{\alpha pqA_{1}-\alpha }\) and \(y\rightarrow y^{\alpha pqA_{2}-\alpha }\) in the above inequalities disappear. The resulting pair of non-weighted inequalities is an immediate fractal extension of the initial Hilbert-type inequalities (1) and (2). It should be noticed here that if \(\alpha =1\), then \(B_{\alpha } ( \frac{1}{q}, \frac{1}{p} )=B (\frac{1}{q}, \frac{1}{p} )=\frac{\pi }{\sin \frac{\pi }{p}}\). In the present study, we have established a unified treatment of fractal Hilbert-type inequalities. First, we have derived a pair of equivalent Hilbert-type inequalities with a general kernel and weight functions. A particular emphasis has been devoted to a class of homogeneous kernels. In addition, we have established conditions under which the constants appearing in the corresponding inequalities are the best possible.
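For the classical case α = 1 invoked above, the identity B(1/q, 1/p) = π/sin(π/p) can be verified numerically from the Gamma-function representation of the beta function; this standalone check is ours, not part of the paper.

```python
import math

# For alpha = 1 the fractal beta function reduces to the classical
# B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b); the loop below confirms
# B(1/q, 1/p) = pi / sin(pi/p) for several conjugate pairs 1/p + 1/q = 1.

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

for p in (1.5, 2.0, 3.0, 5.0):
    q = p / (p - 1.0)
    lhs = beta(1.0 / q, 1.0 / p)
    rhs = math.pi / math.sin(math.pi / p)
    print(abs(lhs - rhs) < 1e-9)  # True for each p
```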
As an application, our results have been compared with some previously known ones from the literature.

Acknowledgements. The first author would like to thank the Asia Research Center at the National University of Mongolia and the Korea Foundation of Advanced Studies for supporting this research (Project No. P2017–2479, 2018).

Authors: Tserendorj Batbold (Department of Mathematics, National University of Mongolia, Ulaanbaatar, Mongolia), Mario Krnić (Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia) and Predrag Vuković (Faculty of Teacher Education, University of Zagreb, Zagreb, Croatia).

Batbold, T., Krnić, M. & Vuković, P. A unified approach to fractal Hilbert-type inequalities. J Inequal Appl 2019, 117 (2019). https://doi.org/10.1186/s13660-019-2076-9

Keywords: Hilbert inequality, conjugate parameters, local fractional integral, homogeneous function, the best possible constant.
STELLAR POPULATIONS IN EXTERNAL GALAXIES Whang, Yun-Oh; Lee, Sang-Gak. By applying the population synthesis method, stellar populations in the nuclei of M31 and M32 are studied. We obtained five and four models for M31 and M32 respectively, for different main sequence turn-offs, keeping the astrophysical constraints as loose as possible. The best models for M31 and M32 are thought to have G0-5 and F5-8 main sequence turn-offs respectively. These models show that the main sequence stars outnumber the giants, which indicates the dwarf-dominance in external galactic nuclei. Even though there are some computational difficulties because of the non-uniqueness of the solution, two major points can be made in comparison with the previous papers. First, the ultraviolet deficiency expected from the conventional metal-rich population models is not detected in our models; instead, the ultraviolet radiation turns out to be somewhat higher than observed. Second, the Super Metal Rich (SMR) K giants make only a minor contribution to the integrated light of the program galaxies. That is, in our models, the SMR contribution is at best at the same level as that of normal giants, contrary to the SMR dominance of previous models. Since the loose astrophysical constraints are the major difference between our study and the previous ones, their validity should be re-examined carefully. CO OBSERVATIONS OF A HIGH VELOCITY CLOUD Kim, K.T.; Mihn, Y.C.; Hasegawa, T.I. We report a null detection of $^{12}CO$ emission from a sub-condensation in a High Velocity Cloud (HVC). As a consequence, an upper limit of $n(H_2)\frac{X(CO)}{DV/DR}{\leq}2{\times}10^{-5}$ was set. This implies that the $^{12}CO$ abundance is deficient by at least a factor of 10 if the HVC is predominantly molecular; otherwise the CO abundance of the HVC might be normal.
SURFACE BRIGHTNESS AND MASS DISTRIBUTION OF THE LATE TYPE SPIRAL GALAXY NGC 2403 Lee, Yoo-Mi; Chun, Mun-Suk. The luminosity profile of the late type spiral galaxy NGC 2403 was obtained using the PDS scan of the plate. Some physical parameters (scale length, total magnitude, central brightness, disk to bulge ratio and concentric indices) were calculated from the brightness distribution. The total mass and the mass to luminosity ratio were estimated from the fitting of various mass models. THE AGE-METALLICITY RELATION FOR FIELD DISK STARS IN THE SOLAR NEIGHBORHOOD Lee, See-Woo; Ann, Hong-Bae; Sung, Hwan-Kyung. The ages of field stars given in the catalogue of Cayrel de Strobel et al. (1985) are derived by five different methods in combination with theoretical isochrones. By using these ages and the metal abundances homogenized by Lee and Choe (1988), the age-metallicity relations are obtained. For disk stars of [Fe/H] > -0.9, the present age-metallicity relations are nearly consistent with those given by Twarog (1980) and Carlberg et al. (1985). DETERMINATION OF THE DISTANCE TO B 361 BY A MODIFIED VERSION OF THE WOLF DIAGRAM Hong, S.S.; Sohn, D.S. Current estimates, based on the same star-count analysis, of the distance to the globule Barnard 361 range from 300 pc to 650 pc. All the problems associated with the estimates have been fully rectified in this study, and a modification has been made to the classical Wolf diagram to improve the accuracy of the distance determination. A reference field was carefully selected close to the globule but well outside the globule boundary, and star counts for this field were performed on the blue POSS plate in order to set up the reference magnitude sequence appropriate to the general area of B 361. From the reference sequence, the stellar density function has been derived specifically for the direction toward the globule.
Correction was made for the general interstellar extinction, and the luminosity function with Wielen's dip was adopted. The resulting density function clearly reveals the existence of the local Cygnus-Orion arm in the direction of B 361, at about 700 pc from the Sun. Analysis of the star-count data for the program field locates the globule at a distance of $600{\pm}50$ pc; thus, the globule is an object located in the Cygnus-Orion arm, residing somewhat toward its leading edge.
June 2014, 7(3): 543-556. doi: 10.3934/dcdss.2014.7.543 Free-congested and micro-macro descriptions of traffic flow Francesca Marcellini, Università di Milano-Bicocca, Dipartimento di Matematica e Applicazioni, Via Cozzi 53, 20125 Milano, Italy. Received May 2013 Revised July 2013 Published January 2014

We present two frameworks for the description of traffic, both consisting in the coupling of systems of different types. First, we consider the Free--Congested model [7,11], where a scalar conservation law is coupled with a $2\times2$ system. Then, we present the coupling of a micro- and a macroscopic model, the former consisting in a system of ordinary differential equations and the latter in the usual LWR conservation law, see [10]. A comparison between the two different frameworks is also provided.

Keywords: Traffic models, macroscopic traffic models, microscopic traffic models, hyperbolic systems of conservation laws, ordinary differential equations.

Mathematics Subject Classification: 35L65, 90B2.

Citation: Francesca Marcellini. Free-congested and micro-macro descriptions of traffic flow. Discrete & Continuous Dynamical Systems - S, 2014, 7 (3) : 543-556. doi: 10.3934/dcdss.2014.7.543

A. Aw, A. Klar, T. Materne and M. Rascle, Derivation of continuum traffic flow models from microscopic follow-the-leader models, SIAM J. Appl. Math., 63 (2002), 259. doi: 10.1137/S0036139900380955.
A. Aw and M. Rascle, Resurrection of "second order" models of traffic flow, SIAM J. Appl. Math., 60 (2000), 916. doi: 10.1137/S0036139997332099.
P. Bagnerini, R. M. Colombo and A. Corli, On the role of source terms in continuum traffic flow models, Math. Comput. Modelling, 44 (2006), 917. doi: 10.1016/j.mcm.2006.02.019.
P. Bagnerini and M. Rascle, A multiclass homogenized hyperbolic model of traffic flow, SIAM J. Math. Anal., 35 (2003), 949. doi: 10.1137/S0036141002411490.
S. Benzoni Gavage and R. M. Colombo, An $n$-populations model for traffic flow, Europ. J. Appl. Math., 14 (2003), 587. doi: 10.1017/S0956792503005266.
S. Benzoni-Gavage, R. M. Colombo and P. Gwiazda, Measure valued solutions to conservation laws motivated by traffic modelling, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 462 (2006), 1791. doi: 10.1098/rspa.2005.1649.
S. Blandin, D. Work, P. Goatin, B. Piccoli and A. Bayen, A general phase transition model for vehicular traffic, SIAM J. Appl. Math., 71 (2011), 107. doi: 10.1137/090754467.
R. M. Colombo, Hyperbolic phase transitions in traffic flow, SIAM J. Appl. Math., 63 (2002), 708. doi: 10.1137/S0036139901393184.
R. M. Colombo, P. Goatin and F. S. Priuli, Global well posedness of traffic flow models with phase transitions, Nonlinear Anal., 66 (2007), 2413. doi: 10.1016/j.na.2006.03.029.
R. M. Colombo and F. Marcellini, A mixed ode-pde model for vehicular traffic, preprint, (2013).
R. M. Colombo, F. Marcellini and M. Rascle, A 2-phase traffic model based on a speed bound, SIAM J. Appl. Math., 70 (2010), 2652. doi: 10.1137/090752468.
R. M. Colombo and A. Marson, A Hölder continuous ODE related to traffic flow, Proc. Roy. Soc. Edinburgh Sect. A, 133 (2003), 759. doi: 10.1017/S0308210500002663.
L. C. Edie, Car-following and steady-state theory for noncongested traffic, Operations Res., 9 (1961), 66. doi: 10.1287/opre.9.1.66.
P. Goatin, The Aw-Rascle vehicular traffic flow model with phase transitions, Math. Comput. Modelling, 44 (2006), 287. doi: 10.1016/j.mcm.2006.01.016.
M. Godvik and H. Hanche-Olsen, Existence of solutions for the Aw-Rascle traffic flow model with vacuum, J. Hyperbolic Differ. Equ., 5 (2008), 45. doi: 10.1142/S0219891608001428.
D. Helbing and M. Treiber, Critical discussion of synchronized flow, Cooper@tive Tr@nsport@tion Dyn@mics, 1 (2002).
B. S. Kerner, Phase transitions in traffic flow, in Traffic and Granular Flow '99, (2000), 253. doi: 10.1007/978-3-642-59751-0_25.
B. L. Keyfitz and H. C. Kranzer, A system of nonstrictly hyperbolic conservation laws arising in elasticity theory, Arch. Rational Mech. Anal., 72 (), 219. doi: 10.1007/BF00281590.
K. M. Kockelman, Modeling traffic's flow-density relation: Accommodation of multiple flow regimes and traveler types, Transportation, 28 (2001), 363.
C. Lattanzio and B. Piccoli, Coupling of microscopic and macroscopic traffic models at boundaries, Math. Models Methods Appl. Sci., 20 (2010), 2349. doi: 10.1142/S0218202510004945.
J. P. Lebacque, S. Mammar and H. Haj-Salem, Generic second order traffic flow modelling, in Transportation and Traffic Theory: Proceedings of the 17th International Symposium on Transportation and Traffic Theory, (2007).
R. J. LeVeque, Numerical Methods for Conservation Laws, Second edition, (1992). doi: 10.1007/978-3-0348-8629-1.
M. J. Lighthill and G. B. Whitham, On kinematic waves. II. A theory of traffic flow on long crowded roads, Proc. Roy. Soc. London. Ser. A., 229 (1955), 317. doi: 10.1098/rspa.1955.0089.
P. I. Richards, Shock waves on the highway, Operations Res., 4 (1956), 42. doi: 10.1287/opre.4.1.42.
B. Temple, Systems of conservation laws with invariant submanifolds, Trans. Amer. Math. Soc., 280 (1983), 781. doi: 10.1090/S0002-9947-1983-0716850-2.
H. Zhang, A non-equilibrium traffic model devoid of gas-like behavior, Transportation Research Part B: Methodological, 36 (2002), 275. doi: 10.1016/S0191-2615(00)00050-3.
CommonCrawl
Color sensors and their applications based on real-time color image segmentation for cyber physical systems

Neal N. Xiong, Yang Shen, Kangye Yang, Changhoon Lee & Chunxue Wu (ORCID: orcid.org/0000-0001-6358-9420)

Color information plays an important role in color image segmentation and in real-time color sensing, affecting both the quality of video image segmentation and the accuracy of real-time temperature readings. In this paper, a novel real-time color image segmentation method is proposed, based on color similarity in the RGB color space. Using the color and luminance information in RGB space, the dominant color is determined first; color similarity is then calculated with the proposed color-component method, which produces a color-class map. The information in this color-class map is used to classify the pixels. Because a thermal ink's color value changes in real time as the temperature changes, the segmentation result for thermal ink can serve as a real-time color sensor. We also propose a method of color correction and light-source compensation to counter potential measurement inaccuracy. We discuss the application of the proposed segmentation method, combined with a thermal-ink color sensor, to real-time color image segmentation for cyber physical systems (CPS), taking fire detection as an example, and summarize a new method for identifying fire in video based on these characteristics. Experiments show that the proposed method is effective for vision-based fire detection and identification in videos; the results are accurate and suitable for real-time analysis.

Cyber physical systems (CPS) unify computing processes with physical processes; they integrate computation, communication, and control into the next generation of intelligent systems.
A CPS interacts with the physical process through a human-computer interface and uses cyberspace to manipulate a physical entity in a remote, reliable, real-time, secure, and cooperative way. CPS encompasses ubiquitous environment awareness, embedded computing, network communication, and networked control, endowing physical systems with computing, communication, precise control, remote collaboration, and autonomy. It focuses on the close integration and coordination of computing resources and physical resources, mainly for intelligent systems such as robots and intelligent navigation. At present, cyber physical systems remain a relatively new research field. With the continuous development of computer technology, network technology, and mathematical theory, digital image processing in real-time systems has become an important component of CPS and has been widely applied in fields such as biomedicine, satellite remote sensing, and image communication. Image segmentation plays an important role in image processing. In real-time systems, image segmentation is the technology and process of dividing an image into a number of specific, distinctive regions and extracting the regions of interest. Video image segmentation is an important issue in computer vision and a classic, difficult problem [1, 2]. It has been applied in face identification, fingerprint identification, fire detection and identification, machine vision, medical imaging, and so on. For example, CT (computed tomography), widely used in hospitals, relies on image segmentation to help diagnose patients effectively and rapidly in real time. Face identification and machine vision currently receive the most attention from researchers. There are many video image segmentation algorithms for different applications.
So far, however, there is no uniform solution or standard for video image segmentation, nor a complete theory guiding how to select an appropriate segmentation method based on image characteristics. Normally, to solve a specific segmentation problem more effectively, domain knowledge from the relevant field is brought in. According to gray level, image segmentation can be divided into gray-scale image segmentation and color image segmentation. Compared with gray-scale images, color images carry not only brightness but also color information, such as hue and saturation. In many cases, the target cannot be extracted from an image using gray information alone, whereas the human eye can recognize thousands of colors, so segmentation can be obtained quickly using color information [3, 4]. It is therefore worthwhile to study color image segmentation, which has broad prospects. There are many approaches to color image segmentation, such as histogram-threshold methods, region-based methods, fuzzy clustering, and edge detection [5–8]. These methods are combined with different color spaces according to the needs of segmentation [9, 10]. Therefore, in color image segmentation, we should first choose the color space and then select an appropriate segmentation method. Images are highly susceptible to illumination and noise, so both must be taken into account when segmenting. The apparent color of an image changes when the lighting changes, so using color information alone, without brightness information, leads to inaccurate segmentation. To obtain a good segmentation, color information and brightness information should be used together.
In this paper, a novel image segmentation method is proposed that separates foreground and background in the RGB color space using both color and brightness information, yielding better segmentation results. As modern industrial production develops toward high speed and automation, color recognition has been widely used in industrial inspection and automatic control, and color identification, long performed by the human eye, is increasingly handled by color sensors. A color sensor detects color by comparing the object's color with a reference color; if they agree within a certain error range, it outputs the detection result. Color sensors are applied in many fields: monitoring production processes and product quality in industry [11]; achieving true-color reproduction unaffected by environmental temperature, humidity, paper, and toner in electronic reproduction; serving as disease indicators in medicine; and, in commodity packaging, automatically detecting the colors of adjacent labels and counting items of each color with an auto-counter [12]. Many kinds of color sensors exist. A typical one is the TCS230 [13], the latest color sensor from TAOS. It can recognize and detect colors and offers many new features compared with other color sensors. It is suitable for colorimetric measurement applications such as medical diagnosis, color printing, computer monitor calibration, and process control of cosmetics, paint, textiles, and printing materials. Based on the working principle of color sensors and the proposed image segmentation method, we design a similar color-sensing function using the thermal-ink characteristic in this paper.
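The compare-within-tolerance logic of a color sensor described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the Euclidean distance measure and the tolerance value are assumptions.

```python
import math

# Sketch of a color sensor's compare-within-tolerance logic: a measured RGB
# value is matched against a reference color, and the sensor "fires" only
# when the distance falls within the allowed error range.
def color_match(measured, reference, tolerance):
    """Return True when `measured` agrees with `reference` within `tolerance`."""
    distance = math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)))
    return distance <= tolerance

# Example: a slightly noisy red reading still matches the red reference,
# while an olive reading does not.
print(color_match((250, 8, 6), (255, 0, 0), tolerance=15))    # True
print(color_match((128, 128, 0), (255, 0, 0), tolerance=15))  # False
```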
The characteristic of thermal ink is that its color value changes in real time as the temperature changes; the segmentation result for thermal ink can therefore act as a real-time color sensor. We take the color information from the segmented thermal-ink region; after correcting the measured ink color and comparing it with the standard color, the correct color value is obtained, and the temperature is finally output via the known correspondence between thermal-ink color and temperature. This design can be used for measuring indoor and outdoor temperature, for food labels, and so on. It can also be used to control the temperature of greenhouse plants: by monitoring the color change in real time, we identify temperature changes and adjust the temperature to obtain a better yield. The method is real-time and fast, and requires no network of sensor nodes. The rest of the paper is organized as follows. Section 2 analyzes color spaces, video image capture, recognition and segmentation, existing algorithms, and color sensors. Section 3 describes the proposed method and color correction. Section 4 analyzes the experimental results of the proposed method and presents its application. Section 5 draws conclusions and gives suggestions.

The analysis work

In color image segmentation, the first step is to choose a color space. Known color models include RGB, HSI, HSV, CMYK, CIE, YUV, and so on. The RGB model is the color model most commonly used by hardware, while the HSI model is the most common for color processing; both are often used in image processing [14, 15]. RGB space is spanned by the three primary colors red, green, and blue; other colors are composed from these three. The RGB model is represented in a Cartesian coordinate system, as shown in Fig. 1. The three axes stand for R, G, and B, respectively, and every point in the three-dimensional space gives the brightness values of the three components.
Each brightness value lies between zero and one.

Fig. 1 RGB color model

In Fig. 1, the origin is black, with value (0,0,0), while the farthest vertex from the origin, with value (1,1,1), is white. The straight line between black and white, called the gray line, carries gray values changing from black to white. The remaining three corners represent the complementary colors of the three primaries: yellow, cyan, and magenta. The three components of the RGB color space are highly correlated, and all change whenever the brightness changes. RGB is a non-uniform color space, so the perceived difference between two colors does not correspond to the distance between the two points in the color space. Thus, in image processing, the RGB color space is often converted to other color spaces, such as HSI, HSV, CIE, and Lab, by linear or nonlinear transforms. However, the images we collect are usually in RGB, and color space conversion increases the amount of computation. Many segmentation methods work directly in RGB; for example, license plate location [16] finds the plate region accurately by computing the contrast among the RGB components, reducing the computation. The HSI color model, put forward by Munsell, suits human visual characteristics: H (hue) distinguishes colors, S (saturation) describes the depth of a color, and I (intensity) describes its brightness. The model has two important characteristics: (1) the I component is independent of the color information of the image, and (2) the H and S components are closely linked to human perception. These make it suitable for image processing that exploits the visual system's perception of color, and the H component is often used to segment color images. The model is shown in Fig. 2.

Fig. 2 HSI color model

To process an image in HSI space, the image must first be converted to HSI.
The conversion formula (by geometric derivation) is Eq. (1):
$$ \begin{aligned} H & =\begin{cases} \theta, & G \geq B \\ 2\pi - \theta, & G < B, \end{cases}\\ &\text{where}~ \theta=\cos^{-1}\left(\frac{(R-G)+(R-B)}{2\sqrt{(R-B)(G-B)+(R-G)^{2}}}\right), \\ I & =\frac{R+G+B}{3}, \\ S & =1-\frac{3\min(R,G,B)}{R+G+B}=1-\frac{\min(R,G,B)}{I}. \end{aligned} $$
The transformation from the RGB model to the HSI model in Eq. (1) requires considerable computation. When the brightness is zero, saturation is meaningless; when the saturation is zero, hue is meaningless. The conversion introduces a singularity in hue that cannot be eliminated [17]. The singularity may make nearby hue values discontinuous, so low-saturation pixels are ignored during processing, leading to incorrect segmentation [18]. As HSI matches human visual characteristics, many researchers have studied color image segmentation in the HSI model. Reference [19] used the saturation and brightness information of the HSI model for texture image segmentation, combining fractal theory and a BP neural network.

Video image capture

Generally, there are two ways to capture video images: (1) using a video capture card with its SDK development tools — this depends on the capture card and camera type and is neither flexible nor universal; and (2) using Microsoft Windows and the VFW (Video for Windows) SDK with Visual C++ — a pure software approach to acquiring, reading, and writing video streams that does not depend on the type of vision sensor and offers better flexibility and versatility [20, 21].
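The RGB-to-HSI conversion of Eq. (1) can be sketched directly. This is an illustrative implementation, assuming R, G, B normalized to [0, 1] and H returned in radians; the handling of the singularity (gray pixels) is a convention, not prescribed by the paper.

```python
import math

# Sketch of the RGB-to-HSI conversion of Eq. (1).
def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = (r - g) + (r - b)
    den = 2.0 * math.sqrt((r - b) * (g - b) + (r - g) ** 2)
    if den == 0:
        h = 0.0  # hue is undefined at the singularity (gray pixels); use 0
    else:
        theta = math.acos(max(-1.0, min(1.0, num / den)))
        h = theta if g >= b else 2.0 * math.pi - theta
    return h, s, i

# Pure red maps to H = 0, S = 1, I = 1/3.
print(rgb_to_hsi(1.0, 0.0, 0.0))  # (0.0, 1.0, 0.3333333333333333)
```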
This paper uses OpenCV's CVCAM technology to acquire the video stream from the vision sensor, process it, and play it back (display) at the same time, and likewise to read, process, and play file streams.

Introduction to the Open Source Computer Vision Library (OpenCV)

OpenCV is an open source computer vision library originally funded by Intel. It consists of a set of C functions and C++ classes and provides an easy-to-use computer vision framework with a rich set of routines covering image processing, computer vision, pattern recognition, and artificial intelligence. Implementing image processing, signal processing, structure analysis, motion detection, camera calibration, computer graphics, 3D reconstruction, and machine learning, its large number of generic algorithms are highly efficient. The OpenCV library has the following advantages:

- Cross-platform: Windows, Linux, Mac OS, iOS, Android; independent of the operating system, hardware, and graphics manager;
- Free: open source, for commercial and non-commercial applications alike;
- Fast: written in C/C++, suitable for developing real-time applications;
- Easy to use: general image/video loading, saving, and retrieval modules;
- Flexible: good scalability, with both low-level and high-level development kits.

OpenCV 1.0 consists of the following six modules:

- CXCORE: basic data structures and algorithms;
- CV: main OpenCV functions;
- CVAUX: experimental auxiliary functions;
- HighGUI: graphical interface functions;
- ML: machine learning functions;
- CVCAM: camera interface functions.

Because the OpenCV library functions are optimized C code, the code is not only simple and efficient but can also take full advantage of multi-core processors.
Therefore, this paper uses the Visual C++ development environment and OpenCV for video image capture, processing, and display.

Video recognition

Video recognition mainly comprises three links: front-end video acquisition and transmission, video retrieval, and back-end analysis. It requires the front-end camera to provide a clear, stable video signal, since signal quality directly affects recognition; an embedded intelligent analysis module then detects, analyzes, and identifies the video frames, filters out interference, and marks and tracks targets in abnormal situations. The intelligent video analysis module is based on artificial intelligence and pattern recognition algorithms, and has been applied in fire recognition systems [22]. Segmenting the flame object is a key problem in video-based fire recognition and directly impacts recognition accuracy [23]. Flame segmentation is based precisely on analyzing the characteristics of fire images. This paper introduces a new flame segmentation method based on an area threshold, using digital image processing and pattern recognition techniques. It can further judge whether a fire has occurred from characteristic information such as flame color, spreading area, inter-frame similarity change, and smoke. Experiments prove that the method is robust: it segments the flame effectively from an image sequence and reduces the false and missed alarms of the fire surveillance system, so it is effective even for complex, large outdoor scenes.
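An area-threshold step of the kind described above can be sketched as follows: after a binary flame mask has been produced, connected regions smaller than a minimum area are discarded as noise. The 4-connected flood fill and the `min_area` parameter are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import deque

# Sketch of an area-threshold filter: keep only connected regions of a binary
# mask whose pixel count reaches `min_area`; smaller regions are treated as noise.
def filter_small_regions(mask, min_area):
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                region, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # BFS flood fill over 4-connected neighbors
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(region) >= min_area:  # keep only large-enough regions
                    for y, x in region:
                        out[y][x] = 1
    return out

# A 1-pixel speck is removed; the 3-pixel blob survives (min_area = 2).
mask = [[1, 0, 0],
        [0, 1, 1],
        [0, 1, 0]]
print(filter_small_regions(mask, 2))  # [[0, 0, 0], [0, 1, 1], [0, 1, 0]]
```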
Using video recognition technology to analyze surveillance video effectively can detect a fire early so that it can be handled as soon as possible, reducing economic losses and safeguarding people's lives and property. Both economically and technically, video fire recognition has distinct advantages and will remain an important research direction for fire identification. Currently, owing to different hardware platforms and research directions, video fire recognition follows several lines of research: analyzing only the static characteristics of the flame, such as shape, color, and texture; analyzing dynamic characteristics such as inter-frame similarity, spreading trend, edge changes, whole-body movement, and layered changes; or adding some simple area-based criteria to the dynamic analysis [24]. Dynamic approaches judge the flame by comparing two or more adjacent frames of the video, but their analysis of the properties of a single flame image is relatively weak. Static approaches focus on a single picture, precisely analyzing the geometric properties of the flame to reach a decision; this is faster, but because it ignores the trend of the flame across several consecutive frames, the judgment inevitably contains errors. To remedy these defects, and based on analysis of fire and its image features, this paper proposes a new flame segmentation method based on an area threshold. The method not only removes noise but also extracts the target object rapidly and accurately. It can further judge whether a fire has occurred from characteristic information such as flame color, spreading area, similarity change, and smoke.
Experimental results show that the method greatly improves the reliability and accuracy of fire judgment and reduces false alarms and missed detections, shortening the recognition time.

Video segmentation

Video segmentation means separating the objects in a video sequence that are important or of interest (video objects, VOs) from the background — equivalently, extracting regions with consistent attributes while distinguishing foreground from background. Video can be regarded as a kind of 3D image: a series of time-continuous 2D images. From the spatial point of view, video segmentation mainly uses both spatial and temporal information to pick out the independently moving regions of the video frame by frame [25]. Video segmentation is the premise and foundation of other video processing tasks, such as video coding, video retrieval, and video database operations, and segmentation quality directly affects the later stages, so research on video segmentation is important and challenging. Its main purpose is to separate the moving foreground of interest from the background. At present, there are many approaches, such as the image difference method, the time difference method, and the optical flow method. The image difference method subtracts a reconstructed background image from the original image. The time difference method builds on differences between frames, introducing relations across the spatio-temporal domain. The optical flow method uses the optical flow characteristics of the moving object over time to extract and track it efficiently [26].
Comparing these methods, the image difference method has low computational complexity, is little affected by illumination, and makes low demands on hardware, and it detects well in most cases. Its key issue is how to reconstruct a complete background image. The background reconstruction method mentioned in the literature requires the pixel values at a given coordinate from at least 25 video frames to reconstruct the background; this takes a long time and is not conducive to fast segmentation. Since the pixels of the moving foreground region generally have different gray values at the same coordinates in successive frames — i.e., the frame difference is larger in the foreground region than in the stationary background — the moving foreground region can be obtained by computing the gray-level difference between successive frames. The general steps of video segmentation are as follows: first, simplify the original video data and remove noise to facilitate segmentation, using low-pass, median, or morphological filtering; next, extract features of the video image, including color, texture, motion, frame difference, and so on; then, based on certain uniformity criteria, make the splitting decision from the extracted features to classify the video image; and finally, post-process to filter noise and extract boundaries accurately, obtaining the final segmentation.

The analysis of segmentation algorithms

Threshold segmentation [27] is one of the most commonly used parallel region techniques and one of the most widely used methods in image segmentation. It transforms an image G into an output image F as follows:
$$ F(i,j)=\begin{cases} 1, & G(i,j) \geq T \\ 0, & G(i,j) < T, \end{cases} $$
where T is the threshold value.
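Combining the frame-difference idea with the threshold rule above gives a simple motion segmenter: pixels whose inter-frame gray-level difference reaches T are marked as moving foreground. This is an illustrative sketch (frames as plain lists of gray values); the threshold value is an assumption.

```python
# Sketch of threshold segmentation (F(i,j) = 1 where G(i,j) >= T, else 0)
# applied to the absolute difference of two successive frames.
def threshold_segment(image, t):
    return [[1 if px >= t else 0 for px in row] for row in image]

def frame_difference(prev_frame, curr_frame, t):
    diff = [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(prev_frame, curr_frame)]
    return threshold_segment(diff, t)

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],
        [10, 95, 12]]
# Only the two pixels that changed markedly are flagged as foreground.
print(frame_difference(prev, curr, t=30))  # [[0, 1, 0], [0, 1, 0]]
```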
For object pixels F(i,j)=1 and for background pixels F(i,j)=0. Thus, the key of the threshold segmentation algorithm is determining the threshold value. Once the threshold is determined, it is compared with each pixel's gray value, all pixels are divided concurrently, and the segmentation result is output directly as image regions. Threshold segmentation has the advantages of simple calculation, high efficiency, and high speed, and has been widely used in applications where efficiency matters, such as hardware implementations. Researchers have studied all kinds of thresholding techniques, including global thresholds, adaptive thresholds, and optimal thresholds. In color image segmentation, we must also consider the color values of pixels — the color information and the brightness — which influence the segmentation result, and many researchers have studied this problem. Cheng and Quan [18] put forward a color-image background-difference method based on the HSI model. Using the independence of hue (H), saturation (S), and intensity (I) in the HSI model, it creates brightness information from the H and S components and extracts a precise foreground using a dynamic threshold on that brightness information. Illumination changes impair the accuracy of moving-object detection, so that paper eliminates their effect using HSI. The results show that the method is robust to noise and illumination changes and handles brightness variation well, but transforming the color space to HSI increases the amount of computation. Huang et al. [28] describe an algorithm for traffic sign segmentation.
It considers the influence of illumination and color-space transformations; after analyzing many traffic sign pictures and studying the relationships among color pixels in the RGB color space, the paper puts forward a traffic sign segmentation method based on an RGB model. The method copes well with noise and illumination in traffic sign segmentation, and its results are precise and real-time, but it needs extensive study of traffic signs to obtain the empirical thresholds. In this paper, we carefully consider the factors influencing image segmentation, including illumination, noise, and color space. An algorithm for color image segmentation based on color similarity in the RGB color space is presented: we compute each pixel's similarity to the dominant color, form a classification map, and finally obtain the segmentation.

Color sensor and color correction

Color has always played an important role in our life and production activities. The color of an object carries much information, so it is easily affected by many factors, such as the illuminating and reflected light, the light source direction, the viewing direction, and the performance of the sensor [29]; a change in any of these parameters changes the observed color. The standard method of color measurement obtains the sample's color by measuring its tristimulus values with a spectrophotometric instrument. At present, there are two basic types of sensor based on the principles of color identification: the RGB color sensor (red, green, blue), which mainly detects tristimulus values, and the chromatic aberration (color difference) sensor, which detects the color difference between the object under test and a standard color. Such devices come in diffuse, through-beam, and optical fiber types and are encapsulated in various metal and polycarbonate housings.
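The color-similarity classification just outlined — assigning each pixel to its nearest dominant color to form a class map — can be sketched as follows. The frequency-based dominant-color choice and the Euclidean similarity measure are illustrative assumptions, not the paper's exact procedure.

```python
import math

# Sketch of color-similarity classification: find dominant colors by simple
# frequency counting, then label each pixel with the index of the nearest
# dominant color, producing a color-class map.
def dominant_colors(image, k=2):
    counts = {}
    for row in image:
        for px in row:
            counts[px] = counts.get(px, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:k]

def class_map(image, palette):
    def nearest(px):
        return min(range(len(palette)),
                   key=lambda i: math.dist(px, palette[i]))
    return [[nearest(px) for px in row] for row in image]

red, blue = (200, 30, 30), (20, 40, 180)
img = [[red, red, blue],
       [red, (190, 35, 40), blue]]  # one noisy reddish pixel
palette = dominant_colors(img)      # [red, blue], by frequency
print(class_map(img, palette))      # [[0, 0, 1], [0, 0, 1]]
```

The noisy pixel (190, 35, 40) is still classified with the red-dominated class, which is the point of similarity-based (rather than exact-match) labeling.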
RGB color sensors have two measurement modes. One analyzes the proportions of red, green, and blue: however the detection distance changes, only the light intensity changes, not the proportions of the three color components, so this mode can be used even where the target vibrates mechanically. The other mode detects using the reflected intensities of the red, green, and blue primaries directly; it can detect tiny color differences, but the sensor is affected by the mechanical position of the target. Most RGB color sensors have a teach-in function that makes setup very easy, and most have a built-in chart and a threshold value that determine the operating characteristics. Color can be measured more accurately using panchromatic color-sensitive devices and correlation analysis. Typically, obtaining the color tristimulus values requires at least three photodiodes with three corresponding filters [30], so the structure and circuits are complicated.

Partial color detection

In a color sensor, the main point is how to detect a color cast. There is a disparity between the real color of the object surface and the color acquired by the imaging device. This disparity, called a partial color (color cast), is caused by the surrounding environment, such as illumination and noise, and its degree is related to the color temperature of the ambient light. Color temperature [31] is a measure describing the color of a light source: when the color of the light from a source matches the radiation color of a black body at a certain temperature, that temperature is called the source's color temperature. Under different light sources, such as natural light, tungsten filament lamps, and halogen lamps, the same color does not look the same; the difference is caused by the sources' different "color temperatures."
Generally, an image appears bluish when the light's color temperature is higher and reddish when it is lower. How to make collected images correctly reflect real colors is therefore a key research question. Before correcting the color, we must know whether the image has a color cast, how to detect it, and its degree. At present, there are several representative color cast detection methods, including histogram statistics [32], the gray balance method [33], and the white balance method [33]; all can detect whether an image has a color cast. Histogram statistics reflects the overall color appearance of the image: it gives the average brightness of the three channels of the RGB color space, and we judge whether the original image has a color cast from the average brightness of the R, G, and B channels. If one component's brightness is the highest, the whole image takes on that component's color; that is, if the brightness of the G component is the largest, the whole image appears greenish. But since the causes of color cast are complex and application-dependent, this method rarely yields a comprehensive, accurate judgment. The gray balance method assumes that the means of R, G, and B over the whole image are equal, i.e., that the image averages to a neutral gray. It averages the brightness of each channel statistically, converts to the Lab color space, obtains the corresponding Lab coordinates, computes the color distance to the neutral point, and judges whether there is a color cast. But when the scene is very light or very dark, or the image's colors are nearly uniform, the means of R, G, and B are not equal. The white balance method handles images containing specular reflections: it assumes that the specular part of a mirror-like surface, or a white area, reflects the color of the light source.
We take the maximum brightness value of every channel, convert it into the Lab color space, obtain the corresponding Lab coordinates, compute the distance to the neutral point, and judge whether there is a color cast. But the result is distorted when the photographed scene has no white or specular part. Each of these methods is suitable only within a certain scope, so measuring the degree of a color cast from the average image color or the maximum brightness alone is limited, and other detection methods have been developed. After color cast detection, the next step is color correction. Color correction aims to describe the intrinsic color of an object under different lighting conditions, and it has been applied to medical images, remote sensing images, mural images, license plates, and many other images. The classic methods include gray world color correction [34] and perfect reflection color correction [35]. Gray world color correction assumes the photographed image is colorful, so that the statistical mean of every channel should be equal and the average color is a gray. We calculate the mean of each channel of the image, keep component G unchanged, and use the mean values of components R and B as the basis of the correction. This method cannot be used on an image dominated by a single color. Perfect reflection color correction rests on the observation that an object itself has no color; it shows color by absorbing, reflecting, and transmitting light of different wavelengths. If the object is white, all the light is reflected, and the white object or area is called a perfect reflector. Perfect reflection theory is based on the hypothesis that the perfect reflector can serve as the standard white of an image: whatever the illumination, the R, G, and B values of a white object in the image are the largest. Other colors are then corrected relative to the perfect reflector.
These two color correction methods suit most corrections and are relatively simple to compute, but they sometimes fail to recover the true object color. With the various application scenarios of color correction, many scholars have proposed novel methods. Luz et al. propose a method based on a Markov Random Field (MRF), used to represent the relationship between a color-depleted image and a color image, to enhance image color for underwater applications [36]. The parameters of the MRF model are learned from training data, and the most likely color distribution for each pixel in a given color-depleted image is then inferred using belief propagation (BP). This allows the system to adapt the color restoration algorithm to the current environmental conditions and to the task requirements. Colin et al. propose a method for correcting the color of multiview video sets as a preprocessing step before compression [37]. Unlike previous work, where one of the captured views is used as the color reference, they correct all views to match the average color of the set of views. Block-based disparity estimation finds matching points between all views in the video set, and the average color is calculated over these matching points. A least-squares regression is then performed for each view to find a function that makes the view most closely match the average color. Rizzi et al. propose a new algorithm for unsupervised enhancement of digital images with simultaneous global and local effects, called ACE for Automatic Color Equalization [38]. It is based on a computational model of the human visual system that merges the two basic "Gray World" and "White Patch" global equalization mechanisms. Like the human visual system, ACE adapts to a wide range of lighting conditions and effectively extracts visual information from the environment.
It has shown promising results on different equalization tasks, e.g., achieving color and lightness constancy, data-driven stretching of the image dynamic range, and contrast control. Yoon et al. use the temporal difference ratio of HSV color channels to compensate for color distortion between consecutive frames [39]. Experimental results show that their method can be applied in consumer video surveillance systems to remove atmospheric artifacts without color distortion. In this section, we first introduce the traditional calculation of color similarity and put forward an improved method, then give the procedure for extracting the flame target and judging a fire. Finally, we describe the realization of the proposed algorithm, together with the measures for image fill light and color correction, and draw the correction model. The calculation of the color similarity We introduce a scale-invariant mathematical model called SIMILATION [40] to calculate color similarity. Given a set of values, the SIMILATION is defined as the ratio of the harmonic mean to the arithmetic mean of the set. The harmonic mean (3), the arithmetic mean (4), and the SIMILATION (5) are defined as follows [40]: $$ \mathrm{harmonic~mean} = \frac{n}{\frac{1}{V_{1}}+\frac{1}{V_{2}}+\frac{1}{V_{3}}+\cdots+\frac{1}{V_{n}}} $$ $$ \mathrm{arithmetic~mean} = \frac{V_{1}+V_{2}+V_{3}+\cdots+V_{n}}{n} $$ $$ {{} \begin{aligned} &SIMILATION = \frac{\mathrm{harmonic~mean}}{\mathrm{arithmetic~mean}} \\ & = \frac{\frac{n}{\frac{1}{V_{1}}+\frac{1}{V_{2}}+\frac{1}{V_{3}}+ \cdots+\frac{1}{V_{n}}}}{\frac{V_{1}+V_{2}+V_{3}+\cdots+V_{n}}{n}} \\ & = \frac{n^{2}}{\left(V_{1}\,+\,V_{2}\,+\,V_{3}\,+\,\cdots\,+\,V_{n}\right) \!\times\!
\left(\frac{1}{V_{1}}+\frac{1}{V_{2}}+\frac{1}{V_{3}}+\cdots+\frac{1}{V_{n}}\right)} \end{aligned}} $$ Conceptually, SIMILATION represents the similarity level of a set of values, and its range runs from a positive infinitesimal to one. When SIMILATION equals one, every value in the set is equal. When SIMILATION approaches zero (note: by Eqs. (3) and (4), it can never actually be 0), the values of the set are maximally diverse. So we can describe the similarity of a set of values by SIMILATION, because low similarity is equivalent to high diversity. SIMILATION is scale invariant: it reflects diversity through the proportional relationships among the values. As Table 1 shows, SIMILATION remains constant as long as the proportions between the data in a set do not change. The standard deviation also describes the similarity of a set of data, but even when the ratios between the data are unchanged, the standard deviation changes. More specifically, scaling the R, G, and B values simultaneously by the same factor corresponds to a brightness change in color space, so the scale-invariant property is well suited to achieving brightness invariance in color segmentation. Table 1 Scale Invariance of SIMILATION The color similarity between two colors (R1,G1,B1) and (R2,G2,B2) is measured as follows: compute (R0,G0,B0) as shown in Eq. (6), then substitute (V1,V2,V3) with (R0,G0,B0) in Eq. (5) to calculate the SIMILATION. $$ (R_{0},G_{0},B_{0})=\left(\frac{R_{1}}{R_{2}},\frac{G_{1}}{G_{2}},\frac{B_{1}}{B_{2}}\right). $$ Eq. (5) requires that no value be zero, so no component of the two colors may equal zero in Eq. (6). Therefore, this measure cannot handle colors with a zero component, such as (255,0,0); and the coordinate system of the RGB model contains many such values, for example, yellow (255,255,0) or black (0,0,0).
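As a quick illustrative sketch (our own, not from [40]), Eqs. (3)–(5) can be computed directly, and the scale invariance of Table 1 falls out immediately:

```python
def similation(values):
    """Ratio of harmonic mean to arithmetic mean, Eq. (5).

    Equals 1 when all values are equal; approaches 0 as the values
    become maximally diverse. Requires strictly positive inputs."""
    n = len(values)
    harmonic = n / sum(1.0 / v for v in values)   # Eq. (3)
    arithmetic = sum(values) / n                  # Eq. (4)
    return harmonic / arithmetic

# Scale invariance: multiplying every value by the same factor
# (a pure brightness change in RGB) leaves SIMILATION unchanged.
print(similation([1, 2, 3]))
print(similation([10, 20, 30]))   # same value as the line above
```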
In the RGB color space, a color appears red when its red component is relatively larger than the other two components, while it appears yellow when its blue component is relatively smaller than the other two and the gap between those two is small. Thus, we modify the color similarity method as follows: Determine a reference color according to a certain rule (described in the next section) whose components contain no value of 0. First, check the compared color for zero components; if there are none, calculate the color similarity with the SIMILATION. If the three components do contain 0, calculate the color similarity as follows: If only one component equals 0, such as (R,G,0), check whether (R−G) is positive; if it is, the color is rendered as red, otherwise as green. Other color combinations are handled by the same method. If two components equal 0, such as (R,0,0), the color is rendered as red; similarly, (0,G,0) and (0,0,B) are rendered as green and blue, respectively. Black (0,0,0) remains without further processing. Finally, the results are compared with the reference color; if they belong to the same color, the two colors are similar. In Table 2, the SIMILATION measures the similarity of the two sets of colors well. The first row shows that SIMILATION equals one when only the brightness of the two colors differs, so the two sets of values (R0,G0,B0) are equal. On the other hand, if two colors differ in hue (not only in brightness), we can still calculate their similarity coefficient by SIMILATION (e.g., the second and third rows of Table 2), where the similarity of the two colors is 90%.
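The modified comparison rules above can be sketched as follows (a hypothetical implementation of our rules, with the zero-component fallback reduced to a dominant-channel check):

```python
def similation(values):
    n = len(values)
    return (n / sum(1.0 / v for v in values)) / (sum(values) / n)

def dominant_channel(color):
    """Index of the strongest channel (0=R, 1=G, 2=B), or None for black."""
    if max(color) == 0:
        return None
    return max(range(3), key=lambda i: color[i])

def modified_similarity(c1, c2):
    """SIMILATION of the component ratios (Eq. (6)) when no channel is zero;
    otherwise fall back to comparing which channel dominates."""
    if all(v > 0 for v in c1) and all(v > 0 for v in c2):
        ratios = [a / b for a, b in zip(c1, c2)]
        return similation(ratios)
    # Zero component present: similar (1.0) iff the rendered hue matches.
    return 1.0 if dominant_channel(c1) == dominant_channel(c2) else 0.0

# A pure brightness change gives perfect similarity:
print(modified_similarity((100, 50, 25), (200, 100, 50)))   # 1.0
```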
In the fourth row of Table 2, although we cannot calculate the similarity directly by SIMILATION, we can still derive the similarity of the two colors by comparing their color components. The remaining rows of Table 2 give the similarity coefficients between one pixel value and several others. Table 2 Brightness and color similarities Extraction of flame target Extraction and segmentation of the flame object is the key technology of fire recognition; the accuracy of flame segmentation and extraction is a prerequisite for improving the accuracy and robustness of the whole detection system. For an ideal image, one could combine the hollow-out method with edge tracking to design such an algorithm. In practical engineering applications, however, the captured image contains a lot of noise. Existing edge detection algorithms, usually based on the Roberts Cross [41], Prewitt, or Sobel operators, detect edges from jumps in the gray value, and the contours they produce are typically irregular and discontinuous; refining and connecting these broken contours costs a lot of time, which is unacceptable in practice. This paper therefore proposes a flame contour extraction algorithm based on an area threshold. The idea is as follows: first, the difference method judges whether there is a target object; if so, the target region is obtained and the image is binarized with a 2D maximum-entropy threshold, yielding the connected regions of the image. Some of these regions belong to objects while the others are noise; we then treat each connected white area as a set, analyze each set concretely, eliminate the noise, and obtain the outline of the object. The algorithm proceeds as follows: let the reference image be f0(x,y) and the digital image sequence be f i (x,y),i=0,1,2,⋯,N.
(x,y) are the coordinates of a pixel in each image, and N is the number of frames in the consecutive image sequence. $$ \varDelta f_i\left(x,y\right) = f_i\left(x,y\right)-f_0\left(x,y\right), \qquad \begin{cases} \varDelta f_i\left(x,y\right) < Th1, & \mathrm{no}\ \mathrm{fire}, \\ \varDelta f_i\left(x,y\right) \ge Th1, & \mathrm{on}\ \mathrm{fire}. \end{cases} $$ Δf i (x,y) is the difference of the two images, where f i (x,y) is the current image and f0(x,y) is the reference image. In order to highlight the target (fire), Th1 is chosen as the 2D maximum-entropy threshold of the image; it separates the target from the surrounding background points as far as possible, which facilitates the subsequent extraction of the flame and the elimination of noise points. Scan the binary image of Δf i (x,y); all white pixels in this binary image are added to a linked list whose head node is PixelLink. Classify the pixels in the PixelLink list and produce one set for each connected region (creating a linked list for each set): begin from a certain point and add similar neighboring points to form a region. The similarity criterion can be gray scale, color, shape, or other characteristics, and the test of similarity is determined by a threshold. That is, starting from a point that meets the detection standard, the region grows in all directions; if a neighboring point meets the criterion, it is added to the region, and the process repeats from the newly merged points until no acceptable adjacent point remains, at which point the growing process ends. Calculate the area of each connected region (each represented by a list), then select an appropriate area value as the threshold for image filtering. Connected regions that exceed the area threshold remain intact, and the smaller ones are eliminated as noise.
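The differencing-and-filtering steps above might be sketched as follows (a simplified version using NumPy and a plain breadth-first search in place of the 2D maximum-entropy threshold and linked lists, with a fixed threshold standing in for Th1):

```python
import numpy as np
from collections import deque

def candidate_regions(frame, reference, diff_thresh, area_thresh):
    """Difference a frame against the reference image, binarize, and keep
    only 4-connected regions larger than area_thresh (smaller blobs = noise)."""
    binary = np.abs(frame.astype(int) - reference.astype(int)) >= diff_thresh
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                # Grow the region from this seed pixel (region growing).
                pixels, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= area_thresh:   # area-threshold filtering
                    regions.append(pixels)
    return regions
```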
Using the hollow-out method, we obtain a continuous, single-pixel-wide contour of each object with no crossing outlines. Suppose there are m target contours in the image f i (x,y), denoted Ai,1,Ai,2,⋯,Ai,m. After finding the flame-suspicious areas, we judge them against the fire's features, such as flame color, the size of the spread area, similarity, and smoke, to further test whether a suspicious area is a flame. The steps of proposed algorithm In this paper, to avoid the computational cost of color space conversion, we choose the RGB model. The proposed color image segmentation method based on the RGB model is shown in Table 3. Table 3 Color image segmentation algorithm The process consists of the following steps: Given a color image (in RGB space), determine the dominant colors and their number. The dominant color (i.e., the reference color) is determined by the segmentation need: if we just split foreground and background, we need two dominant colors; if we only need to segment a region of consistent color, such as leaves or traffic signs, one dominant color suffices. This paper focuses on the segmentation of foreground and background, so two dominant colors are enough. Read a color image (of size m×n×3) in RGB space. Calculate the probability of each color in the image. The foreground and background of every image are made up of many identical or similar colors. In RGB space, every color is composed of components R, G, and B. Treating the count of each occurring color as a function value with the RGB components as the variable, find the two colors with the largest probability of appearing and take them as the dominant colors. The calculation is as follows: scan the image row by row (m rows), save the first color value encountered in the format (R,G,B), and set its count to 1.
Continue scanning, comparing each encountered color value with the saved ones by testing the RGB components for equality. If they are equal, increment the count by one; otherwise, save the new color and set its count to 1. In this way we obtain the number of occurrences of each color in the image and determine the dominant colors by comparing these counts. A dominant color serves as the reference for the SIMILATION, so none of its components may be zero; if a component's value is zero, add one to it. Calculate the SIMILATION values and form the color information map. After the two dominant colors have been determined, we calculate the similarity between each color and the two dominants by the modified computing method. There are two cases: When no RGB component is zero, the SIMILATION calculation yields two values, standing for the similarity with the two dominant colors, respectively. Comparing the two similarity coefficients, the color is assigned to the collection of the dominant color with the bigger coefficient. If any RGB component is zero, we judge the similarity between the rendered color of each color and the two dominant colors, and assign the color to the corresponding collection when they are similar. Ultimately, a color-class map is formed. Divide the image pixels and output the results. Pixels are divided into one of the two collections based on pixel color and the color information map, so foreground and background are segmented. If the colors of the image are distinct, the extracted region will be clear and the boundaries will not be fuzzy. However, the division of pixels relies on the color similarity measure as its standard, which leads to inaccurate segmentation in some images: some sections belonging to the background may be divided into the foreground region, while others belonging to the foreground are divided into the background region.
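The dominant-color count above can be sketched with a hash map instead of a linear scan over saved colors (an illustrative shortcut of our own; the zero-component bump follows the rule just stated):

```python
from collections import Counter

def dominant_colors(pixels, k=2):
    """Return the k most frequent (R, G, B) values as reference colors,
    bumping any zero channel to 1 so the SIMILATION stays defined."""
    counts = Counter(pixels)
    return [tuple(c or 1 for c in color) for color, _ in counts.most_common(k)]

pixels = [(255, 0, 0)] * 5 + [(0, 200, 0)] * 3 + [(10, 20, 30)]
print(dominant_colors(pixels))   # [(255, 1, 1), (1, 200, 1)]
```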
Therefore, other methods are needed to separate foreground and background when the segmentation results are poor. The step of judgment of fire A fire follows certain characteristics and laws during its formation. Being familiar with these rules, and selecting some characteristics unique to fire flames, plays a vital role in identifying a fire as early as possible. In general, fire in video has the following characteristics. For two consecutively acquired images, there must be some regional similarity between the previous frame and the next frame, but the similar regions do not overlap completely. This feature is particularly obvious at the beginning of a fire. The gray level of the flame core is greater than that of the other parts of the image. After infrared attenuation, the interference signals in the video mainly take the form of fixed or fast-moving spots and large-area changes of infrared illumination. Therefore, when recognizing a flame, images are first divided into interference and non-interference patterns, and the characteristics of the flame are then identified in the non-interference patterns to judge whether a flame is present. Thus, video fire recognition includes two aspects: extraction of the flame object, and judgment according to flame color, size, and other characteristics. The main process is shown in Fig. 3. The flow chart of segmentation algorithm The judgment of the fire colors In a real-world environment, a fire may present two possible phenomena: a lot of smoke in the initial stage, or a direct flame. The color models of these two phenomena are completely different and should be handled separately. The flame model Generally, the color distribution differs between a flame and an ordinary illuminator. For a flame, the general trend from the flame core to the outer flame is a color shift from white to red, and the flame can be identified according to this characteristic.
The smoke model In reality, a fire does not necessarily produce a flame; it can be billowing smoke. Smoke is therefore also one of the main characteristics of fire, although unlike a flame it has few attributes that are easy to distinguish from other objects. This does not mean that smoke is useless for fire recognition; on the contrary, combining smoke features can greatly improve the accuracy of the system's alarms, which naturally improves the usability of the system. During a fire, the most obvious characteristic of smoke is that its area constantly expands. Therefore, according to the color and shape features of smoke, one can identify a suspected smoke area. Given the following definitions: $$ r=\frac{R}{R+G+B} $$ $$ g=\frac{G}{R+G+B} $$ $$ Y=0.30R+0.59G+0.11B $$ In the above equations, R, G, and B are the original pixel values. After extracting a large amount of smoke feature information, statistical analysis showed that a region satisfying the condition $$ \begin{cases} 0.304264<r<0.335354, \\ 0.318907<g<0.337374, \\ r<g \end{cases} $$ is a smoke region. Calculate the area of the region, then extract the adjacent frame and process and analyze it by the same method. Finally, compare the areas in the two processed images; if the area has expanded, raise the alarm. The judgment of fire area feature A flame appears as a bright region in the acquired image, but if fire were determined on the basis of this bright region alone, lights, sunlight, and other high-temperature objects would easily be mistaken for fire. In the early stage of a fire there is a very significant spreading feature: the area of the same suspicious region in adjacent frames expands, which means the size of the area changes.
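A per-pixel check of the statistical smoke condition above might look like this (our own sketch; the numeric thresholds are those reported in the text):

```python
def is_smoke_pixel(R, G, B):
    """Chromaticity test for a suspected smoke pixel, Eqs. (8)-(10)."""
    s = R + G + B
    if s == 0:
        return False
    r, g = R / s, G / s
    return 0.304264 < r < 0.335354 and 0.318907 < g < 0.337374 and r < g

# A desaturated grayish pixel passes; a saturated red one does not.
print(is_smoke_pixel(100, 105, 110))   # True
print(is_smoke_pixel(200, 50, 50))     # False
```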
To further verify that a suspicious region is a flame, extract the object contours of five consecutive frames fi+1(x,y),fi+2(x,y),fi+3(x,y),fi+4(x,y),fi+5(x,y) separately, and then calculate the area difference and the overlap degree between each region Ai+p and the suspicious region A i , denoted Δ i,p and ε i,p , respectively: $$ \Delta_{i,p}=\sum_{(x,y)\in A_{n}}^{}A_{i+p}(x,y)-A_{i}(x,y) $$ $$ \varepsilon_{i,p}=\frac{\sum_{(x,y)\in A_{n}}^{}A_{i+p}(x,y)\bigcap A_{i}(x,y)}{\sum_{(x,y)\in A_{n}}^{}A_{i+p}(x,y)} $$ In the above equations, p=1,⋯,5. Compute the means of Δ i,p and ε i,p , denoted \(\Delta =\sum _{p=1}^{5}\Delta _{i,p}/5\) and \(\varepsilon =\sum _{p=1}^{5}\varepsilon _{i,p}/5\); then, given the thresholds Th2 and Th3, if Δ≥Th2 and 1>ε≥Th3, the changing area is a suspicious fire area. The measure of fill light Without, or with too little, natural light, the light reflected from the thermal ink area of the drawing board cannot reach the camera, so it is difficult to capture an image of this area. If additional light sources are placed next to the drawing board, clearer images can be obtained on a rainy day or in the evening. Light source compensation is applicable in many situations, such as VIN recognition and mixed-color detection systems for glass bottles. The light source in this article works in connection with the thermal ink, i.e., it refers to the light intensity of a normal fluorescent light, as shown in Fig. 4. Schematic diagram of compensation light The color block is the thermal ink (herein referred to as the sensing zone), and the center part is the standard color area. The sensing area is surrounded by identical fluorescent lamps, powered by solar cells and windmills. The camera cannot capture the fluorescent lights themselves, and their intensity must not cause a color change of the thermal ink; therefore, the intensity of the fluorescent lights should be determined by several tests.
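Returning to the overlap test of Eqs. (11) and (12): on binary region masks, Δ and ε reduce to simple set operations (a sketch under the assumption that each extracted region is stored as a Boolean mask):

```python
import numpy as np

def area_change_and_overlap(region_i, region_ip):
    """Area difference (Eq. (11)) and overlap degree (Eq. (12)) between the
    suspicious region in frame i and the matched region in frame i+p."""
    area_ip = int(region_ip.sum())
    delta = area_ip - int(region_i.sum())
    overlap = int((region_i & region_ip).sum()) / area_ip if area_ip else 0.0
    return delta, overlap

a = np.zeros((5, 5), dtype=bool); a[0:2, 0:2] = True   # area 4
b = np.zeros((5, 5), dtype=bool); b[0:3, 0:3] = True   # area 9, grown region
delta, eps = area_change_and_overlap(a, b)
print(delta, eps)   # area grew by 5; overlap degree 4/9
```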
In the daytime, the solar panels convert light to electricity and the wind turbine also generates electricity, which is stored in the battery. The battery then powers the fluorescent lamps so they can stay lit continuously. This is an environmentally friendly arrangement, because both solar energy and wind power are renewable, with neither pollution nor depletion. The method of color correction Color values are changed by light, temperature, and other environmental factors, so the image should be color corrected after shooting to obtain accurate color values. In this paper, we discuss the correction of the standard color and of the thermal ink color. The correction of standard color The standard color is mainly influenced by light and is essentially unaffected by temperature. (The color will also change after a long time; this is a common problem that we do not consider here.) Assume the standard color card covers N pixels and the value of every pixel P i is f(i)=(R i ,G i ,B i ),i=1,2,⋯,N; the average color intensity of the whole standard color card is then \(I=\sum _{i=1}^{N}f(i)/N\). The values of some pixels in the standard color card may be too big or too small because of the influence of light. Such pixel points are few, but they affect the overall color intensity value, so we calculate a more accurate color intensity value as follows: For every color intensity value f(i), test the condition |f(i)−I|≥βI, where β is a coefficient. If the condition holds, the pixel is a "white spot" (a pixel whose color value is too big or too small); increment the white-spot count by 1 and decrement the total number of valid pixels by 1. After scanning is complete, let the color intensity values of the white spots be f(j)=(R j ,G j ,B j ),j=1,2,⋯,n, where n is the number of white spots. The final color intensity is \(\bar {I}=\frac {\sum _{i=1}^{N}f(i)-\sum _{j=1}^{n}f(j)}{N-n}\).
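The white-spot rejection above can be sketched as follows (our own vectorized reading of the rule, in which a pixel counts as a white spot when any channel deviates from the mean intensity by at least β times that mean):

```python
import numpy as np

def corrected_intensity(pixels, beta=0.5):
    """Average color intensity with 'white spots' excluded: a pixel is a
    white spot when its deviation from the mean exceeds beta * mean."""
    pixels = np.asarray(pixels, dtype=float)
    I = pixels.mean(axis=0)                       # per-channel mean intensity
    white = (np.abs(pixels - I) >= beta * I).any(axis=1)
    return pixels[~white].mean(axis=0)            # mean over the N - n kept pixels

card = [[100, 100, 100]] * 9 + [[250, 250, 250]]  # one over-bright 'white spot'
print(corrected_intensity(card))                  # the outlier is rejected
```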
We can thus infer a model of the average color intensity of the standard color card under a different light, \(f(L,\bar {I})\); comparing it with the average color intensity f(I) of the standard color card, the influence of the light is \(f(L,\bar {I})-f(I)\). The correction of thermal ink Suppose the thermal ink sensing area covers M pixels and the value of each pixel point P i is g(i)=(R i ,G i ,B i ),i=1,2,⋯,M. The average color intensity of the entire sensing area is then \(R=\sum _{i=1}^{M}g(i)/M\), which is the color intensity of the thermal ink region as affected by light. The essential cause of a color change of the thermal ink is a change of environmental temperature. However, the color intensity of thermal ink does not correspond to temperature linearly: when the temperature changes over a given interval, the color intensity increases neither linearly nor steadily, so it is necessary to identify the functional relation between temperature and color intensity. In general, T=f(R) can represent the relation between temperature and color intensity, but here it is an exception: Fig. 5 shows that the curve is not a straight line but consists of segments. A chart of the function T=f(R) This indicates that color intensity and temperature do not change at the same rate, so the function T=f(R) is adjusted as follows: $$ T=\begin{cases} \partial 1 f(R), \\ \partial 2 f(R), \\ \partial 3 f(R). \end{cases} $$ ∂1 is the relation coefficient between temperature and color intensity from temperature 0 to T i , ∂2 from T i to Tn−i, and ∂3 from Tn−i to T n . The three coefficients should be obtained from experimental data. Since the color intensity value is easily affected by both temperature and light, the relationship of the three can be denoted R=S(T,L), where T stands for temperature, R for the color value, and L for the effect of light.
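The piecewise relation T = ∂k f(R) might be sketched as a lookup over calibrated segments (the breakpoints and coefficients below are placeholders that would come from the experimental data, not values from this work):

```python
def temperature_from_intensity(R, segments):
    """Piecewise mapping from color intensity R to temperature T, Eq. (13).

    `segments` is a list of (upper_bound, coefficient) pairs; within each
    intensity interval a different coefficient relates R to T."""
    for upper, coeff in segments:
        if R < upper:
            return coeff * R
    return segments[-1][1] * R

# Hypothetical calibration: three intensity bands with different slopes.
calib = [(100, 0.5), (200, 1.0), (256, 2.0)]
print(temperature_from_intensity(50, calib))    # 25.0
print(temperature_from_intensity(150, calib))   # 150.0
```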
The calculation of temperature area To calculate the color value of the sensing area, it should first be borne in mind that both the standard color card and the thermal ink are easily affected by light and temperature. The effect of light on the standard color card and on the sensing area is the same, while the influence of temperature on the standard color card is negligible; the typical color intensity of the sensing area at a given temperature is therefore \((S(T,L)-(f(L,\bar {I})-f(I)))\). The result of color similarity segmentation In this experiment, we use the Microsoft Visual Studio 2010 platform and the software library OpenCV (Open Source Computer Vision Library) for image segmentation. Microsoft Visual Studio 2010 is a powerful development platform that improves the working efficiency and flexibility of the programmer and supports multiple kinds of application development. OpenCV is an open-source, cross-platform computer vision library that runs on Linux, Windows, and Mac OS. It is lightweight and efficient, composed of a series of C functions and a small number of C++ classes, and it also provides interfaces for Python, Ruby, and MATLAB to implement many generic algorithms of image processing and computer vision. We chose several color images, pictures a–d, for the experiment; the results are shown in Fig. 6 (a–d Fire segmentation results; from left to right: the original image, the foreground, and the background). The results of fire segmentation In this experiment, we simulate an indoor fire, capture the video with the camera, and then apply a series of processing steps. We chose several images of the same scene, pictures a–d, for the experiment; the results are shown in Fig. 6.
Video image segmentation results (from left to right: original image, foreground, and background); the background is almost the same across frames. The experimental results show that these four sets of images separate foreground and background well with the proposed algorithm. In Fig. 6a, b, the foreground is smoke; the only difference is that the area of the smoke is constantly expanding. Though in (a) and (b) a small portion of the background is divided into the foreground, it does not affect the segmentation results. In Fig. 6c, d, the foreground is smoke and flame, and it is easy to see that the areas of fire and smoke both change from (c) to (d): the flame area expands and the smoke area shrinks. In Fig. 6d, the background is similar to the flame, but the flame is still divided into the foreground well. This work presents a new color image segmentation algorithm based on color similarity for real-time color image segmentation in cyber-physical systems. We first determine the dominant colors. We then use a mathematical model called SIMILATION, which takes hue and brightness into account at the same time, to calculate color similarity in the RGB color space. After that, we combine the proposed methods of computing the image color components to form a color map; pixels are divided based on the color map, completing the segmentation. This paper also discusses the algorithm's application to fire detection and proposes a new method of identifying fire in video based on fire characteristics. First, the characteristics of the colors of fire regions are analyzed and the potential fire regions extracted. Then, features such as flame color, spreading area, similarity change, and fire smoke, which are very commonly observed in fires, are analyzed, achieving automatic feature extraction. The results were accurate and can be used in real-time analysis.
The experimental results show that the proposed method has better robustness to brightness variations and lower computational complexity in real-time systems. However, the algorithm has a shortcoming; for instance, some sections belonging to the background may be divided into the foreground region. Therefore, other methods should be used on certain types of video images to achieve accurate segmentation. In this paper, we have described the characteristics of thermal ink and fire and performed several experiments. Although a complete real-time system has not yet been produced, thermal ink characteristics and image processing technology can be used to identify temperature by the method in this paper, and fire characteristics and video processing technology can be used to identify fire. It can be applied in food temperature control and tag identification, fire detection and identification systems, etc. NR Pal, SK Pal, A review on image segmentation techniques. Pattern Recog. 26(9), 1277–1294 (1993). VA Shapiro, PK Veleva, VS Sgurev, in Proceedings of the 11th IAPR International Conference on Pattern Recognition. Vol. III. Conference C: Image, Speech and Signal Analysis. An adaptive method for image thresholding (IEEE, The Hague, 1992), pp. 696–699. QT Luong, in eds. by CH Chen, LF Pau, PS Wang, Color in Computer Vision (World Scientific, Singapore, 1993). A Trémeau, S Tominaga, K Plataniotis, Color in image and video processing: most recent trends and future research directions. EURASIP J. Image Video Process. 2008(1), 581371 (2008). K Lin, LJ Wu, LH Xu, A survey on color image segmentation techniques. J. Image Graph. 10:, 1–10 (2005). A Mishra, Y Aloimonos, Active segmentation. Int. J. HR. 6(3), 361–386 (2009). CH Lin, CC Chen, Image segmentation based on edge detection and region growing for thinprep-cervical smear. Int. J. Pattern Recognit. Artif. Intell. 24(7), 1061–1089 (2010).
The authors would like to thank all anonymous reviewers for their insightful comments and constructive suggestions. This research was supported by the Shanghai Universities Distinguished Professor Foundation (Eastern Scholar) in 2014 under project number 10-15-302-014 and by the Youth Science Foundation of Jiangxi Province: Dependable and Automatic Resource Management Research in Cloud Computing Networks (ID: 20122BAB211022).
Xiong, N., Shen, Y., Yang, K. et al. Color sensors and their applications based on real-time color image segmentation for cyber physical systems. J Image Video Proc. 2018, 23 (2018). https://doi.org/10.1186/s13640-018-0258-x
Keywords: Video image segmentation; Color similarity; Thermal ink; Real-time systems; Fire detection and identification
Radiative flow and heat transfer of a fluid along an expandable-stretching horizontal cylinder
Tarek G. Emam
Journal of the Egyptian Mathematical Society, volume 27, Article number: 2 (2019)
The effect of thermal radiation and suction/injection on the heat transfer characteristics of an unsteady expandable-stretched horizontal cylinder has been investigated. Similarity equations are obtained through the application of similarity transformation techniques: the governing boundary layer equations are reduced to a system of ordinary differential equations. Mathematica has been used to solve this system after obtaining the missing initial conditions. The fluid velocity and temperature within the boundary layer are plotted and discussed in detail for various values of the different parameters, such as the thermal radiation parameter, the suction/injection parameter, and the unsteadiness parameter. A comparison of the numerical results with previously published results in some special cases shows good agreement. The obtained results show that the fluid velocity and temperature are affected by the variation of the parameters included in the study, such as the radiation parameter, the unsteadiness parameter, and the suction/injection parameter. Several applications in engineering and industrial processes arise from the study of the flow of either Newtonian or non-Newtonian fluids, and such problems have attracted many authors over the last few decades. The plastic and metallurgy industries, the drawing of wires, and glass fiber production are good examples of applications of the problem of flow over a stretching/shrinking cylinder. The problem of flow inside a tube with a time-dependent diameter was first presented by Uchida and Aoki [1] and Skalak and Wang [2]. Wang [3] studied the steady flow of an incompressible viscous fluid outside a hollow stretching cylinder.
Elbashbeshy et al. [4] investigated the effect of a magnetic field on flow and heat transfer over a stretching horizontal cylinder in the presence of a heat source/sink with suction/injection. Hayat et al. [5] examined the effects of variable thermal conductivity in mixed convection flow of a viscoelastic nanofluid due to a stretching cylinder with a heat source/sink. Ishak et al. [6] studied the MHD flow and heat transfer outside a stretching cylinder, obtaining numerical solutions with the Keller-box method. The thermal radiation effect is considerable when the difference between the surface temperature and the ambient temperature is large. Mabood et al. [7] presented a theoretical investigation of flow and heat transfer of a Casson fluid from a horizontal circular cylinder in a non-Darcy porous medium under the action of slip and thermal radiation. Zaimi et al. [8] studied the unsteady flow due to a contracting cylinder in a nanofluid using Buongiorno's model. Elbashbeshy et al. [9] studied the effects of thermal radiation, heat generation, and suction/injection on the mechanical properties of an unsteady continuous moving cylinder in a nanofluid. Fang et al. [10] recently studied the problem of unsteady viscous flow over an expanding stretching cylinder, which admits an exact similarity solution of the Navier-Stokes equations; they found that the reversed flow region is strongly affected by the Reynolds number and the unsteadiness parameter. The numerical solution of the unsteady viscous flow outside an expanding or contracting cylinder was reported by Fang et al. [11]. The unsteady nature of fluid flow is very important from a practical point of view. Some unsteady effects arise from non-uniformities in the surrounding fluid; others arise from the self-induced motion of the body.
In fact, some devices are designed to execute time-dependent motion in order to perform desired functions [12]. The understanding of unsteady flow, and the application of such knowledge to new design techniques, enables scientists and engineers to make important improvements in the reliability and cost of several fluid dynamics devices. The problem introduced in this work involves this concept of unsteadiness: we investigate the case of unsteady viscous flow over a stretching horizontal cylinder with variable radius, taking thermal radiation into account. Mathematica is used to solve the problem numerically. The obtained results show how the fluid velocity and temperature are affected by the variation of the parameters included in the study, such as the radiation parameter, the unsteadiness parameter, and the suction/injection parameter. Mathematical formulation of the problem Consider an unsteady axisymmetric boundary layer flow of an incompressible viscous fluid along a horizontal cylinder which is continuously stretching. The cylinder contracts or expands according to the relation \( a(t)={a}_0\sqrt{1-\beta t} \), where a(t) is the radius of the cylinder at time t, a0 is the initial cylinder radius, and β is a constant indicating contraction (β > 0) or expansion (β < 0). The stretching time-dependent velocity of the surface of the cylinder is assumed to be \( {U}_w\left(x,t\right)=\frac{4\nu {U}_0 x}{a^2(t)} \), and the fluid is assumed to move along the axial direction x, while the radial coordinate r is perpendicular to the axis of the cylinder.
The temperature of the cylinder surface is assumed to be time dependent in the form \( {T}_w\left(x,t\right)={T}_{\infty }+\frac{a_0 {T}_0 x}{a(t)} \). Figure 1 shows the considered model. The physical model The governing equations [13, 14] are:
$$ \frac{\partial }{\partial x}\left(r u\right)+\frac{\partial }{\partial r}\left(r v\right)=0 $$
$$ \frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial r}=\frac{\nu }{r}\frac{\partial }{\partial r}\left(r\frac{\partial u}{\partial r}\right) $$
$$ \frac{\partial T}{\partial t}+u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial r}=\frac{\alpha }{r}\frac{\partial }{\partial r}\left(r\frac{\partial T}{\partial r}\right)-\frac{\alpha }{\kappa }\frac{1}{r}\frac{\partial }{\partial r}\left(r {q}_r\right) $$
the boundary conditions are:
$$ u={U}_w\left(x,t\right),\quad v=\frac{a_0 V}{a(t)},\quad T={T}_w\left(x,t\right),\quad \mathrm{at}\ r=a(t) $$
$$ u\to 0,\quad T\to {T}_{\infty },\quad \mathrm{as}\ r\to \infty $$
where u and v are the components of the fluid velocity along the x axis and r axis, respectively, ν is the fluid kinematic viscosity, α is the fluid thermal diffusivity, κ is the thermal conductivity, V is the constant of suction (V < 0) or injection (V > 0), and \( {q}_r=-\frac{4\sigma }{3{\alpha}^{\ast }}\frac{\partial {T}^4}{\partial r} \) is the radiation heat flux, where σ and α∗ are the Stefan-Boltzmann constant and the mean absorption coefficient, respectively.
The temperature differences within the flow are assumed to be sufficiently small that T4 can be expressed as a linear function of T; the Taylor expansion of T4 about T∞, neglecting higher order terms, gives
$$ {T}^4\cong 4{T}_{\infty}^3 T-3{T}_{\infty}^4 $$
Considering the similarity transformations
$$ \eta =\frac{r^2}{a^2(t)}-1,\quad u=\frac{U_w}{U_0}{f}^{\prime}\left(\eta \right),\quad v=-\frac{2\nu }{r}f\left(\eta \right),\quad \theta =\frac{T-{T}_{\infty }}{T_w-{T}_{\infty }} $$
along with Eq. (6), the system of partial differential Eqs. (1)–(3) with the boundary conditions (4)–(5) is transformed into the following system of ordinary differential equations
$$ \left(1+\eta \right){f}^{\prime \prime \prime }+{f}^{\prime \prime }+f{f}^{\prime \prime }-{f^{\prime}}^2-A\left[\left(1+\eta \right){f}^{\prime \prime }+{f}^{\prime}\right]=0 $$
$$ \frac{1}{\mathit{\Pr}}\left[\left(1+{N}_R\right)\left(\left(1+\eta \right){\theta}^{\prime \prime }+{\theta}^{\prime}\right)\right]+f{\theta}^{\prime }-{f}^{\prime}\theta -A\left[\left(1+\eta \right){\theta}^{\prime }+\frac{\theta }{2}\right]=0 $$
subject to the boundary conditions:
$$ f(0)=-{f}_0,\quad {f}^{\prime }(0)={U}_0,\quad \theta (0)=1 $$
$$ {f}^{\prime}\left(\infty \right)\to 0,\quad \theta \left(\infty \right)\to 0 $$
where primes denote differentiation with respect to η, and \( A=\frac{\beta {a}_0^2}{4\nu } \) is the unsteadiness parameter; positive values of A correspond to contraction (β > 0) and negative values to expansion (β < 0). Here \( \mathit{\Pr}=\frac{\nu }{\alpha } \) is the Prandtl number, \( {f}_0=\frac{a_0 V}{2\nu } \) is the suction (f0 < 0) or injection (f0 > 0) parameter, and \( {N}_R=\frac{16\sigma {T}_{\infty}^3}{3\kappa {\alpha}^{\ast }} \) is the thermal radiation parameter.
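As a quick consistency check (not part of the original text), substituting the transformations (7) into the continuity equation (1), with \( \frac{\partial \eta}{\partial r}=\frac{2r}{a^{2}(t)} \) and \( u=\frac{4\nu x}{a^{2}(t)}f'(\eta) \), gives

```latex
\frac{\partial}{\partial x}(r\,u)=\frac{4\nu r}{a^{2}(t)}\,f'(\eta),\qquad
\frac{\partial}{\partial r}(r\,v)=\frac{\partial}{\partial r}\bigl(-2\nu\,f(\eta)\bigr)
=-2\nu\,f'(\eta)\,\frac{2r}{a^{2}(t)}=-\frac{4\nu r}{a^{2}(t)}\,f'(\eta),
```

so the two terms cancel and (1) is satisfied identically; the momentum and energy equations reduce to (8) and (9) in the same way.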
Two important physical quantities of interest are the skin friction, Cf, and the local Nusselt number, Nux, which are defined as:
$$ {C}_f=\frac{2{\tau}_w}{\rho {U}_w^2},\quad N{u}_x=\frac{x\left({q}_w+{q}_r\right)}{\kappa \left({T}_w-{T}_{\infty}\right)} $$
where \( {\tau}_w=\mu {\left(\frac{\partial u}{\partial r}\right)}_{r=a(t)} \) is the cylinder surface shear stress and \( {q}_w=-\kappa {\left(\frac{\partial T}{\partial r}\right)}_{r=a(t)} \) is the cylinder surface heat flux. Using the dimensionless similarity transformations (7), we get:
$$ \frac{U_0}{2}{C}_f\sqrt{U_0 R{e}_x}={f}^{\prime \prime }(0),\quad N{u}_x\sqrt{U_0/R{e}_x}=-\left(1+{N}_R\right){\theta}^{\prime }(0) $$
where \( R{e}_x=\frac{x{U}_w}{\nu } \) is the Reynolds number. Method of solutions Equations (8)–(9) subject to the boundary conditions (10)–(11) are transformed into the following system of first-order differential equations:
$$ {y}_1^{\prime }={y}_2 $$
$$ {y}_2^{\prime }={y}_3 $$
$$ {y}_3^{\prime }=\frac{1}{1+\eta }\left[A\left(\left(1+\eta \right){y}_3+{y}_2\right)+{y}_2^2-{y}_3-{y}_1{y}_3\right] $$
$$ {y}_4^{\prime }={y}_5 $$
$$ {y}_5^{\prime }=\frac{\mathit{\Pr}}{\left(1+{N}_R\right)\left(1+\eta \right)}\left[{y}_2{y}_4+A\left(\frac{y_4}{2}+\left(1+\eta \right){y}_5\right)-{y}_1{y}_5\right]-\frac{y_5}{1+\eta } $$
where y1 = f, y2 = f′, y3 = f′′, y4 = θ, y5 = θ′, and the initial conditions are:
$$ {y}_1(0)=-{f}_0,\quad {y}_2(0)={U}_0,\quad {y}_4(0)=1,\quad {y}_3(0)=m,\quad {y}_5(0)=n $$
Numerical values are given to U0 and f0; m and n are a priori unknown and are determined as part of the solution. Mathematica is used to define the function F[m, n] ≔ NDSolve[System (14)–(19)]. The values of m and n are found by solving the equations y2(ηmax) = 0, y4(ηmax) = 0. A suitable value of η is taken and then increased to reach ηmax such that the difference between successive values of m, and of n, is less than 10−7. The problem is then an initial value problem, which in turn is solved using the Mathematica function NDSolve.
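The shooting procedure described above can also be sketched outside Mathematica. The following Python version is only an illustrative sketch, not the authors' code: it uses SciPy's `solve_ivp` and `fsolve`, assumed parameter values (not taken from the paper's tables), and a finite ηmax standing in for infinity. It integrates the first-order system and adjusts the unknown initial slopes m = f′′(0) and n = θ′(0) until the far-field conditions f′(ηmax) = 0 and θ(ηmax) = 0 are met.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Illustrative parameter values (assumptions, not a specific case from the paper).
A, Pr, NR, U0, f0 = -1.0, 0.7, 1.0, 1.0, -1.0
ETA_MAX = 10.0  # stand-in for "infinity"; increase until m and n stop changing

def rhs(eta, y):
    """Right-hand side of the first-order system, y = (f, f', f'', theta, theta')."""
    y1, y2, y3, y4, y5 = y
    e = 1.0 + eta
    dy3 = (A * (e * y3 + y2) + y2 ** 2 - y3 - y1 * y3) / e
    dy5 = (Pr / ((1.0 + NR) * e)) * (y2 * y4 + A * (y4 / 2.0 + e * y5)
                                     - y1 * y5) - y5 / e
    return [y2, y3, dy3, y5, dy5]

def shoot(guess):
    """Residuals of the far-field conditions f'(eta_max) = 0, theta(eta_max) = 0."""
    m, n = guess
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [-f0, U0, m, 1.0, n],
                    rtol=1e-8, atol=1e-10)
    return [sol.y[1, -1], sol.y[3, -1]]

# Adjust the unknown initial slopes; the starting guess is arbitrary.
m, n = fsolve(shoot, x0=[-1.0, -1.0])
```

With converged m and n, eq. (13) then gives the modified skin friction as f′′(0) = m and the modified Nusselt number as −(1 + N_R)θ′(0) = −(1 + N_R)n.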
The accuracy of the numerical method is checked by comparing results in some special cases with previously published results in the literature, as shown in Table 1. Table 1 Comparison of −f′′(0) for various values of A and f0 given that Pr = 0.7, U0 = − 1, NR = 0 The table compares the obtained values of −f′′(0) with previously published results in the literature for various values of A and f0, given Pr = 0.7, U0 = − 1, NR = 0. The results show good agreement, which validates the method used. Results and discussions This section is devoted to the analysis of the effects of the parameters included in the problem on the fluid velocity f′(η), the fluid temperature θ(η), the modified skin friction −f′′(0), and the modified Nusselt number −θ′(0). Table 2 shows the effects of A, Pr, f0, and NR on both −f′′(0) and −θ′(0) for U0 = 1. It can be observed that the values of −f′′(0) are positive for all values of the considered parameters. The physical interpretation of this behavior is that the cylinder surface exerts a drag force on the fluid, which is understood in view of the role of the stretching cylinder in inducing the flow when U0 = 1. Table 2 Values of −f′′(0) and −θ′(0) for various values of A, Pr, f0, and NR for U0 = 1 Prandtl number As the Prandtl number increases, the thermal conductivity of the fluid decreases, so the surface heat flux (Nusselt number) increases, which is consistent with the results shown in Table 2. The increase in the Nusselt number results in a decrease in the temperature of the fluid, as shown in Fig. 2. At a distance far enough from the surface of the cylinder, the fluid temperature becomes the same as that of the ambient fluid. Variation of fluid temperature with Pr, for A = − 1, NR = 1, U0 = 1, f0 = − 1 Thermal radiation The effect of thermal radiation on the fluid temperature is easily recognized from Table 2 and Fig. 3.
Increasing NR leads to a decrease in the surface heat flux (Nusselt number), and hence the fluid temperature increases. The physical interpretation of this behavior is that as the value of NR increases, the Rosseland radiation absorptivity α∗ decreases; hence, the radiative heat flux \( {q}_r=-\frac{4\sigma }{3{\alpha}^{\ast }}\frac{\partial {T}^4}{\partial r} \) increases, which increases the rate of radiative heat transferred to the fluid, and the fluid temperature rises. Variation of fluid temperature with NR, for A = − 1, Pr = 0.7, U0 = 1, f0 = − 1 The equations governing the fluid velocity do not include NR, so NR has no effect on the skin friction or the fluid velocity, as recognized from Table 2. Expandable unsteadiness parameter For negative values of A, as A decreases, the cylinder expands. Such expansion brings the cylinder surface closer to the fluid, so the frictional forces between the fluid particles and the surface of the cylinder rise, which increases the skin friction, as observed in Table 2, and decreases the fluid velocity, as shown in Fig. 4. It can also be noted from Table 2 that decreasing A increases the Nusselt number, so the rate of heat transfer rises, which in turn decreases the fluid temperature, as depicted in Fig. 5. Variation of fluid velocity with A, for Pr = 0.7, NR = 1, U0 = 1, f0 = − 1 Variation of fluid temperature with A, for Pr = 0.7, NR = 1, U0 = 1, f0 = − 1 Suction/injection velocity The value of f0, the suction/injection parameter, plays an important role in controlling the friction between the fluid and the surface of the cylinder, which in turn affects the heat transfer rate at the cylinder surface. Suction brings the streamlines closer to the cylinder surface, so the skin friction increases with increasing values of the suction parameter, as shown in Table 2.
Consequently, the friction between the fluid layers increases, forcing the fluid to slow down, and the fluid velocity gradient decreases, as shown in Fig. 6. From Table 2, we can also see that the values of −f′′(0) are larger for suction than for injection. These observations imply that introducing injection may help to reduce the friction at the surface of the cylinder. However, the velocity gradient vanishes at a distance large enough from the surface of the cylinder. Variation of fluid velocity with f0, for Pr = 0.7, NR = 1, U0 = 1, A = − 1 The value of the Nusselt number in the injection case is smaller than that for suction, as recognized from Table 2, and consequently the fluid temperature is higher, as depicted in Fig. 7. This behavior can be justified physically as follows: the lateral mass flux in the case of injection enhances the thermal conductivity of the fluid, and hence the amount of heat transferred from the cylinder surface to the fluid increases, which results in a decrease in the Nusselt number. The inverse behavior takes place in the case of suction. Variation of fluid temperature with f0 for Pr = 0.7, NR = 1, U0 = 1, A = − 1 The problem of radiative fluid flow and heat transfer of a fluid along an expandable-stretching horizontal cylinder has been studied. The main results are the following: As the unsteadiness parameter A decreases (the cylinder expands), the skin friction increases while both the fluid velocity and the temperature decrease. Increasing the Prandtl number increases the Nusselt number and decreases the fluid temperature. The fluid velocity decreases as suction increases, while increasing injection increases the fluid velocity. Increasing injection raises the fluid temperature, while the inverse behavior takes place in the case of suction. The surface heat flux (Nusselt number) decreases, and consequently the fluid temperature increases, as the thermal radiation parameter increases.
Introducing injection may help to reduce the friction at the surface of the cylinder.
Nomenclature
t : Time [s]
a(t) : Radius of the cylinder [m]
U_w : Stretching time-dependent velocity [m s−1]
x : Axial coordinate [m]
r : Radial coordinate, perpendicular to the cylinder axis [m]
u : Velocity component in the x-direction [m s−1]
v : Velocity component in the r-direction [m s−1]
T_w : Temperature of the cylinder surface [K]
V : Constant of suction or injection [−]
A : Unsteadiness parameter [−]
f_0 : Suction (injection) parameter [−]
C_f : Local skin friction coefficient [−]
f : Dimensionless stream function [−]
N_R : Radiation parameter [−]
Nu_x : Local Nusselt number coefficient [−]
Pr : Prandtl number [−]
Re_x : Reynolds number [−]
q_r : Cylinder surface heat flux [kg s−3]
Uchida, S., Aoki, H.: Unsteady flows in a semi-infinite contracting or expanding pipe. J. Fluid Mech. 82(2), 371–387 (1977) Skalak, F.M., Wang, C.Y.: On the unsteady squeezing of viscous fluid from a tube. J. Aust. Math. Soc. B. 21, 65–74 (1979) Wang, C.Y.: Fluid flow due to a stretching cylinder. Phys. Fluids. 31, 466–468 (1988) Elbashbeshy, E.M.A., Emam, T.G., Elazab, M.S., Abdelgaber, K.M.: Effect of magnetic field on flow and heat transfer over a stretching horizontal cylinder in the presence of a heat source/sink with suction/injection. J. Appl. Mech. Eng. 1(1), 1–5 (2012) Hayat, T., Waqas, M., Shehzad, S.A., Alsaedi, A.: Mixed convection flow of viscoelastic nanofluid by a cylinder with variable thermal conductivity and heat source/sink. Int. J. Numer. Methods Heat Fluid Flow. 26(1), 214–234 (2016) Ishak, A., Nazar, R., Pop, I.: Magnetohydrodynamic (MHD) flow and heat transfer due to a stretching cylinder. Energ. Convers. Manage. 49(11), 3265–3269 (2008) Mabood, F., Shateyi, S., Khan, W.A.: Effect of thermal radiation on Casson flow heat and mass transfer around a circular cylinder in a porous medium. Eur. Phys. J. Plus.
130, 188 (2015) Zaimi, K., Ishak, A., Pop, I.: Unsteady flow due to a contracting cylinder in a nanofluid using Buongiorno's model. Int. J. Heat Mass Transf. 68, 509–513 (2014) El-Bashbeshy, E.M.A., Emam, T.G., Abdelwahed, M.S.: The effect of thermal radiation, heat generation, and suction/injection on the mechanical properties of unsteady continuous moving cylinder in a nanofluid. Therm. Sci. 19(5), 1591–1601 (2015) Fang, T., Zhang, J., Zhong, Y., Tao, H.: Unsteady viscous flow over an expanding stretching cylinder. Chin. Phys. Lett. 28, 124707 (2011) Fang, T., Zhang, J., Zhong, Y.: Note on unsteady viscous flow on the outside of an expanding or contracting cylinder. Commun. Nonlinear Sc. Num. Simul. 17, 3124–3128 (2012) McCroskey, W.J.: Some current research in unsteady fluid dynamics - the 1976 Freeman scholarship lecture. ASME J. Fluid Eng. 99, 8–39 (1977) Marinca, V., Ene, R.D.: Dual approximate solutions of the unsteady viscous flow over a shrinking cylinder with optimal homotopy asymptotic method. Adv. Math. Phys. 2014, 417643, 11 (2014) Elbashbeshy, E.M.A., Emam, T.G., Abdelgaber, K.M.: Semi-analytic and numerical solutions of unsteady flow and heat transfer of a fluid over an expandable-stretching horizontal cylinder in the presence of suction/injection, preprint (2014)
Greek symbols
α : Thermal diffusivity [m2 s−1]
α* : Mean absorption coefficient [−]
β : Contraction (expansion) constant [s−1]
η : Dimensionless similarity variable [−]
ν : Kinematic viscosity [m2 s−1]
μ : Dynamic viscosity [kg m−1 s−1]
κ : Thermal conductivity of the fluid [kg m s−3 K−1]
The author wants to thank the reviewers for their valuable comments, which enabled him to improve the manuscript. This work was carried out at the Faculty of Science and Arts - Khulais, University of Jeddah, where the author works as an associate professor. All data generated or analysed during this study are included in this published article.
Tarek G. Emam: Department of Mathematics, Faculty of Science and Arts - Khulais, University of Jeddah, Jeddah, Kingdom of Saudi Arabia; Department of Mathematics, Faculty of Science, Ain Shams University, Cairo, Egypt. Correspondence to Tarek G. Emam. Emam, T.G. Radiative flow and heat transfer of a fluid along an expandable-stretching horizontal cylinder. J Egypt Math Soc 27, 2 (2019). https://doi.org/10.1186/s42787-019-0005-1
Keywords: Stretching cylinder; Suction and injection
PACS: 44.40.+a, 44.90.+c, 44.05.+e, 47.15.-x
Characterization of scotopic and mesopic rod signaling pathways in dogs using the On–Off electroretinogram Nate Pasmanter & Simon M. Petersen-Jones The On–Off, or long flash, full field electroretinogram (ERG) separates retinal responses to flash onset and offset. Depending on the degree of dark-adaptation and the stimulus strength, the On and Off ERG can be shaped by rod and cone photoreceptors and postreceptoral cells, including ON and OFF bipolar cells. Interspecies differences have been shown, with a predominantly positive Off-response in humans and other primates and a negative Off-response in rodents and dogs. However, the rod signaling pathways that contribute to these differential responses have not been characterized. In this study, we designed a long flash protocol in the dog that varied in background luminance and stimulus strength, allowing some rod components to remain present, to better characterize how rod pathways vary from scotopic to mesopic conditions. With low background light, the rod a-wave remains while the b-wave is significantly reduced, resulting in a predominantly negative waveform in mesopic conditions. Through modeling and subtraction of the rod-driven response, we show that rod bipolar cells saturate with dimmer backgrounds than rod photoreceptors, so that rod hyperpolarization contributes a large underlying negativity with mesopic backgrounds. Reduction in rod bipolar cell responses in mesopic conditions, prior to suppression of rod photoreceptor responses, may reflect the changes in the signaling pathway of rod-driven responses needed to extend the range of lighting conditions over which the retina functions. The mammalian retina is uniquely equipped to process visual signals across a substantial range of luminances.
In the dark, photoreceptors in the outer retina maintain a relatively depolarized state, with passive and active transport of cations in the outer segments causing an electrical current to flow along the length of the photoreceptor. In rods, this is known as the dark current [1,2,3]. In the dark-adapted retina, both rods and cones can respond to light stimulus; with weak light stimuli, the response is rod-driven – with stronger flashes, there is a mixed rod-cone response. Progressive increases in background light desensitize and suppress the rod response, such that the light-adapted retinal response is cone-driven [4,5,6,7]. The visual signal is shaped by complex retinal processing that divides into two parallel pathways – ON and OFF. The separation of these pathways begins with ON and OFF bipolar cells, second order neurons in the retina that synapse with rod and cone photoreceptors [8,9,10]. Bipolar cells are classified based on their response to light stimulus of the photoreceptors – ON bipolar cells, including rod bipolar cells (RBCs), depolarize, whereas OFF bipolar cells hyperpolarize, in response to a light stimulus driven decrease in glutamate release from photoreceptor synaptic terminals [11, 12]. The bipolar cell response is further shaped by photoreceptor pathways; cones synapse with both ON and OFF cone bipolar cells, whereas rods primarily interact with RBCs when responding to weak stimuli, but have additionally been shown to signal via gap junctions with cone photoreceptors as well as by direct connections with OFF cone bipolar cells [13,14,15,16,17]. The alternative rod pathways are more prominent in mesopic conditions as well as in response to higher frequency flickering light stimuli [18,19,20]. Horizontal cells are also involved in processing the visual signal but because of their orientation in the retina do not make a significant contribution to the electroretinogram as recorded on the corneal surface. 
The separation of flash On- and Off-responses with the full-field ERG provides a useful tool for the characterization of postreceptoral responses. The flash On- and Off-responses are distinct from the ON and OFF pathways. The On-response is the retinal response to flash onset, beginning with photoreceptor hyperpolarization (generating the major portion of the a-wave) and leading to the depolarization of ON bipolar cells (the driver of the positive b-wave) as well as hyperpolarization of OFF bipolar cells (which contributes to both the shape and amplitude of the a- and b-waves, particularly the early portion of the light-adapted a-wave in primates) [3, 15, 21,22,23,24]. In contrast, the Off-response is the retinal response to stimulus offset and is generated by several components – in humans, an initial rapid positive deflection (the d-wave) is generated primarily by OFF bipolar cells, but there are additional contributions from photoreceptors (which return to a relatively depolarized state, resulting in a slow cornea-positive response) and ON bipolar cells (which hyperpolarize, resulting in a fast cornea-negative response) [25,26,27,28,29]. With short-duration flashes these responses are merged in the ERG, and the recorded waveform reflects the combined contribution of these components. Additionally, short-duration flash cone responses exhibit a 'photopic hill' effect whereby, with increasing stimulus strength, the cone-driven b-wave reaches a maximal amplitude and then decreases, with widening of the b-wave and lengthening of the peak time. This occurs as the Off pathway response slows and separates from that of the On pathway, so that the two responses are temporally separated rather than superimposed [13,14,15]. The component waveforms of the ERG are shaped by 'processes' with contributions from different retinal cells (named PI/PII/PIII by Granit based on the order of disappearance under anesthesia) [30].
A major focus of this paper is the change in response of PIII, which is driven by photoreceptors and is the primary contributor to the cornea-negative a-wave, and of PII, driven mainly by ON bipolar cells (but additionally shaped by OFF bipolar cells), which heavily influences the cornea-positive b-wave. Note that PIII is present for the duration of a sustained flash and returns to baseline at flash offset, whereas PII differs at flash onset and offset based on the relative contributions of the ON and OFF pathways [11, 12, 31]. These processes have been shown to further differ in humans and rats based on background luminance, with a greater decline in the amplitude of PII relative to the rod-driven PIII with increasing background luminance [23, 32]. There are interspecies differences in the ERG Off-response (at flash offset). Two broadly different types of flash Off-response have been identified in mammals, as first described by Granit and Therman in 1935: the E-type retina in species such as the rat and mouse, and the I-type retina of humans and other primates [25, 27, 33]. The I-type retina has a primarily positive d-wave at termination of the stimulus (Off-response), coupled with a relatively large photopic a-wave amplitude at stimulus onset. In contrast, the E-type retina has a primarily negative response at the termination of the stimulus and a relatively small a-wave as part of the On-response. Isolation of the receptor response (PIII) by administration of aspartate shows that the PIII waveform is maintained during stimulation with sustained flashes, with a gradual return to baseline at offset; thus the currents generated by photoreceptors are unlikely to explain this difference in the Off-response [34,35,36,37]. Few studies have addressed canine responses to the On–Off ERG, although the technique has been used in the study of some canine inherited retinal degenerations.
The dog has emerging importance as a model for the development of translational therapies for human conditions. Inherited retinal disease models, including conditions such as congenital stationary night blindness (CSNB), are more common in dogs than in cats; there is therefore a need to fully understand the components of the dog ERG [38,39,40,41,42,43,44]. The dog exhibits a predominantly negative Off-response characteristic of the E-type retina [45]. The purpose of this study was to determine baseline features of the On- and Off-response in phenotypically normal dogs as well as to assess postreceptoral pathways and their changes with increasing background luminance in the canine retina.
Characterization of the canine On–Off ERG
Our protocol was designed to examine rod-only, cone-only, and mixed rod-cone contributions to the On–Off ERG using increasing background luminance. Representative tracings of a series of 5 stimulus strengths (250 ms stimuli of 2.5, 25, 180, 500 and 1,250 cd/m2) superimposed on 6 background white-light luminances (no light, 0.01, 0.1, 1, 10 and 42 cd/m2) are shown in Fig. 1. In the presence of no or low background light the ERG response was predominantly rod-driven; with increasing background luminance the rod contribution was sequentially decreased. We consider the response on a background luminance of 42 cd/m2 to be a cone-only response (typically 30 cd/m2 is considered to be a rod-suppressing background). Although preceded by a small positive deflection, the Off-response in the dog was predominantly negative in all stimulus and background conditions. Representative On–Off series ERG tracings with a range of background luminances. In each instance responses to 2.5, 25, 180, 500 and 1,250 cd/m2 stimuli presented for 250 ms are shown (red line above tracing indicates duration of stimulus). A responses with 0 (no background) and 0.01 cd/m2 background. B responses with 0.1 and 1 cd/m2 background.
The small b-wave superimposed on a longer negative deflection is denoted by arrows in the 1 cd/m2 background. C responses to 10 and 42 cd/m2 background. Note the amplitude scale difference between the three panels. The shape and amplitudes of the waveforms differed considerably with both stimulus strength and background luminance. Under no or low background light the b-wave was prominent, as was the a-wave in response to stronger stimuli. In response to weaker stimuli the downslope following the b-wave peak was prolonged, but this became faster with increasing stimulus strength. With dimmer background light levels there was minimal change in the waveform at stimulus cessation (the Off-response). With increasing background light levels the b-wave amplitude was reduced; for example, in the presence of a 1 cd/m2 background light the waveform had an initial negative component with a small positive b-wave component (indicated by arrows in Fig. 1B) superimposed on the downslope. With stronger stimuli and increasing background luminance an Off-response became more prominent. The Off-response had a small positive component (most apparent for the three strongest flash stimuli in the 10 and 42 cd/m2 background recordings, likely reflecting the predominance of cone-driven contributions) followed by a larger negative component. With increasing stimulus strength there was less of a negative post-b-wave component. A-wave amplitudes increased with increasing stimulus strength, showing semi-saturation kinetics, and declined with increasing background luminance (Fig. 2A). In contrast, the b-wave amplitudes were relatively constant with increasing stimulus strength (Fig. 2B). The b-wave amplitudes showed a substantially greater decline with increasing background luminance compared to the a-wave; this led to large decreases in the b:a ratio between 0 and 1 cd/m2 background luminance (Fig. 2C).
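The a- and b-wave amplitudes discussed above follow standard ERG measurement conventions (a-wave measured from the pre-flash baseline to the trough; b-wave from the a-wave trough to the b-wave peak). As a minimal illustration of these conventions (not the authors' actual analysis code), the amplitudes and the b:a ratio could be extracted from a sampled trace as follows; the function name, analysis window, and synthetic trace are our own assumptions:

```python
import numpy as np

def on_response_amplitudes(trace, fs, flash_onset_s, window_s=0.2):
    """Measure a-wave (baseline to trough) and b-wave (trough to peak)
    amplitudes from a single averaged ERG trace.

    trace: 1-D array of voltages; fs: sampling rate (Hz);
    flash_onset_s: time of flash onset (s); window_s: assumed analysis window.
    """
    onset = int(round(flash_onset_s * fs))
    baseline = trace[:onset].mean()            # pre-flash baseline level
    seg = trace[onset:onset + int(round(window_s * fs))]
    a_idx = int(np.argmin(seg))                # a-wave trough (cornea-negative)
    a_amp = baseline - seg[a_idx]              # reported as a positive amplitude
    b_amp = seg[a_idx:].max() - seg[a_idx]     # trough-to-peak b-wave amplitude
    return a_amp, b_amp, b_amp / a_amp

# Synthetic example trace: flat baseline, -100 uV a-wave trough, +150 uV b-wave peak
fs = 1000.0
trace = np.concatenate([
    np.zeros(50),                    # 50 ms pre-flash baseline
    np.linspace(0, -100, 21)[1:],    # descent to the a-wave trough
    np.linspace(-100, 150, 51)[1:],  # rise to the b-wave peak
    np.full(100, 150.0),             # sustained plateau during the flash
])
a_amp, b_amp, ratio = on_response_amplitudes(trace, fs, flash_onset_s=0.05)
```

With this synthetic trace the function returns a 100 uV a-wave, a 250 uV b-wave, and a b:a ratio of 2.5; the same calculation applied at each background luminance would reproduce the ratio plotted in Fig. 2C.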
This suggests that the postreceptoral components of the rod ON pathway are suppressed at dimmer background luminances than the negative waveform (the PIII response, which originates directly from rod photoreceptors). Variation in mean (± standard deviation) On-response amplitudes with stimulus strength and background luminance. A Mean a-wave amplitude. B Mean b-wave amplitude. C Ratio of b:a-wave amplitudes. Different colors and symbols are used to denote the different background luminances. There was a similar phenomenon with the change in the underlying negativity of the waveform. We measured the 'drift' amplitude, which we defined as the absolute change between the peak of the b-wave and the amplitude at flash cessation prior to the d-wave (see Fig. 3A). The increase in drift amplitudes coupled with a decline in b-wave amplitudes resulted in an increase in the drift:b-wave ratio between 0 and 1 cd/m2 background luminance (Figs. 3B-C & 4). There was also a striking difference in the drift:b-wave ratio between the two brightest background luminances (Fig. 3C; 10 cd/m2 background in yellow and 42 cd/m2 background in cyan), which may indicate continued rod contributions to the 10 cd/m2 background ERG. Furthermore, the drift:b ratio peaked between 1 and 10 cd/m2 background luminance, depending on stimulus strength, and declined with the strongest background luminance (Fig. 4). Variation in drift with stimulus strength. A Demonstration of the measurement of 'drift': the change in amplitude between the peak of the b-wave and the amplitude at flash cessation, immediately prior to the Off-response component. See the inset for another example with a stronger background luminance condition. B Mean (± standard deviation) drift amplitude. C Mean (± standard deviation) drift:b-wave ratio. Different colors and symbols are used to denote the different background luminances. Variation in drift with background luminance for stimulus luminances of 25 and 1,250 cd/m2.
A Mean (± standard deviation) drift amplitude. B Mean (± standard deviation) drift:b-wave ratio. Different colors and symbols are used to denote stimulus strength. To better characterize photoreceptor contributions to the On-response, we fit an equation described by Birch & Hood to the leading edge of the rod a-wave for background luminances of no background, 0.01, 0.1 and 1 cd/m2. For these calculations, the response recorded on the 42 cd/m2 background was first subtracted to remove cone components (essentially the same photopic subtraction used when modeling the a-wave of the short-flash ERG). Model parameters from short-flash ERGs are also included for comparison in Fig. 5. The model demonstrated similar changes in the receptor maximum response, Rmax (Fig. 5 – first row), and sensitivity, S (Fig. 5 – second row), parameters with increasing background luminance up to 0.1 cd/m2, although there was a reduction in Rmax with the 1 cd/m2 background. Rmax and S parameters of the short-flash and On–Off scotopic ERGs were similar. However, we did find a substantial difference in the time delay parameter (td), which was higher for the On–Off than for the short-flash ERG (Fig. 5 – third row). This suggests that there is a part of the flash-offset response contributing to the a-wave of the short-flash ERG that is not present with the longer flash duration. Changes in a-wave model parameters for short-flash and On–Off responses with no background (dark-adapted) and 0.01, 0.1 and 1 cd/m2 background luminance. A Mean (± standard deviation) Rmax. B Mean (± standard deviation) sensitivity (S). C Mean (± standard deviation) time delay (td). 'SF' and 'LF' are used as abbreviations for short flash and long (On–Off) flash, respectively, and the corresponding background luminance is provided after these.
Parameters are derived using the Birch and Hood formula following subtraction of fully light-adapted responses to reveal the rod response. We further analyzed the differences in a-wave model parameters using a repeated-measures ANOVA to compare across the different luminance conditions. Between-group comparisons for all three parameters (Rmax, S, and td) were significantly different (at a p < 1 × 10⁻⁴ level). We then performed a post-hoc Tukey HSD test on each parameter. For the Rmax parameter, the amplitude for the 1 cd/m2 condition was significantly lower than for the other four luminance conditions (p < 0.05). For the S parameter, the sensitivity for both the 0.1 and 1 cd/m2 conditions was significantly lower than for the other three luminance conditions (p < 0.05). For the td parameter, the time delay for the short-flash condition was significantly shorter than for the other four luminance conditions (p < 0.05). To further interrogate postreceptoral rod pathways, particularly the large negative response seen most obviously in the responses with a 1 cd/m2 background, we isolated the PII response by subtracting the modeled PIII process (as shown in Fig. 6A). With increasing background luminance the isolated PII response decreased (Figs. 6B-F). This demonstrated that the negative shape of the waveform with the 1 cd/m2 background luminance (which was also present to a lesser extent with the 0.1 cd/m2 background) was mainly attributable to the rod-driven PIII component. Furthermore, the negative 'drift' (as defined above) present at all backgrounds was essentially eliminated in the isolated PII response, which further supports that the negativity present with sustained flashes (as compared to short-flash stimuli) is driven by sustained rod activation. Isolating the PII response.
A The rod modeled PIII response (black dashed line) is subtracted from the rod response (obtained following subtraction of the cone (photopic) response) (black line) to give the PII response (shown in A for the 180 cd/m2 flash stimulus on the 0 cd/m2 background). The method is applied to the ERG at 0 (no background), 0.01, 0.1, and 1 cd/m2 backgrounds with stimulus strengths of B 2.5 cd/m2. C 25 cd/m2. D 180 cd/m2. E 500 cd/m2. F 1,250 cd/m2. Representative results are shown, and background luminance is denoted by different colors and line styles as indicated in the legend in B. Examination of the isolated PII component revealed further differences with increases in both stimulus strength and background luminance. Although the peak amplitudes of both the 0 (no background) and 0.01 cd/m2 background recordings were similar across all tested stimuli, there was an evident shift to a shorter peak time with the brighter background. This was also apparent in the 0.1 and 1 cd/m2 background recordings, in addition to substantial declines in amplitude. We also observed a narrowing (reduced time between the beginning of the leading slope and the return to baseline) of the isolated PII response with increasing background luminance, which may reflect a shift in rod signaling pathways. The light-adapted Off-response in the dog has a small positive component (d-wave) but is predominantly negative. The amplitude of this response scales with stimulus strength and background luminance (see Fig. 1). This shape of Off-response is similar to that of the rat and in contrast to that of primates, where a more prominent positive d-wave is present. This difference is probably due to differences between the species in the relative contributions of the ON and OFF pathways. Hyperpolarization of ON bipolar cells at cessation of the light stimulus, ending their depolarizing contribution, probably drives the negative Off-response in the On–Off ERG.
The relatively small positive d-wave in the dog Off-response suggests that OFF bipolar cell contributions towards shaping the waveform are relatively small in this species. When extrapolated to the On-response, it also seems likely that OFF bipolar cells make less of a contribution to the photopic a-wave of the dog than they do in primates [29]. These findings are further supported by a previously characterized canine model of complete CSNB, wherein the light-adapted On–Off ERG in affected dogs demonstrated a substantially more positive Off-response compared to control dogs (thus indicating a relatively greater role of the ON pathway in the normal canine retina) [46]. The amplitude and shape of the On–Off ERG change with increases in background luminance. When moving from dark-adapted to partially light-adapted conditions, the photoreceptor-driven a-wave declines in amplitude significantly more slowly than the postreceptoral b-wave of the On-response. This disparity results in a 'negative type' ERG appearance (a negative ERG being one where the b-wave is smaller than the a-wave). Similar findings have been reported in human studies of the short-flash ERG and have been posited to reflect a mechanism for maintenance of retinal sensitivity across a wide range of luminances [47,48,49,50]. The findings reported here suggest that a similar process occurs in dogs. We applied the Birch & Hood model of the rod-driven a-wave to parameterize the PIII response with no background and at low background luminances. The most significant difference in model parameters compared to the regular flash ERG was seen in the time delay parameter td, which was greater in the On–Off ERG. There were smaller decreases in the amplitude parameter Rmax that mirrored the changes in a-wave amplitude with increasing background luminance.
As the dog ERG has relatively small photopic amplitudes (compared to humans), our results support the conclusion that the responses are primarily rod-driven, and a reduction in rod responses leads to a commensurate decrease in a-wave amplitudes. Additionally, the time delay parameter is effectively the time from flash onset to the beginning of the a-wave; so while there is likely some component of the Off-response that drives the initial slope of the a-wave, it appears to make relatively small contributions to the amplitude of this response. Using the calculated a-wave model parameters, we subtracted the PIII component from the waveform to isolate the PII component for the no background, 0.01, 0.1, 1 and 10 cd/m2 backgrounds. The PII component predominantly results from activity in the ON pathway. This calculation eliminated the large negativity of the waveform in mesopic background conditions. This suggests that the decline in amplitude of the isolated PII component (which was relatively much greater than the decline in PIII amplitude) is likely attributable at least in part to saturation kinetics that maintain retinal sensitivity across increasing background luminance (as discussed above). However, both the waveform narrowing and the shift to earlier peak times suggest changes in rod signaling pathways with shifts from scotopic to mesopic luminance conditions. The changes in the isolated PII response may indicate rod-driven contributions to the 'push–pull' mechanism describing the factors that affect the b-wave (in scotopic and mesopic conditions, mainly driven by ON bipolar cell responses but with influences on the amplitude and shape from OFF bipolar cell responses) [15, 21]. The reduction in activity in the rod bipolar cell pathway while there is still a robust rod PIII response may also represent the switching of rod signaling from rod bipolar cells to direct contact with cones or cone bipolar cells at mesopic light levels.
This process is thought to be important in preventing saturation of inner retinal pathways by rod responses, thus expanding the range of luminances to which the retina can respond [18,19,20]. Although not performed here, drug dissection studies could be used to further assess the contributions from different pathways to the On–Off ERG in the dog, particularly in determining the generators of the drift component. Evidence from drug dissection studies of the On–Off ERG in primates suggests that the positive b-wave is mainly driven by ON bipolar cells whereas the positive d-wave is driven predominantly by the cessation of OFF bipolar cell activity (using L-2-amino-4-phosphonobutyric acid and cis-2,3-piperidine dicarboxylic acid (PDA) to block the activity of the ON and OFF bipolar cells, respectively) [21]. From a comparison of primate and rodent drug dissection studies, it is plausible that the difference in the shape of the Off-response between the species is due to differences in the relative contributions of the ON and OFF pathways. In fact, the On-response appears to be largely similar in both monkey and rat (albeit with some difference in the response shape), whereas the PDA-sensitive component appears to drive the difference between the ERGs of these species, with a very strong corneal-negative component at flash onset and a corneal-positive component at flash offset in the primate that is not seen in the rodent [27]. In this study, we designed a protocol with increasing background luminance using long-duration flashes to characterize changes in rod contributions to the On-response of the canine ERG. We showed that the positive PII response saturates at dimmer background luminances than the rod-driven PIII; this may be needed to maintain retinal sensitivity with shifts from scotopic to mesopic lighting.
Furthermore, we demonstrated that the rod-driven PIII is responsible for the large negativity present in the On–Off ERG waveforms recorded with mesopic background conditions. In addition, the shape of the isolated PII indicates potential changes in rod signaling pathways with increasing background luminance. Overall, this study suggests a significant role, and possible changes in signaling, of rod pathways in retinal responses in mesopic background conditions that merit future investigation in dogs and other species. Phenotypically normal adult dogs (between 8 months and 2 years of age) that were laboratory beagle crossbreeds from a colony maintained at Michigan State University were used in this study. The 6 animals (2 males, 4 females) were from a breeding colony used in other unrelated studies. They were housed under 12 h:12 h light:dark cycles. The dogs were tested on a single occasion and subsequently returned to the breeding colony for other unrelated studies. No dogs were excluded from the study. General anesthesia was induced by intravenous propofol (4–6 mg/kg, PropoFlo, Abbott Laboratories, North Chicago, IL, USA). The animals were intubated and subsequently maintained under anesthesia with isoflurane (IsoFlo, Abbott Laboratories, North Chicago, IL, USA; 2–3.5% in a 1–2 L/min oxygen flow, via a rebreathing circle system for dogs over 10 kg and via a Bain system for dogs under 10 kg).
Electroretinography (ERG)
General procedures for ERGs were described previously [51]. Pupils were dilated with tropicamide (Tropicamide Ophthalmic Solution USP 1%, Falcon Pharmaceuticals Ltd., Fort Worth, TX, USA). A monopolar gold-ringed contact lens electrode (ERG-Jet electrode, Fabrinal Eye Care, La Chaux-De-Fonds, CH) was used, and for reference and grounding, platinum needle skin electrodes (Grass Technologies, Warwick, RI, USA) were placed 5 mm lateral to the lateral canthus and over the occiput, respectively.
ERGs were recorded using an Espion E2 Electrophysiology system with ColorDome Ganzfeld (Diagnosys LLC, Lowell, MA).
ERG protocol
The ERG protocol was designed prior to study onset. A constant flash duration of 250 ms was used with progressively stronger white-light background luminances (0, or scotopic, 0.01, 0.1, 1, 10, and 42 cd/m2), with 5 different white-light stimuli tested at each background (2.5, 25, 180, 500, and 1,250 cd/m2), giving a total of 30 steps. Each flash was presented at one-second intervals on a dark background and repeated to generate an averaged response detectable against background electrical noise. Dogs were dark adapted for 1 h prior to initiating the protocol, and were allowed 5 min of adaptation to each subsequent increase in background luminance. A standard short-duration flash (< 4 ms) protocol was also performed on a separate day, with flash stimuli ranging from 0.0002 to 23 cd.s/m2 for the dark-adapted ERG and 0.01 to 23 cd.s/m2 for the light-adapted (42 cd/m2 white background light) ERG; dogs were dark-adapted for 1 h and light-adapted for 10 min, respectively.
Rod-driven a-wave model
We calculated parameters for the rod-driven a-wave after subtracting photopically matched ERG waveforms [52]. We fit the following equation described by Birch & Hood to the leading edge of the rod a-wave [53, 54]: $$R(I,t)=\left(1-\exp\left[-I\cdot S\cdot(t-t_{d})^{2}\right]\right)\cdot R_{max}\quad\text{for}\ t>t_{d}$$ The amplitude R is a function of the retinal luminance I and the time t after flash onset, and td is a brief time delay. S is a sensitivity factor and Rmax is the maximum amplitude of the response. A-wave model parameters were analyzed using a repeated-measures ANOVA with a post-hoc Tukey HSD test performed to determine significant between-group comparisons.
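To make the model concrete, the sketch below fits the Birch & Hood equation above to a synthetic rod a-wave leading edge. The study's analysis used lmfit; this illustration instead uses SciPy's curve_fit, which also applies the Levenberg–Marquardt algorithm by default, and then computes the least-squares goodness-of-fit parameter described in the Curve fitting section. All numerical values here are illustrative assumptions, not the study's fitted parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def rod_a_wave(t, R_max, S, t_d, I):
    """Birch & Hood model: R(I,t) = (1 - exp(-I*S*(t - t_d)^2)) * R_max for t > t_d."""
    t = np.asarray(t, dtype=float)
    resp = np.zeros_like(t)                     # zero response before the time delay
    late = t > t_d
    resp[late] = (1.0 - np.exp(-I * S * (t[late] - t_d) ** 2)) * R_max
    return resp

# Synthetic leading edge (illustrative values: R_max = 150, S = 4, t_d = 4 ms)
I = 180.0                                       # stimulus luminance, cd/m2
t = np.linspace(0.0, 0.05, 500)                 # seconds after flash onset
rng = np.random.default_rng(1)
clean = rod_a_wave(t, 150.0, 4.0, 0.004, I)
noisy = clean + rng.normal(0.0, 2.0, t.size)    # add recording noise

# Fit R_max, S and t_d with the stimulus luminance I held fixed
popt, _ = curve_fit(lambda t, R_max, S, t_d: rod_a_wave(t, R_max, S, t_d, I),
                    t, noisy, p0=[100.0, 1.0, 0.002])
R_max_fit, S_fit, t_d_fit = popt

# Goodness of fit: the least-squares parameter (values < 0.25 taken as a good fit)
pred = rod_a_wave(t, *popt, I)
lsq = np.sum((noisy - pred) ** 2) / np.sum((noisy - noisy.mean()) ** 2)

# Isolating PII would then subtract the modeled PIII from the
# photopically subtracted waveform, e.g. pii = waveform - pred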
Curve fitting
We calculated parameters using the lmfit curve-fitting program in the Python 3.6 environment [55], using the Levenberg–Marquardt algorithm to calculate optimal parameter values via least-squares minimization [56]: $$f(\mathbf{X}_{i},\boldsymbol{\beta}+\boldsymbol{\delta})\approx f(\mathbf{X}_{i},\boldsymbol{\beta})+\mathbf{J}_{i}\,\boldsymbol{\delta}$$ where Ji is the gradient of f with respect to β. The parameter δ that minimizes the sum of squared residuals is calculated iteratively until final model parameters are obtained [57, 58]. We determined model goodness-of-fit with the least-squares parameter, with values less than 0.25 considered a good fit [54]: $$lsq=\frac{\sum_{i=1}^{n}\left(y_{i}-f(X_{i},\boldsymbol{\beta})\right)^{2}}{\sum_{i=1}^{n}\left(y_{i}-\mathrm{mean}(y)\right)^{2}}$$
Isolating rod-driven PII
The parameters calculated from the a-wave model were used to define the rod-driven PIII. This modeled response was then subtracted from the photopically subtracted waveforms described above to isolate the postreceptoral PII process for responses with a measurable a-wave (0, 0.01, 0.1, and 1 cd/m2 background light levels). All data generated or analyzed during this study are included in this published article. The datasets for each individual dog during the current study are available from the corresponding author on reasonable request. Burns ME, Baylor DA. Activation, deactivation, and adaptation in vertebrate photoreceptor cells. Annu Rev Neurosci. 2001;24:779–805. Arshavsky VY, Lamb TD, Pugh EN. G Proteins and Phototransduction. Annu Rev Physiol. 2002;64(1):153–87. Hagins WA, Penn RD, Yoshikami S. Dark Current and Photocurrent in Retinal Rods. Biophys J. 1970;10(5):380–412. Fain GL, Matthews HR, Cornwall MC, Koutalos Y. Adaptation in vertebrate photoreceptors. Physiol Rev. 2001;81(1):117–51. Hood DC, Birch DG.
Phototransduction in human cones measured using the a-wave of the ERG. Vision Res. 1995;35(20):2801–10. Kraft TW, Schneeweis DM, Schnapf JL. Visual transduction in human rod photoreceptors. J Physiol. 1993;464:747–65. Thomas MM, Lamb TD. Light adaptation and dark adaptation of human rod photoreceptors measured from the a-wave of the electroretinogram. J Physiol. 1999;518(Pt 2):479–96. Euler T, Haverkamp S, Schubert T, Baden T. Retinal bipolar cells: elementary building blocks of vision. Nat Rev Neurosci. 2014;15(8):507–19. Nelson R, Kolb H. ON and OFF pathways in the vertebrate retina and visual system. The visual neurosciences 2004;1:260-278. Wässle H. Parallel processing in the mammalian retina. Nat Rev Neurosci. 2004;5(10):747–57. Duvoisin RM, Morgans C, Taylor W. The mGluR6 receptors in the retina: Analysis of a unique G-protein signaling pathway. Cellscience Rev. 2005;2(2):18. Thoreson WB. Kinetics of synaptic transmission at ribbon synapses of rods and cones. Mol Neurobiol. 2007;36(3):205–23. Rufiange M, Rousseau S, Dembinska O, Lachapelle P. Cone-dominated ERG luminance–response function: the Photopic Hill revisited. Doc Ophthalmol. 2002;104(3):231–48. Wali N, Leguire LE. The photopic hill: A new phenomenon of the light adapted electroretinogram. Doc Ophthalmol. 1992;80(4):335–42. Sieving PA, Murayama K, Naarendorp F. Push-pull model of the primate photopic electroretinogram: a role for hyperpolarizing neurons in shaping the b-wave. Vis Neurosci. 1994;11(3):519–32. Bloomfield SA, Dacheux RF. Rod vision: pathways and processing in the mammalian retina. Prog Retin Eye Res. 2001;20(3):351–84. Deans MR, Volgyi B, Goodenough DA, Bloomfield SA, Paul DL. Connexin36 is essential for transmission of rod-mediated visual signals in the mammalian retina. Neuron. 2002;36(4):703–12. Hornstein EP, Verweij J, Li PH, Schnapf JL. Gap-junctional coupling and absolute sensitivity of photoreceptors in macaque retina. J Neurosci Off J Soc Neurosci. 2005;25(48):11201–9. 
Schneeweis DM, Schnapf JL. Photovoltage of rods and cones in the macaque retina. Science. 1995;268(5213):1053–6. Verweij J, Dacey DM, Peterson BB, Buck SL. Sensitivity and dynamics of rod signals in H1 horizontal cells of the macaque monkey retina. Vision Res. 1999;39(22):3662–72. Bush RA, Sieving PA. A proximal retinal component in the primate photopic ERG a-wave. Invest Ophthalmol Vis Sci. 1994;35(2):635–45. Robson JG, Saszik SM, Ahmed J, Frishman LJ. Rod and cone contributions to the a-wave of the electroretinogram of the macaque. J Physiol. 2003;547(Pt 2):509–30. Cameron AM, Mahroo OAR, Lamb TD. Dark adaptation of human rod bipolar cells measured from the b-wave of the scotopic electroretinogram. J Physiol. 2006;575(Pt 2):507–26. Robson JG, Maeda H, Saszik SM, Frishman LJ. In vivo studies of signaling in rod pathways of the mouse using the electroretinogram. Vision Res. 2004;44(28):3253–68. Naarendorp F, Williams GE. The d-wave of the rod electroretinogram of rat originates in the cone pathway. Vis Neurosci. 1999;16(1):91–105. Xu X, Karwoski C. Current source density analysis of the electroretinographic d wave of frog retina. J Neurophysiol. 1995;73(6):2459–69. Lei B. The ERG of guinea pig (Cavis porcellus): comparison with I-type monkey and E-type rat. Doc Ophthalmol. 2003;106(3):243–9. Kondo M, Miyake Y, Horiguchi M, Suzuki S, Tanikawa A. Recording Multifocal Electroretinogram On and Off Responses in Humans. Invest Ophthalmol Vis Sci. 1998;39(3):7. Ueno S, Kondo M, Ueno M, Miyata K, Terasaki H, Miyake Y. Contribution of retinal neurons to d-wave of primate photopic electroretinograms. Vision Res. 2006;46(5):658–64. Granit R. The components of the retinal action potential in mammals and their relation to the discharge in the optic nerve. J Physiol. 1933;77(3):207–39. Müller F, Kaupp UB. Signal transduction in photoreceptor cells. Naturwissenschaften. 1998;85(2):49–61. Green DG.
Scotopic and photopic components of the rat electroretinogram. J Physiol. 1973;228(3):781–97. Granit R, Therman PO. Excitation and inhibition in the retina and in the optic nerve. J Physiol. 1935;83(3):359–81. Wündsch L, Lützow AV. The effect of aspartate on the ERG of the isolated rabbit retina. Vision Res. 1971;11(10):1207–8. Vinberg FJ, Strandman S, Koskelainen A. Origin of the fast negative ERG component from isolated aspartate-treated mouse retina. J Vis. 2009;9(12):9–9. Arden GB. Voltage gradients across the receptor layer of the isolated rat retina. J Physiol. 1976;256(2):333–60. Hanawa I, Tateishi T. The effect of aspartate on the electroretinogram of the vertebrate retina. Experientia. 1970;26(12):1311–2. Veske A, Nilsson SE, Narfström K, Gal A. Retinal dystrophy of Swedish briard/briard-beagle dogs is due to a 4-bp deletion in RPE65. Genomics. 1999;57(1):57–61. Kondo M, Das G, Imai R, Santana E, Nakashita T, Imawaka M, et al. A Naturally Occurring Canine Model of Autosomal Recessive Congenital Stationary Night Blindness. PLoS ONE. 2015;10(9):e0137072. Somma AT, Moreno JCD, Sato MT, Rodrigues BD, Bacellar-Galdino M, Occelli LM, et al. Characterization of a novel form of progressive retinal atrophy in Whippet dogs: a clinical, electroretinographic, and breeding study. Vet Ophthalmol. 2017;20(5):450–9. Marinho LLP, Occelli LM, Pasmanter N, Somma AT, Montiani-Ferreira F, Petersen-Jones SM. Autosomal recessive night blindness with progressive photoreceptor degeneration in a dog model. Invest Ophthalmol Vis Sci. 2019;60(9):465–465. Petersen-Jones SM, Occelli LM, Winkler PA, Lee W, Sparrow JR, Tsukikawa M, et al. Patients and animal models of CNGβ1-deficient retinitis pigmentosa support gene augmentation approach. J Clin Invest. 2018;128(1):190–206. Occelli LM, Schön C, Seeliger MW, Biel M, Michalakis S, Petersen-Jones S, et al.
Gene Supplementation Rescues Rod Function and Preserves Photoreceptor and Retinal Morphology in Dogs, Leading the Way Towards Treating Human PDE6A-Retinitis Pigmentosa. Hum Gene Ther. 2018;28(12):1189–201. Petersen-Jones SM, Komáromy AM. Dog models for blinding inherited retinal dystrophies. Hum Gene Ther Clin Dev. 2015;26(1):15–26. Pasmanter N, Petersen-Jones SM. A review of electroretinography waveforms and models and their application in the dog. Vet Ophthalmol. 2020;23(3):418–35. Oh A, Loew ER, Foster ML, Davidson MG, English RV, Gervais KJ, Herring IP, Mowat FM. Phenotypic characterization of complete CSNB in the inbred research beagle: how common is CSNB in research and companion dogs? Doc Ophthalmol. 2018;137(2):87–101. Donner K. Noise and the absolute thresholds of cone and rod vision. Vision Res. 1992;32(5):853–66. Dunn FA, Doan T, Sampath AP, Rieke F. Controlling the gain of rod-mediated signals in the mammalian retina. J Neurosci Off J Soc Neurosci. 2006;26(15):3959–70. Frishman LJ, Robson JG, Reddy MG. Effects of background light on the human dark-adapted electroretinogram and psychophysical threshold. J Opt Soc Am A. 1996;13(3):601. Shapley R, Enroth-Cugell C. Chapter 9 Visual adaptation and retinal gain controls. Prog Retin Res. 1984;3:263–346. Annear MJ, Bartoe JT, Barker SE, Smith AJ, Curran PG, Bainbridge JW, et al. Gene therapy in the second eye of RPE65-deficient dogs improves retinal function. Gene Ther. 2011;18(1):53–61. Brigell M, Jeffrey BG, Mahroo OA, Tzekov R. ISCEV extended protocol for derivation and analysis of the strong flash rod-isolated ERG a-wave. Doc Ophthalmol. 2020;140(1):5–12. Hood DC, Birch DG. Assessing abnormal rod photoreceptor activity with the a-wave of the electroretinogram: Applications and methods. Doc Ophthalmol. 1996;92(4):253–67. Hood DC, Birch DG. The A-wave of the human electroretinogram and rod receptor function. Invest Ophthalmol Vis Sci. 1990;31(10):2070–81. Van Rossum G, Drake FL. Python 3 Reference Manual.
Scotts Valley: CreateSpace; 2009. Newville M, Stensitzki T, Allen DB, Ingargiola A. LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python. Zenodo; 2014 [cited 2020 Jan 20]. Available from: https://zenodo.org/record/11813 Levenberg K. A method for the solution of certain non-linear problems in least squares. Q Appl Math. 1944;2(2):164–8. Marquardt DW. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J Soc Ind Appl Math. 1963;11(2):431–41. The authors would like to thank Janice Querubin (MSU RATTS) for her help with premedication and induction of anesthesia and general care for the animals in this study. The Donald R. Myers and William E. Dunlap Endowment for Canine Health to Simon Petersen-Jones provided support for the work done in this study including animal husbandry cost and care, and personnel working on the study. That funder had no role in the design of the study, the collection, analysis, and interpretation of data nor in writing the manuscript. Department of Small Animal Clinical Sciences, College of Veterinary Medicine, Michigan State University, 736 Wilson Road, D208, East Lansing, MI, USA Nate Pasmanter & Simon M. Petersen-Jones Nate Pasmanter Simon M. Petersen-Jones NP has participated in the conception and design of the project, in all acquisition and analysis of data, as well as interpretation of data and manuscript writing. SPJ has participated in design of the project, interpretation of data and manuscript writing. All have approved the submitted version and all have agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution. Correspondence to Simon M. Petersen-Jones. Ethics approval: AUF PROTO202000013 and consent to participate: Not applicable. 
This study was carried out in compliance with the ARRIVE guidelines. All procedures were performed in accordance with the ARVO statement for the Use of Animals in Ophthalmic and Vision Research and approved by the Michigan State University Institutional Animal Care and Use Committee. The authors have no competing interests. Pasmanter, N., Petersen-Jones, S.M. Characterization of scotopic and mesopic rod signaling pathways in dogs using the On–Off electroretinogram. BMC Vet Res 18, 422 (2022). https://doi.org/10.1186/s12917-022-03505-z Long flash
CommonCrawl
Evaluating the efficacy and safety of the myrtle (Myrtus communis) in treatment and prognosis of patients suspected to novel coronavirus disease (COVID-19): study protocol for a randomized controlled trial

Maryam Azimi & Fatemeh Sadat Hasheminasab (ORCID: orcid.org/0000-0002-5709-7670)

Trials volume 21, Article number: 978 (2020)

Since December 2019, an outbreak of coronavirus pneumonia observed in China has spread rapidly throughout the world. Many trials on this disease are now underway, evaluating the efficacy of various therapeutic remedies, including chemical and natural agents as well as non-pharmacological methods such as acupuncture. This study aims to investigate the effect of M. communis fruit in the treatment of COVID-19. We are performing an open-label randomized controlled trial on outpatients clinically suspected of COVID-19, aged 18–65 years, with mild to moderate symptoms and without respiratory distress. Patients in both groups (M. communis and control) receive conventional therapy, but those in the M. communis group receive the M. communis preparation in addition. The intervention will continue for 5 days, and the study outcomes, including clinical status as well as mortality rate and adverse effects, will be measured for up to 14 days. The protocol describes the design of an ongoing randomized controlled trial to establish evidence for the use of an aqueous extract of M. communis fruit in clinically suspected COVID-19 and to identify any safety concerns. The trial was registered at the Iranian Registry of Clinical Trials website under the code IRCT20180923041093N3 on March 28th, 2020 (https://www.irct.ir/trial/46721). The results will be disseminated through manuscript publications and presentations at scientific meetings.
The outbreak of pneumonia caused by the novel coronavirus (COVID-19) was first observed in Wuhan, Hubei province, China, in December 2019; it quickly spread across the country and then throughout almost all of the world, becoming a global concern as a pandemic [1, 2]. Infection with the novel coronavirus spans a wide range, from asymptomatic infection to mild upper respiratory tract disease, and may progress to severe pneumonia and even death [3, 4]. At the onset of clinical symptoms, most patients complain of fever, cough, shortness of breath, muscular pain, and fatigue. Some patients also experience loss of taste and smell, headaches, or diarrhea. Patients with mild symptoms may have only fever and fatigue, whereas in severe cases patients experience shortness of breath, hypoxia, and acute respiratory distress syndrome, which may include severe metabolic acidosis, coagulation disorders, and septic shock. The mortality rate varies across age groups, conditions, and underlying diseases, but it is generally not high compared to similar diseases; nevertheless, because of the high transmissibility of this disease, mortality and economic costs are significant [5,6,7]. Practitioners with different therapeutic approaches, including classic medicine, herbal medicine, acupuncture, Chinese medicine, and Persian medicine, intend to find a solution to cure or lessen the signs and symptoms of this contagious disease [7, 8]. A survey of the clinical trials registered on different World Health Organization Primary Registries shows notable attention to complementary and alternative medicine (CAM) for controlling the novel coronavirus pneumonia [7]. Previously, considerable studies had been designed in the field of CAM for prevention, treatment, and rehabilitation of severe acute respiratory syndrome (SARS) and influenza [9]. The aqueous extract of M. communis fruit in combination with sugar as a M.
communis syrup is an ancient remedy for pneumonia recommended in Persian medicine manuscripts. Heart tonic, lung tonic, antitussive, and anti-diarrheal activities are among the properties attributed to M. communis [10, 11]. Recent studies have demonstrated the antioxidant, anti-inflammatory, antiviral, and antimicrobial properties of this herb. Considering the pharmacological activities of M. communis in addition to its traditional endorsement, this remedy seems a promising candidate for clinical trials assessing its efficacy in controlling this disease.

Study objectives and hypothesis

The main purpose of the proposed trial is to determine whether the M. communis preparation can accelerate the healing of patients clinically suspected of COVID-19 pneumonia and decrease hospital admissions and other related complications. Primary hypothesis: taking the M. communis fruit preparation in the first days of clinical suspicion of COVID-19 will reduce the signs and symptoms of the disease, decrease respiratory distress, and enhance well-being. The present trial is designed to address the uncertainties about the value of alternative therapy and herbal medicine for alleviating the symptoms and improving the prognosis of this disease, a question raised by both the public and health professionals during the current pandemic. The protocol of this study was approved by the Local Medical Ethics Committee of Kerman University of Medical Sciences under the approval code IR.KMU.REC.1399.015; it is also registered at the Iranian Registry of Clinical Trials website under the code IRCT20180923041093N3. This study will be conducted in accordance with the guidelines of the Declaration of Helsinki (2008 revision). The procedure will be explained to patients meeting the inclusion criteria, and each participant will voluntarily sign an informed written consent.
Data and source documents will be archived in case of any need for monitoring or inspection by the Ethics Committee. The researchers are only allowed to publish the overall and group-level results of this research, without mentioning patients' names or details. The Research Ethics Committee can access patients' information to safeguard their rights.

Patient recruitment

Patients clinically suspected of COVID-19 pneumonia are enrolled as suspected cases if they have either an epidemiological history plus two clinical manifestations, or three clinical manifestations without an epidemiological history. Other inclusion criteria are age 18–65 years, absence of respiratory distress, and being a candidate for outpatient care and home isolation. Exclusion criteria are pregnancy, lactation, allergy to M. communis, diabetes, hypertension, hepatic disorder, renal disorder, and recent consumption of herbal drugs. The criteria for discontinuing the trial are allergy or any other adverse effect attributed to M. communis, use of other herbs during the project, and irregular consumption of the M. communis preparation. This open-label randomized controlled clinical trial will be conducted to determine the effect of the M. communis preparation on subjects clinically suspected of COVID-19 pneumonia, with a 1:1 allocation ratio.

Study setting

This study will be conducted in the referral clinic of Afzalipour Hospital, affiliated with Kerman University of Medical Sciences, Kerman, southeastern Iran. A trained general physician will visit the patients; in case of a clinical diagnosis of COVID-19 pneumonia, they will be referred to the researchers, and patients who meet the eligibility criteria will be invited into the study. Patient recruitment started on April 2, 2020.
Randomization and allocation

All eligible patients will be randomly allocated to intervention and control groups. A biostatistician generated a randomization list via a blocked randomization method (non-stratified, four patients per block) using Microsoft Excel® software. A secretary assigns participants to groups via sequentially numbered opaque envelopes. The researchers obtain written informed consent from the participants, who are then randomly divided into the intervention (M. communis) and control groups. Patients in both groups are permitted only the conventional treatment indicated in the fifth edition of the novel coronavirus pneumonia guideline of the Iranian Ministry of Health and Medical Education. According to this guideline, the allowed medication for outpatients who are not high risk is supportive care such as acetaminophen; for high-risk patients, it is hydroxychloroquine sulfate or chloroquine phosphate. It should be noted that high-risk patients are excluded from this study. Patients in the intervention group take the M. communis preparation in addition to classic medication: they receive packets containing M. communis fruit and sugar. Each day, they gently boil the contents of one pack (10 g of M. communis fruit and 10 g of sugar) in 3 glasses of water until 2 glasses of liquid remain, strain it, and drink 1 glass in the morning and 1 glass in the evening, for 5 days. Trained assessors collect data and record them on prepared forms. Participant compliance will be evaluated via a telephone survey recording use of the study medication and any side effects. A team from the Vice-Chancellor for Research monitors the process. The test schedule and procedures are provided in Table 1.
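The blocked randomization described above (1:1 allocation, non-stratified blocks of four) can be sketched as follows. This is an illustrative assumption, not the actual Excel procedure the biostatistician used; only the block size and group labels come from the protocol.

```python
import random

def blocked_randomization(n_blocks, block_size=4,
                          groups=("M. communis", "control"), seed=None):
    """Generate a 1:1 allocation list in permuted blocks.

    Every block contains an equal number of assignments to each group,
    shuffled independently, so the arms never drift apart by more than
    half a block. (Illustrative sketch, not the study's actual code.)
    """
    assert block_size % len(groups) == 0
    rng = random.Random(seed)
    allocation = []
    for _ in range(n_blocks):
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation

# 18 blocks of 4 cover the initial sample size of 70 (35 per arm) with 2 spare slots.
allocation_list = blocked_randomization(n_blocks=18, seed=2020)
```

A fixed seed makes the list reproducible; in practice the list would be generated once and sealed in the numbered opaque envelopes described above.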
Table 1 The test schedule and procedures for suspected COVID-19 patients participating in the study

Ancillary and post-trial care

Patients are followed up directly for 9 days after the intervention. They can contact the researchers for up to one month afterward in case of any adverse effects, and the researchers are responsible for providing treatment to resolve any side effects caused by the intervention. The outcomes will be determined at different time points (0, 1, 2, 3, 4, 7, and 14 days after the intervention), with the primary assessment on the 7th day after the intervention.

The primary outcome and its method of measurement is as follows:

Cough (severity), via the Fisman Cough Severity Score, graded on five points (0, 1, 2, 3, 4) from 0 (no cough) to 4 (severe cough with chest discomfort) [12]

The secondary outcomes include the following:

Temperature, via thermometer (degrees centigrade)
Myalgia, via visual analog scale (VAS) [13]
Weakness, via VAS
Cough (frequency), via the Fisman Cough Severity Score
Respiratory rate (breaths per minute)
Hospital admission (%)
Taste and smell disorder (%)
Mortality rate (%)

Due to the lack of a similar study, the primary sample size was initially set at 70 (35 in each arm) [14]. An interim analysis will then be performed, and the final sample size will be calculated from the comparison of parameters (mean ± standard deviation of cough severity) between the intervention groups on the 7th day after intervention, using the following formula:

$$ n=\frac{{\left({Z}_{1-\alpha /2}+{Z}_{1-\beta}\right)}^2\left({S}_1^2+{S}_2^2\right)}{{\left({\mu}_1-{\mu}_2\right)}^2} $$

Type I (α) and type II (β) errors will be set at 0.05 and 0.1, respectively. If the primary sample size is adequate, the study will be finished; if it is inadequate, the study will be continued (under the supervision of the ethics committee of Kerman University of Medical Sciences).
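The sample-size formula above can be evaluated directly with the standard normal quantile function. In the sketch below, the means and standard deviations of cough severity are invented placeholders, since the real values come only from the interim analysis; the α = 0.05 and β = 0.1 defaults are taken from the protocol.

```python
import math
from statistics import NormalDist

def sample_size_per_group(mu1, mu2, s1, s2, alpha=0.05, beta=0.10):
    """Per-group n = (z_{1-alpha/2} + z_{1-beta})^2 * (s1^2 + s2^2) / (mu1 - mu2)^2,
    rounded up to the next whole participant."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(1 - beta)
    return math.ceil(z ** 2 * (s1 ** 2 + s2 ** 2) / (mu1 - mu2) ** 2)

# Hypothetical interim estimates: mean cough severity 2.0 vs 1.2, SD 1.1 in each arm.
n_per_arm = sample_size_per_group(2.0, 1.2, 1.1, 1.1)
print(n_per_arm)  # -> 40 per group under these invented inputs
```

If the computed per-arm n falls at or below the initial 35, the primary sample size is adequate and the study stops at 70; otherwise recruitment continues.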
Only the data of participants who complete the follow-up will be considered. Their demographic data, including gender and age, will be compared between the two groups using the chi-square test. To compare the changes in the symptoms experienced by the patients in the two groups at 7 time points (on enrollment and at 1, 2, 3, 4, 7, and 14 days), repeated-measures ANOVA will be applied. The independent-samples t-test will also be used for comparing the changes between the two groups. Differences between groups (M. communis and control) will be reported as means with 95% confidence intervals, and p < 0.05 will be considered significant. The statistical analysis will be performed with SPSS 23.

Definition of the end of the study

The end of the study will be the last patient's last visit. However, the Vice-Chancellor for Research and the ethics committee of Kerman University of Medical Sciences supervise the trial and make the final decision to terminate the study. There are no legal, ethical, or security issues related to the recording, collection, storage, processing, and dissemination of data for this trial. We will not generate any sensitive data, nor will we sign any confidentiality contract. All data will be archived for up to 10 years after the study.

Potential weaknesses in study design

The protocol of this study was conceived when the PCR test was not adequately available for confirming the diagnosis of COVID-19 infection; on the other hand, a placebo-controlled setting would enhance the value of the study. Hence, another placebo-controlled study in confirmed COVID-19 patients is required and ethically justifiable. The lack of monitoring of laboratory data such as inflammatory factors, lymphocyte count, and renal and liver function, as well as the lack of follow-up chest radiography, are further limitations of this study.
Efficacy, safety, and availability are the key factors determining whether any treatment will be adopted. Previous successful experience with traditional Chinese medicinal herbs in managing SARS, Middle East respiratory syndrome (MERS), and influenza has led to various studies on different aspects and capacities of alternative medicine for alleviating COVID-19 [8]. The present research will provide evidence as to whether M. communis is safe and appropriate for treating COVID-19 pneumonia. Myrtus communis, as a potent antiviral agent, may be useful especially in the early stage of the disease [15]; on the other hand, its anti-inflammatory property can reduce the cytokine storm [16]. The efficacy of this herb against diarrhea has been shown in several studies [17]. In addition, based on ancient Persian medicine sources, M. communis extract can be recommended for pneumonia, especially when accompanied by cough and diarrhea [11]. Evaluating the safety of the plant is crucial, especially in a project as significant as COVID-19. Previous clinical trials of M. communis did not report any serious adverse effects [18]; moreover, in this trial M. communis is used for only 5 days, so there are no major concerns about the possible side effects of long-term consumption. In addition, the growing spread of this disease, its high costs of treatment and hospitalization, and resource constraints underscore the need to explore safe, effective, and inexpensive COVID-19 medications for shortening the disease course and improving prognosis. Hence, an evidence-based clinical trial to evaluate the effectiveness of M. communis in treating pneumonia induced by COVID-19 certainly has merit. Accordingly, we have described a clinical trial for the treatment of COVID-19 pneumonia using M. communis in Kerman, Iran.
Moreover, based on the results of this study, we will conduct a large-scale clinical trial with the aim of comprehensively assessing the efficacy and safety of M. communis against COVID-19 infection. The protocol version number is two, with revision ID 137273 and registration date June 3, 2020. Patient recruitment for this research began on April 18, 2020, and is expected to continue until July 30, 2020. Any modification, in coordination with the Ethics Committee, will be recorded on the Iranian Registry of Clinical Trials website. The results of this study have potential public health applicability. The target audience will be reached through oral presentations, publications, and seminars. The researchers must ensure that participants' privacy is maintained. Data and source documents will be archived in case of any need for monitoring or inspection by the Ethics Committee. At the end of the study, participants will be able to obtain a copy of the trial results from the researchers.

Gautret P, Lagier J-C, Parola P, Meddeb L, Mailhe M, Doudier B, et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int J Antimicrob Agents. 2020;56(1):105949.

Lai C-C, Shih T-P, Ko W-C, Tang H-J, Hsueh P-R. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and corona virus disease-2019 (COVID-19): the epidemic and the challenges. Int J Antimicrob Agents. 2020;55(3):105924.

Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506.

Wang D, Hu B, Hu C, Zhu F, Liu X, Zhang J, et al. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. JAMA. 2020;323(11):1061–9.

Gautier JF, Ravussin Y. A new symptom of COVID-19: loss of taste and smell. Obesity. 2020;28(5):848.
Guan W-J, Ni Z-Y, Hu Y, Liang W-H, Ou C-Q, He J-X, et al. Clinical characteristics of 2019 novel coronavirus infection in China. N Engl J Med. 2020;382(18):1708–20.

Qiu R, Wei X, Zhao M, Zhong C, Zhao C, Hu J, et al. Outcome reporting from protocols of clinical trials of coronavirus disease 2019 (COVID-19): a review. medRxiv. 2020. https://doi.org/10.1101/2020.03.04.20031401.

Cui H-T, Li Y-T, Guo L-Y, Liu X-G, Wang L-S, Jia J-W, et al. Traditional Chinese medicine for treatment of coronavirus disease 2019: a review. Tradit Med Res. 2020;5(2):65–73.

Luo H, Tang Q-l, Shang Y-x, Liang S-b, Yang M, Robinson N, et al. Can Chinese medicine be used for prevention of corona virus disease 2019 (COVID-19)? A review of historical classics, research evidence and current prevention programs. Chin J Integr Med. 2020;26(4):243–50.

Avicenna H. The canon of medicine (Al-Qanon fi al-Tibb). Beirut: Dar Ihyaa al-Turaath al-Arabi; 2005.

al-Nafis I. Al-Shamel fi Sana'at al-Tibbiyah (Arabic). Tehran: Institute of Medical History, Islamic Medicine and Complementary Medicine; 2008.

Fisman EZ, Shapira I, Motro M, Pines A, Tenenbaum A. The combined cough frequency/severity scoring: a new approach to cough evaluation in clinical settings. J Med. 2001;32(3–4):181–7.

Hasheminasab FS, Hashemi SM, Dehghan A, Sharififar F, Setayesh M, Sasanpour P, et al. Effects of a Plantago ovata-based herbal compound in prevention and treatment of oral mucositis in patients with breast cancer receiving chemotherapy: a double-blind, randomized, controlled crossover trial. J Integr Med. 2020;18(3):214–21.

Whitehead AL, Julious SA, Cooper CL, Campbell MJ. Estimating the sample size for a pilot randomised trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome variable. Stat Methods Med Res. 2016;25(3):1057–73.

Moradi M-T, Karimi A, Rafieian-Kopaei M, Kheiri S, Saedi-Marghmaleki M.
The inhibitory effects of myrtle (Myrtus communis) extract on herpes simplex virus-1 replication in baby hamster kidney cells. J Shahrekord Univ Med Sci. 2011;12(4):54–61.

Maxia A, Frau MA, Falconieri D, Karchuli MS, Kasture S. Essential oil of Myrtus communis inhibits inflammation in rats by reducing serum IL-6 and TNF-α. Nat Prod Commun. 2011;6(10):1934578X1100601034.

Jabri M-A, Rtibi K, Sakly M, Marzouki L, Sebai H. Role of gastrointestinal motility inhibition and antioxidant properties of myrtle berries (Myrtus communis L.) juice in diarrhea treatment. Biomed Pharmacother. 2016;84:1937–44.

Zohalinezhad ME, Hosseini-Asl MK, Akrami R, Nimrouzi M, Salehi A, Zarshenas MM. Myrtus communis L. freeze-dried aqueous extract versus omeprazol in gastrointestinal reflux disease: a double-blind randomized controlled clinical trial. J Evid Based Complement Altern Med. 2016;21(1):23–9.

The researchers received no financial support for this project.

Author affiliations: Maryam Azimi: Gastroenterology and Hepatology Research Center, Kerman University of Medical Sciences, Kerman, Iran; Department of Traditional Medicine, School of Persian Medicine, Kerman University of Medical Sciences, Kerman, Iran. Fatemeh Sadat Hasheminasab: Pharmacology Research Center, Zahedan University of Medical Sciences (ZaUMS), Zahedan, Iran.

F.S.H. is the Chief Investigator; she conceived the study, led the development of the proposal and protocol, and wrote the draft of the manuscript. M.A. contributed to the study design, development of the proposal, and data collection. Both authors read and approved the final manuscript.

Correspondence to Fatemeh Sadat Hasheminasab.

The protocol of this study was approved by the Local Medical Ethics Committee of Kerman University of Medical Sciences under the approval code IR.KMU.REC.1399.015. Written informed consent to participate will be obtained from all participants.

Azimi, M., Hasheminasab, F.S.
Evaluating the efficacy and safety of the myrtle (Myrtus communis) in treatment and prognosis of patients suspected to novel coronavirus disease (COVID-19): study protocol for a randomized controlled trial. Trials 21, 978 (2020). https://doi.org/10.1186/s13063-020-04915-w

Keywords: Persian medicine; M. communis; Coronavirus; COVID-19
A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex, and old-fashioned reductionist science has a really hard time with complexity. Big companies spend hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo, and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia, currently has 42 separate synergistic nootropic ingredients, from alpha-GPC to Bacopa monnieri and L-theanine). That kind of complex, multi-pathway input requires a different methodology to understand well, one that goes beyond simply what's put in capsules.

Unfortunately, cognitive enhancement falls between the stools of research funding, which makes it unlikely that such research programs will be carried out. Disease-oriented funders will, by definition, not support research on normal healthy individuals. The topic intersects with drug-abuse research only in the assessment of risk, leaving out the study of potential benefits, as well as the comparative benefits of other enhancement methods. As a fundamentally applied research question, it will not qualify for support by funders of basic science. The pharmaceutical industry would be expected to support such research only if cognitive enhancement were to be considered a legitimate indication by the FDA, which we hope would happen only after considerably more research has illuminated its risks, benefits, and societal impact. Even then, industry would have little incentive to delve into all of the issues raised here, including the comparison of drug effects to nonpharmaceutical means of enhancing cognition.

If you're concerned about using either supplement, speak to your doctor. Others replace these supplements with something like phenylpiracetam or pramiracetam. Both of these racetams provide increased energy levels with fewer side effects.
If you do plan on taking Modafinil or Adrafinil, it's best to use them on occasion or cycle your doses.

These are quite abstract concepts, though. There is a large gap, a grey area, between these concepts and our knowledge of how the brain functions physiologically, and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."

As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo.

Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter.
I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So the formula is the NPV of the difference between taking and not taking, times the quality of information, times the expectation: $\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4$, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.

A synthetic derivative of piracetam, aniracetam is believed to be the second most widely used nootropic in the racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, it is also claimed in many anecdotal reports to increase creativity. However, studies show no effect on the cognitive functioning of healthy adult mice.

This doesn't fit the U-curve so well: while 60mg is substantially negative, as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price: we ignore the influence of magnesium, which we know influences the data a great deal. And the higher doses were added towards the end, so they may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all, so we can classify each NA as a placebo day and rerun the desired analysis:
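The original analysis code is not reproduced here; as a hedged sketch of that imputation step in pandas (the column names and values below are invented for illustration, not the experiment's actual data):

```python
import numpy as np
import pandas as pd

# Invented example data: scores with NA on days when no pill was logged.
df = pd.DataFrame({
    "magnesium": [1, 0, np.nan, 1, np.nan, 0],
    "score":     [72, 65, 70, 75, 68, 66],
})

# Treat each unlogged (NA) day as a placebo day (magnesium = 0)...
df["magnesium"] = df["magnesium"].fillna(0)

# ...then rerun the desired comparison on the completed data.
group_means = df.groupby("magnesium")["score"].mean()
print(group_means)
```

The imputation is a single `fillna(0)` because "no record" is being treated as equivalent to "no magnesium"; any downstream analysis (group means, t-test, regression) then runs on a complete series.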
They did not observe an effect of the drug on the overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.

But there would also be significant downsides. Amphetamines are structurally similar to crystal meth, a potent, highly addictive recreational drug which has ruined countless lives and can be fatal. Both Adderall and Ritalin are known to be addictive, and there are already numerous reports of workers who struggled to give them up. There are also side effects, such as nervousness, anxiety, insomnia, stomach pains, and even hair loss, among others. And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing, since post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.

Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.

As with any thesis, there are exceptions to this general practice.
For example, theanine for dogs, sold under the brand Anxitane, goes for almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.

The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.

"As a brain injury survivor that still deals with extreme light sensitivity, eye issues and other brain related struggles I have found a great diet is a key to brain health! Cavin's book is a much needed guide to eating for brain health. While you can fill shelves with books that teach you good nutrition, Cavin's book teaches you how to help your brain with what you eat. This is a much needed addition to the nutrition section! If you are looking to get the optimum performance out of your brain, get this book now! You won't regret it."

Whether prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence about whether it would help, the value of information is high.

Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union.
This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function, were thought to allow the body to better adapt to stress.

It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115.

1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.

Adderall is an amphetamine, used as a drug to help focus and concentration in people with ADHD, and to promote wakefulness for sufferers of narcolepsy. Adderall increases levels of dopamine and norepinephrine in the brain, along with a few other chemicals and neurotransmitters. It's used off-label as a study drug because, as mentioned, it is believed to increase focus and concentration, improve cognition and help users stay awake. Please note: side effects are possible.

Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University.
This article represents his own personal views, which are not necessarily those of the Department of the Army.

Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics, such as Ginkgo biloba and ginseng, generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. One benefit to using natural nootropics is that they boost brain function and support brain health by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, Panax ginseng, and more.

Some cognitive enhancers, such as donepezil and galantamine, are prescribed for elderly patients with impaired reasoning and memory deficits caused by various forms of dementia, including Alzheimer disease, Parkinson disease with dementia, dementia with Lewy bodies, and vascular dementia. Children and young adults with attention-deficit/hyperactivity disorder (ADHD) are often treated with the cognitive enhancers Ritalin (methylphenidate) or Adderall (mixed amphetamine salts). Persons diagnosed with narcolepsy find relief from sudden attacks of sleep through wake-promoting agents such as Provigil (modafinil). Generally speaking, cognitive enhancers improve working and episodic (event-specific) memory, attention, vigilance, and overall wakefulness, but act through different brain systems and neurotransmitters to exert their enhancing effects.

The Defense Department reports rely on data collected by the private real estate firms that operate base housing in partnership with military branches. The companies' compensation is partly determined by the results of resident satisfaction surveys. I had to re-read this sentence like 5 times to make sure I understood it correctly. I just can't even.
Seriously, in what universe did anyone think that this would be a good idea?

Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds, on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated the ALA component of human-edible flaxseed to be around 20%. So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil.

…Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.

I stayed up late writing some poems and about how [email protected] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃)

That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam.
Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their nonverbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research.

Before taking any supplement or chemical, people want to know if there will be long-term effects or consequences. When Dr. Corneliu Giurgea first coined the term "nootropics" in 1972, he also outlined the characteristics that define nootropics. Besides the ability to benefit memory and support the cognitive processes, Dr. Giurgea believed that nootropics should be safe and non-toxic.

Piracetam boosts acetylcholine function, a neurotransmitter responsible for memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether.

Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier.
Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability, that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.

Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says.

Nootropics (/noʊ.əˈtrɒpɪks/ noh-ə-TROP-iks) (colloquial: smart drugs and cognitive enhancers) are drugs, supplements, and other substances that may improve cognitive function, particularly executive functions, memory, creativity, or motivation, in healthy individuals.[1] While many substances are purported to improve cognition, research is at a preliminary stage as of 2018, and the effects of the majority of these agents are not fully determined.
Publication Info: Journal of the Korean Institute of Electrical and Electronic Material Engineers (한국전기전자재료학회논문지), The Korean Institute of Electrical and Electronic Material Engineers (한국전기전자재료학회). The J. KIEEME encompasses all types of semiconductor, electronic ceramics, insulation materials, thin films and sensors, display and optical devices, superconductor and magnetic materials, high voltage and discharge engineering, nano and oxide electronics, energy materials, technology education, and light source and application technology. http://www.jkieeme.org Volume 16, Issue 12S

PEDOT:PSS and Graphene Oxide Composite Hydrogen Gas Sensor (Maeng, Sunglyul). https://doi.org/10.4313/JKEM.2018.31.2.69
The power law is very important in gas sensing for the determination of gas concentration. In this study, the resistance of a gas sensor based on poly(3,4-ethylenedioxythiophene) polystyrene sulfonate + graphene oxide composite was found to exhibit a power-law dependence on hydrogen concentration at $150^{\circ}C$. Experiments were carried out in the gas concentration range of 30~180 ppm, at which the sensor showed a sensitivity of 6~9% with a response and recovery time of 30 s.

Effects of Doping Concentration of Polycrystalline Silicon Gate Layer on Reliability Characteristics in MOSFET's (Park, Keun-Hyung)
In this report, the results of a systematic study on the effects of polycrystalline silicon gate depletion on the reliability characteristics of metal-oxide semiconductor field-effect transistor (MOSFET) devices were discussed. The devices were fabricated using standard complementary metal-oxide semiconductor (CMOS) processes, wherein phosphorus ion implantation with implant doses varying from $10^{13}$ to $5{\times}10^{15}cm^{-2}$ was performed to dope the polycrystalline silicon gate layer. For implant doses of $10^{14}/cm^2$ or less, the threshold voltage was increased with the formation of a depletion layer in the polycrystalline silicon gate layer.
The gate-depletion effect was more pronounced for shorter channel lengths, like the narrow-width effect, which indicated that the gate-depletion effect could be used to solve the short-channel effect. In addition, the hot-carrier effects were significantly reduced for implant doses of $10^{14}/cm^2$ or less, which was attributed to the decreased gate current under the gate-depletion effects.

Study on Surface Plasmon Electrode Using Metal Nano-Structure for Maximizing Sterilization of Dielectric Discharge (Ki, Hyun-Chul; Oh, Byeong-Yun)
In this study, we investigated plasmon effects to maximize the sterilization of dielectric discharge. We predicted the effect using the finite difference time domain (FDTD) method as a function of electrode shape, size, and period. The structure of the electrode was designed with a thickness of 100 nm of silver nanoparticles on a glass substrate, and was varied according to the shape, size, and period of the electrode hole. Based on the results, it was confirmed that the effect of plasmons was independent of the shape of the electrode hole. It was thus confirmed that the plasmon effect depended only on the size and period of the holes. Further, the plasmon effect was affected by the size rather than period of the holes. Because the absorption of light by the metal varied according to the size of the hole, the plasmon effect generated by the absorption of light also varied. The best results were obtained when the radius and period of the electrode holes were $0.1{\mu}m$ and $0.4{\mu}m$, respectively.

Si Based Photoelectric Device with ITO/AZO Double Layer (Jang, Hee-Joon; Yoon, Han-Joon; Lee, Gyeong-Nam; Kim, Joondong)
In this study, functional transparent conducting layers were investigated for Si-based photoelectric applications. Double transparent conductive oxide (TCO) films were deposited on a Si substrate in the sequence of indium tin oxide (ITO) followed by aluminum-doped zinc oxide (AZO).
First, we observed that the conductivity and transparency of AZO dominate the overall performance of the double TCO layers. Secondly, the double-layered TCO film (consisting of AZO/ITO) deposited by sputtering was compared to an AZO-only film in terms of their optical and electrical properties. We prepared three different AZO films: ITO:3min/AZO:10min, ITO:5min/AZO:7min, and ITO:7min/AZO:4min. The results show that the optical properties (transmittance, absorbance, and reflection) can be controlled by the film composition. This may provide a significant pathway for the manipulation of the optical and electrical properties of photoelectric devices.

Electric Properties of High-Tc Ceramic Superconductor for Breaker (Lee, Sang-Heon)
The aim of this study was to develop a process for creating bulk single-crystal YBaCuO superconductors in a high magnetic field. To support the bulk unidirectional growth of $YBa_2Cu_3O_{7-y}$, $SmBa_2Cu_3O_{7-y}$ seeds were planted inside YBaCuO composites and samples were produced by melting, enabling the growth of two YBaCuO superconductors. Due to the magnetism generated inside the superconductor of the upper sample, the magnetization inside the superconducting single crystals was evenly distributed, the sharpness of the induced magnetic force was improved, and the superconducting magnetization was significantly improved. This approach is widely applicable for the production of superconducting wires and current leads used for DC power breakers.

Phase Transition and Improvement of Output Efficiency of the PZT/PVDF Piezoelectric Device by Adding Carbon Nanotubes (Lim, Youngtaek; Lee, Sunwoo)
Lead zirconate titanate/poly-vinylidene fluoride (PZT/PVDF) piezoelectric devices were fabricated by incorporating carbon nanotubes (CNTs), for use as flexible energy harvesting devices. CNTs were added to maximize the formation of the ${\beta}$ phase of PVDF to enhance the piezoelectricity of the devices.
The phase transition of PVDF induced by the addition of CNTs was confirmed by analyzing the X-ray diffraction patterns, scanning electron microscopy images, and atomic force microscopy images. The enhanced output efficiency of the PZT/PVDF piezoelectric devices was confirmed by measuring the output current and voltage of the fabricated devices. The maximum output current and voltage of the PZT/PVDF piezoelectric devices were 200 nA and 350 mV, respectively, upon incorporation of 0.06 wt% CNTs.

Behavior of the Temperature Coefficient of Resistance at Parallelly Connected Resistors (Lee, Sunwoo)
In this paper, we discuss the fabrication of metal alloy resistors. We connected them in parallel to estimate their resistance and temperature coefficient of resistance (TCR). The fabricated resistors have different resistances, of 5 and $10{\Omega}$, and different TCRs, of 50 and $200ppm/^{\circ}C$. Each resistor was confirmed to have the correct atomic composition through the use of energy dispersive X-ray spectroscopy (EDX). The resistors' electrical properties were confirmed by measuring resistance and TCR. The resistance and TCR of the resistors connected in parallel were estimated through the increase in resistance due to the increase in temperature, and were compared with the measured values. We are confident that this TCR estimation technique, which uses the increase in resistance due to temperature, will be very useful in designing and fabricating resistors with low and stable TCR.

Electrical Properties of Supercapacitor Based on Dispersion Controlled Graphene Oxide According to the Change of Solution State by Washing Process (Sul, Ji-Hwan; You, In-kyu; Kang, Seok Hun; Kim, Bit-Na; Kim, In Gyoo). https://doi.org/10.4313/JKEM.2018.31.2.102
Recently, there has been an increasing interest in the use of graphene as electrode materials for supercapacitors. In this regard, graphene oxide (GO) films were prepared using GO slurry obtained by dispersing GO powder in deionized (DI) water.
The degree of dispersion of GO powder in DI water depends on the concentration of GO slurry, pH, impurity content, GO particle size, types of functional groups contained in GO, and manufacturing method of GO powder. In this study, the dispersivity of the GO powder was improved by adjusting the pH using only DI water (without additives), and a uniform GO film was obtained. The GO film was reduced by exposure to xenon intense pulsed light for a few milliseconds, and the reduced GO film was used as electrodes of a supercapacitor. The supercapacitor was characterized using cyclic voltammetry (CV), charge-discharge cycling, and electrochemical impedance spectroscopy measurements, and the specific capacitance of the supercapacitor was found to be ~140 F/g from the CV data.

Electrical Properties of Temperature Coefficient of Resistance and Heat Radiation Structure Design for Shunt Fixed Resistor (Kim, Eun Min; Kim, Hyeon Chang; Lee, Sunwoo)
In this study, we designed the temperature coefficient of resistance (TCR) and heat radiation properties of shunt fixed resistors by adjusting the atomic composition of a metal alloy resistor, and fabricated a resistor that satisfied the designed properties. Resistors with similar atomic composition of copper and nickel showed low TCR and excellent shunt fixed resistor properties such as short-time overload, rated load, humidity load, and high temperature load. Finally, we expect that improved sensor accuracy will be obtained in current-distribution-type shunt fixed resistors for IoT sensors by designing the atomic composition of the metal alloy resistor proposed in this work.

Electrochemical Properties of EDLC Electrodes with Diverse Graphene Flake Sizes (Yu, Hye-Ryeon)
Electric double layer capacitors (EDLCs) are promising candidates for energy storage devices in electronic applications. An EDLC yields high power density but has low specific capacitance.
Carbon material is used in EDLCs owing to its large specific surface area, large pore volume, and good mechanical stability. Consequently, the use of carbon materials for EDLC electrodes has attracted considerable research interest. In this paper, in order to evaluate the electrochemical performance, graphene is used as an EDLC electrode with flake sizes of 3, 12, and 60 nm. The surface characteristics and electrochemical properties of graphene were investigated using SEM, BET, and cyclic voltammetry. The specific capacitance of the graphene-based EDLC was measured in a 1 M $TEABF_4/ACN$ electrolyte at scan rates of 2, 10, and 50 mV/s. The 3 nm graphene electrode had the highest specific capacitance (68.9 F/g) compared to the other samples. This result was attributed to graphene's large surface area and meso-pore volume. Therefore, large surface area and meso-pore volume effectively enhance the specific capacitance of EDLCs.

Optimization of Bismuth-Based Inorganic Thin Films for Eco-Friend, Pb-Free Perovskite Solar Cells (Seo, Ye Jin; Kang, Dong-Won)
Perovskite solar cells have received increasing attention in recent years because of their outstanding power conversion efficiency (exceeding 22%). However, they typically contain toxic Pb, which is a limiting factor for industrialization. We focused on preparing Pb-free perovskite films of Ag-Bi-I trivalent compounds. Perovskite thin films with improved optical properties were obtained by applying an anti-solvent (toluene) washing technique during the spin coating of perovskites. In addition, the surface condition of the perovskite film was optimized using a multi-step thermal annealing treatment. Using the optimized process parameters, $AgBi_2I_7$ perovskite films with good absorption and improved planar surface topography (root mean square roughness decreased from 80 to 26 nm) were obtained.
This study is expected to open up new possibilities for the development of high performance $AgBi_2I_7$ perovskite solar cells for applications in Pb-free energy conversion devices.

A Comparison Study Between International Standard and Statistical Analysis on LED Package Life (Park, Se Il; Kim, Gun So; Kim, Chung Hyeok)
In an attempt to estimate the life projection of LED packages, IESNA published a paper regarding an LED package measurement test method in 2008, and a life projection technical document in 2011, to be used for LED life estimation. IESNA's publications regarding LED package measurement methods were functional, but they were not internationally standardized before 2017. In order to develop a standardized method, the International Standard chose to use the LM-80 as a measurement method for LED life projection in their publication in 2017. Many projection methods have been discussed by the IEC Technical Committee 34 working group, including the method using an exponential function, which reflects lumen degradation characteristics well. This study is designed to explore alternative LED package life estimation methods using an exponential function with statistical analysis, other than the one suggested by the International Standard.
Experimental optical phase measurement approaching the exact Heisenberg limit
Shakib Daryanoosh, Sergei Slussarenko, Dominic W. Berry, Howard M. Wiseman & Geoff J. Pryde
Nature Communications, volume 9, Article number: 4606 (2018). Subjects: Quantum metrology; Quantum optics

The use of quantum resources can provide measurement precision beyond the shot-noise limit (SNL). The task of ab initio optical phase measurement—the estimation of a completely unknown phase—has been experimentally demonstrated with precision beyond the SNL, and even scaling like the ultimate bound, the Heisenberg limit (HL), but with an overhead factor. However, existing approaches have not been able—even in principle—to achieve the best possible precision, saturating the HL exactly.
Here we demonstrate a scheme to achieve true HL phase measurement, using a combination of three techniques: entanglement, multiple samplings of the phase shift, and adaptive measurement. Our experimental demonstration of the scheme uses two photonic qubits, one double passed, so that, for a successful coincidence detection, the number of photon-passes is N = 3. We achieve a precision that is within 4% of the HL. This scheme can be extended to higher N and other physical systems.

Precise measurement is at the heart of science and technology [1]. An important fundamental concern is how to achieve the best precision in measuring a physical quantity, relative to the resources of the probe system. As physical resources are fundamentally quantised, it is quantum physics that determines the ultimate precision that can be achieved. Correlated quantum resources [2,3,4] such as entangled states can provide an enhancement over independent use of quantum systems in measurement. Quantum-enhanced optical phase estimation promises improvements in all measurement tasks for which interferometry is presently used [5,6].

Such optical quantum metrology can be divided into two distinct tasks. In phase sensing, the goal is to determine small deviations in a phase about an already well-known value—a very specific situation. The use of maximally path-entangled NOON states [7,8] can, in principle, provide optimal sensitivity for this task [9]. The more challenging task is phase measurement, sometimes called ab initio phase measurement [10], in which the aim is to determine an unknown phase ϕ with no prior information about its value. In this case, the use of multiple passes of the optical phase shift and adaptive quantum measurement [11], or entanglement and adaptive quantum measurement [12], has been shown to be capable of surpassing the shot-noise limit (SNL), \(V_{\mathrm{SNL}} = 1/N\) (for large N).
The SNL represents the minimum variance achievable with a definite number N of independent samples of the phase shift by a photon. By making correlated samples of the phase shift, these schemes [11,12,13] can achieve an asymptotic variance \(V = (B\pi/N)^2\). This is proportional to, but with a constant overhead B > 1 over, the ultimate limit (the Heisenberg limit, HL) of \((\pi/N)^2\) for the asymptotic ab initio task. To be precise, in terms of Holevo's variance measure [14,15], the exact HL for any value of N is

$$V^{\mathrm{HL}} = \tan^2\left[\pi/(N+2)\right]. \quad (1)$$

Phase measurement schemes are not limited to optics: equivalent techniques have also used phase shifts of superposition states of single-NV-centre measurements induced by magnetic fields [16,17], for example.

Here we demonstrate a technique to address this outstanding, fundamental question of quantum metrology: how to measure phase at the exact HL? We show a concrete way to implement the conceptual scheme previously proposed in theory [15], and realise it experimentally. As in previous photonic ab initio phase estimation experiments, we characterise the quality of our implementation with respect to detected resources—it relies on probabilistic state preparation and measurement schemes, and takes into account only the successful coincidence detections in the calculation of precision. We thus prove the principle of the scheme, which in future can be extended to remove postselective elements.

We begin by introducing the basic tools and techniques used in this work. The basic concept of optical phase measurement with photons is shown in Fig. 1a. The phase to be measured is inserted in one path of an interferometer; the other path is the reference arm. In the language of quantum information, a photon incident on the first beam splitter (BS) is represented by the logical state |0〉. The action of the BS is modelled by a Hadamard gate \({\cal H}|0\rangle = (|0\rangle + |1\rangle)/\sqrt 2\).
The unknown phase shift applied on the path representing |1〉 is implemented by the unitary gate U(ϕ) = exp(iϕ|1〉〈1|). The last BS prior to the detection stage maps the logical Z-basis onto the X-basis.

Fig. 1: Optical phase measurement concept. a Basic interferometric setup for estimation of an unknown phase ϕ. b Conceptual scheme of an advanced interferometer that includes multiple (p) passes of the phase shift ϕ and a controllable phase θ in the reference arm. c Quantum circuit representation of the interferometer shown in b. The interferometer is represented by a Hadamard gate \({\cal H}\) and a projective measurement in the X-basis, and the application of reference and unknown phases (p passes) is represented by unitary operators \({\cal R}(\theta )\) and \(U^p\), respectively. d Quantum circuit for Heisenberg-limited interferometric phase estimation with N = 3 resources. The protocol is extensible to higher N, in principle [15]. e Quantum circuit for the preparation of the optimal state \(|\psi _{{\mathrm{opt}}}\rangle\), Eq. (2), using a CNOT gate with control and target qubits prepared in \(|\psi _{\mathrm{C}}\rangle\) and \(|\psi _{\mathrm{T}}\rangle\), respectively.

A more general protocol may include more sophisticated techniques. The relevant constituents are: the quantum state of the light in the interferometer paths; the possibility of multiple coherent samplings of the phase shift by some photons; and the detection strategy. For example, Fig. 1b generalises the basic single photon interferometer to include p ≥ 1 applications of U(ϕ) and a classically controllable phase, described by \({\cal R}(\theta ) = {\mathrm{exp}}(i\theta |0\rangle \langle 0|)\), on the reference path (representing |0〉). We can also depict this interferometer following the quantum circuit convention, as in Fig. 1c.
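The generalised interferometer of Fig. 1b/c is small enough to simulate directly. The sketch below (a minimal illustration assuming Python with numpy; the function and variable names are ours, not from the paper) builds the circuit H, then p passes of U(ϕ), then R(θ), then H, and recovers the expected fringe \(P(0) = \cos^2[(p\phi - \theta)/2]\):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50:50 beam splitter as a Hadamard gate

def interferometer_probs(phi, theta, p=1):
    """Output probabilities for the circuit of Fig. 1c:
    H, then p passes of U(phi), then R(theta), then the final H (second BS)."""
    U = np.diag([1, np.exp(1j * phi)])          # unknown phase on the |1> path
    R = np.diag([np.exp(1j * theta), 1])        # controllable phase on the |0> path
    psi = H @ R @ np.linalg.matrix_power(U, p) @ H @ np.array([1, 0])
    return np.abs(psi) ** 2

# Interference fringe: P(0) = cos^2((p*phi - theta)/2)
p0, p1 = interferometer_probs(phi=0.7, theta=0.2, p=2)
assert np.isclose(p0, np.cos((2 * 0.7 - 0.2) / 2) ** 2)
```

Because U(ϕ) and R(θ) are both diagonal, only the difference pϕ − θ is observable, which is exactly why the classically controlled θ can steer the measurement basis in the adaptive protocol.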
For ab initio phase measurement with N photons and no multipassing (p = 1), it is known theoretically that the HL can be achieved by preparing a path-entangled state [10,18] and implementing an entangling detection scheme [19]. The problem is that both of these steps are very difficult to do. An alternative way [15] to achieve the HL uses entanglement across multiple spatio-temporal modes, and multiple applications p of the phase gate, combined with the inverse quantum Fourier transform (IQFT) for the measurement. While the IQFT is also an entangling operation, it has been known for some time [20] that, in this phase estimation algorithm (PEA) [21], it can be replaced by an adaptive measurement scheme [1], where individual photons are measured one by one, with the reference phase adjusted after each measurement. This replacement requires the photons in the entangled state to be spread out in time, but suffers no penalty in measurement precision. Here, we show the practicality of combining entanglement, multipassing and adaptive measurement to achieve the HL.

Our Heisenberg-limited interferometric phase estimation algorithm (HPEA) [15] is illustrated in Fig. 1d. This protocol is based on the standard PEA, such that using K + 1 qubits yields an estimate \(\phi_{\mathrm{est}}\) of the true phase ϕ with K + 1 bits of precision [21]. It involves application of the phase gate \(N = 2^{K+1} - 1\) times, with the number of applications being \(p = 2^K, 2^{K-1}, \ldots, 2^0\) on each successive qubit (photon). Our particular demonstration is an instance of a (K + 1 = 2)-photon superposition state [15] that may be used to perform a protocol with \(N = 2^{K+1} - 1 = 3\) resources, achieving a variance for ab initio phase estimation of exactly \(V^{\mathrm{HL}}\) (Eq. (1)).
The optimal entangled state for the HPEA is [15]

$$\left| \psi_{\mathrm{opt}} \right\rangle = c_0\left| \Phi^+ \right\rangle + c_1\left| \Psi^+ \right\rangle, \quad (2)$$

$$c_j = \frac{\sin\left[(j+1)\pi/5\right]}{\sqrt{\sum_{k=0}^{1} \sin^2\left[(k+1)\pi/5\right]}}, \quad (3)$$

where \(|{\mathrm{\Phi }}^ + \rangle = \left( {|00\rangle + |11\rangle } \right)/\sqrt 2\) and \(|{\mathrm{\Psi }}^ + \rangle = \left( {|01\rangle + |10\rangle } \right)/\sqrt 2\) are Bell states. The optimal adaptive measurement [20] is implemented by measuring the qubits sequentially in the X-basis, and, conditioned on the results, adjusting the controllable phase θ shifts on subsequent qubits, as shown in Fig. 1d.

Experimental scheme

In our experiment (Fig. 2), we used orthogonal right- and left-circular polarisations instead of paths to form the two arms of the interferometer. We used a non-deterministic CNOT gate, acting on photon polarisation qubits (horizontal |h〉 ≡ |0〉, vertical |v〉 ≡ |1〉), to generate the state in Eq. (2). As shown in Fig. 1e, the control qubit is prepared in the diagonal polarisation state \(|\psi _{\mathrm{C}}\rangle = (|{\mathrm{h}}\rangle + |{\mathrm{v}}\rangle )/\sqrt 2\), and the target qubit in the linear polarisation \(|\psi_{\mathrm{T}}\rangle = c_0|{\mathrm{h}}\rangle + c_1|{\mathrm{v}}\rangle\), so that the output state after the CNOT is the optimal state: \(|\psi _{{\mathrm{opt}}}\rangle = \hat U_{{\mathrm{CNOT}}}(|\psi _{\mathrm{C}}\rangle \otimes |\psi _{\mathrm{T}}\rangle )\). Figure 3 shows the density matrices of the experimentally generated state \(\rho_{\mathrm{exp}}\) and the ideal state \(\rho_{\mathrm{opt}} \equiv |\psi_{\mathrm{opt}}\rangle\langle\psi_{\mathrm{opt}}|\).

Fig. 2: Schematic of the experimental setup. Single photons at 820 nm are generated via a type-I spontaneous parametric downconversion (SPDC) process (blue background) and collected using single-mode fibres and passed into the entangling gate (green background) in order to realise the state \(|\psi _{{\mathrm{exp}}}\rangle\).
Input polarisation was adjusted with fibre polarisation controllers (FPC). The non-deterministic universal CNOT gate, composed of 3 partially polarising beam splitters (PPBS) and 2 half-waveplates (HWP), performs the state preparation by post-selecting coincidence events between the control and target output ports with success probability 1/9. The area with grey background corresponds to the implementation of the phase estimation. Photons in mode C pass twice through the HWP (acting as a phase shift element), in order to realise the \(U(\phi )^2\) operation. Photons in mode T experience the phase shift once (performing the \(U(\phi )\) operation). The effect of the feedforward operation, \({\cal R}(\theta )\), is simulated by dialling a HWP (depicted with a white rim), for a fixed time period, with 0 and π/8 corresponding to the OFF and ON settings of the control operation. Finally, photons are independently directed to a polarisation analysis unit consisting of a quarter-wave plate (QWP), HWP and a polarising beam splitter (PBS) followed by a 2 nm spectral filter and a single photon counting module (SPCM). See Methods section for further details on the experimental setup operation.

Density matrices of the experimental state and \({\rho }_{{\mathrm{opt}}}\). a Real part of the state matrix \(\rho _{{\mathrm{exp}}}\) reconstructed with polarisation state tomography. The fidelity of the state with the optimal state \(|\psi _{{\mathrm{opt}}}\rangle\), Eq. (2), is \(\langle \psi _{{\mathrm{opt}}}|\rho _{{\mathrm{exp}}}{\mathrm{|}}\psi _{{\mathrm{opt}}}\rangle = 0.980 \pm 0.003\), and the purity is \({\mathrm{Tr}}\left[ {\rho _{{\mathrm{exp}}}^2} \right] = 0.965 \pm 0.006\). The density matrix was calculated from ~50,000 twofold coincidence events. Uncertainties in fidelity and purity represent 95% confidence intervals calculated with Monte-Carlo simulation22. Imaginary components (not shown) are ≤0.013. b Real part of the ideal optimal state \(\rho _{{\mathrm{opt}}}\).
Note that \({\mathrm{Im}}(\rho _{{\mathrm{opt}}}) = 0\).

The polarisation interferometer, highlighted by the grey background in Fig. 2, used a large half-wave plate (HWP) to implement the unknown phase shift between the arms. Mode C was passed twice through this unknown phase. Another HWP (shown in Fig. 2 with a white rim) was used as the reference phase shift θ on mode T, in order to implement the detection scheme. We implemented the feedforward step non-deterministically, using waveplates that were fixed for each run, combined with postselective sorting of the data based on the results from the detector labeled C. Although this approach would be inadequate for estimation from exactly one shot, it is an accurate way to characterise the performance of the scheme over many repetitions. Table 1 shows how the data were sorted and how phase values were allocated for each shot, according to the detector firing patterns.

Table 1 The detection outcome patterns

Experimental phase estimation

To characterise the performance of our HPEA, we first calculate the conditional Holevo variance \(V_{\mathrm{H}}^\phi\) in the estimates for each applied phase ϕ (see Methods section for details on data analysis). Here \(V_{\mathrm{H}}^\phi = \left| {\left\langle {\exp [i(\phi - \phi _{{\mathrm{est}}})]} \right\rangle _{\phi _{{\mathrm{est}}}}} \right|^{ - 2} - 1\) for a given ϕ, where \(\left\langle \ldots \right\rangle _{\phi _{{\mathrm{est}}}}\) indicates averaging over the values of ϕest resulting from the data. Figure 4 shows \(V_{\mathrm{H}}^\phi\) for the entire range of ϕ ∈ [0, 2π). The protocol performs best when ϕ = 0, π/2, π, and 3π/2, corresponding to the cases where, to a good approximation, only one of the four possible detection outcomes occurs: dd, ad, da, and aa, respectively, as shown in Fig. 5. (Here, d(a) means the diagonal (antidiagonal) polarisation states, which are X-basis eigenstates.) It performs worst for intermediate phases.
This explains the oscillatory nature of the data in Fig. 4.

Heisenberg-limited phase estimation with N = 3 resources. Red dots represent the experimentally measured variance \(V_{\mathrm{H}}^\phi\) as a function of ϕ. The red horizontal line-segment cutting the left axis shows the optimal protocol Holevo variance \(V_{\mathrm{H}} = 0.5497 \pm 0.0007\), determined from these data, while the blue line-segment shows the HL. The blue and the green curves represent results of numerical simulations of the variance for the ideal optimal state \(\rho _{{\mathrm{opt}}}\) and the experimentally prepared state \(\rho _{{\mathrm{exp}}}\), respectively. Brown dots represent \(V_{\mathrm{H}}^\phi\) for the shot-noise-limited interferometry and the black dashed line represents the measured Holevo variance \(V_{\mathrm{H}} = 0.7870 \pm 0.0007\) for the same measurement. The grey solid line shows the SNL. Numerical values for the experimental results and corresponding limits are detailed in Table 2. Each data point was calculated from at least 50,000 twofold coincidence events and the error bars represent 95% confidence intervals calculated with the bootstrap method23.

Probability distribution of measurement outcomes. The probabilities of obtaining the four possible \(\{ {\mathrm{dd}},\,{\mathrm{ad}},\,{\mathrm{da}},\,{\mathrm{aa}}\}\) measurement outcomes, which correspond to the four possible \(\phi _{{\mathrm{est}}}\) values, for each phase value shown in Fig. 4. The variance \(V_{\mathrm{H}}^\phi\) is minimised for those ϕ values where one of the probabilities is maximum. Dots are experimental values and lines are numerical simulations that use the experimentally generated \(\rho _{{\mathrm{exp}}}\) as input. Error bars, representing the statistical uncertainty due to the finite number of measurement sets, are smaller than the dot size.

As we are interested in evaluating the precision of ab initio phase estimation, we cannot use any knowledge of ϕ.
Thus we erase any initial phase information by calculating the unconditional Holevo variance \(V_{\mathrm{H}} = \left| {\left\langle {\left\langle {\exp [i(\phi - \phi _{{\mathrm{est}}})]} \right\rangle _{\phi _{{\mathrm{est}}}}} \right\rangle _\phi } \right|^{ - 2} - 1\), which averages over ϕ. We find VH = 0.5497 ± 0.0007, whereas the Heisenberg limit for N = 3 resources is VHL ≈ 0.527824. As can be seen from the simulation (described in Supplementary Note 1) results in Fig. 4, this 4% discrepancy between the experimental result and the theoretical bound can be attributed to the non-unit fidelity of the prepared entangled state with respect to ρopt, highlighting the strong correlation between the protocol performance and the quality of the prepared state25. The small phase offset between the measured data and numerical simulations appears due to a residual phase shift from mirrors and other optical components. This constant phase offset does not influence the HPEA precision and can be compensated by a more sophisticated calibration of the setup, or in postprocessing, if required. For comparison, we perform standard quantum interferometry with three independent photons (see Supplementary Notes 2 and 3 for details). Calculating the Holevo variance for this measurement gives VH = 0.7870 ± 0.0007, which is close to the theoretical value of VSNL = 0.7778 for the SNL with N = 3 resources. We also compare our results with the theoretically optimal results for other schemes that use a subset of the three protocol components (Table 2). It can readily be observed that our scheme outperforms all those that use only two of the components. While the experimentally measured VH is numerically only a little lower than the next best theoretical bound (see Supplementary Note 4 for derivation of theory results), the difference amounts to a 10 standard deviation improvement.
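The conditional and unconditional Holevo variances used above can be evaluated directly from (ϕ, ϕest) samples; a minimal sketch (the function name is ours):

```python
import cmath

# Sketch: the Holevo variance, V_H = |<exp[i(phi - phi_est)]>|^-2 - 1,
# computed from a list of (phi, phi_est) pairs. Pooling pairs over all phi
# gives the unconditional variance; fixing phi gives the conditional one.
def holevo_variance(pairs):
    mean = sum(cmath.exp(1j * (phi - est)) for phi, est in pairs) / len(pairs)
    return abs(mean) ** -2 - 1

# Perfect estimates give zero variance; scatter in phi_est increases it:
# holevo_variance([(0.3, 0.3)] * 100) == 0.0
```

Unlike the ordinary variance, this measure is well defined for cyclic quantities, which is why the paper uses it for phases.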
We note that arbitrary entanglement can always do the job of multiple passes, by replacing each multipassed photon with a multiple-photon NOON state7, split across the two polarisations. Thus our results could, in principle, be reproduced by an entangled state of three photonic qubits, two in one spatio-temporal mode and the third in another, with both modes going through U(ϕ) once. We rule out such complicated schemes in our comparison by restricting to symmetric entanglement, in which each photon that passes through U(ϕ) a given number of times is prepared identically. (This is the case for the entanglement in our scheme since each of the two photons passes through U(ϕ) a different number of times.)

Table 2 The Holevo variance for different schemes

We have experimentally demonstrated how to use entanglement, adaptive measurement and multiple passes of the phase shift to perform ab initio phase measurement that outperforms any other scheme, in terms of sensitivity per resource. Our results are very close to the Heisenberg limit for N = 3, giving substantial experimental justification to the theoretical prediction that this method can achieve the ultimate measurement sensitivity. While in our analysis we count as resources only photons detected in twofold coincidences consistent with success of the probabilistic operations, advances in nascent photon source27 and detection28 technology, heralded state preparation schemes29,30 and deterministic adaptive measurement (with e.g. a Pockels cell) may soon allow saturation of the Heisenberg limit bound even when all the employed resources are taken into account. As quantum phase-sensitive states are susceptible to loss31, we expect that similar considerations would apply to the states in our scheme. For small N, as we use here, loss has less of an effect on the sensitivity.
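For context, the Heisenberg-limit value quoted above (VHL ≈ 0.5278 for N = 3) can be reproduced from the closed-form minimum Holevo variance tan²[π/(N + 2)] for the optimal N-resource input state; treat this formula as our paraphrase of the bound the text cites, not a quotation from it:

```python
import math

# Minimum Holevo variance for ab initio estimation with N resources,
# V_HL = tan^2[pi/(N + 2)] (closed form for the optimal input state).
def v_hl(N):
    return math.tan(math.pi / (N + 2)) ** 2

# v_hl(3) ≈ 0.5279, matching the V_HL ≈ 0.5278 quoted for N = 3
```

The bound decreases roughly as π²/N² for large N, which is the Heisenberg 1/N² scaling.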
Future extensions to the scheme will employ K + 1 > 2 photons, yielding \(N = 2^{K + 1} - 1\) resources and a correspondingly decreased phase uncertainty, as quantum logic circuits become increasingly capable of producing large entangled states with high fidelity. We note that while we have implemented this scheme optically, it can be applied to the estimation of any parameter that implements a phase shift between qubit states of some physical system.

Photon source

We used spontaneous parametric downconversion (SPDC) to produce pairs of polarisation-unentangled single photons. Ultrashort pulses from a mode-locked Ti:sapphire laser at 820 nm, with a repetition rate of 80 MHz, were upconverted to 410 nm wavelength through a second-harmonic-generation (SHG) process with a 2 mm lithium triborate (LBO) crystal. The SHG beam was collimated with a f = 75 mm lens and the IR pump was spatially filtered away with two dispersive prisms. The UV light was focused on a 0.5 mm BiBO crystal to generate photon pairs via type-I SPDC. The pump power was set to ~100 mW to ensure a low probability of double pair emission from the crystal. Using 2 nm narrow band spectral filters, and Excelitas single photon counting modules (SPCMs) with detection efficiency in the range (50–60)%, the overall coincidence efficiency was in the window of (11–13)% with single-detection count rates of ~40,000 counts/s.

Entangling gate

The single photons produced in the SPDC process were spatially filtered using antireflection (AR) coated single-mode fibres, and sent through the entangling gate to produce a state close to the optimal state ρopt. The logical circuit of the gate consisted of three PPBSs, with ηv = 1/3 and ηh = 1 for the transmissivity of vertically and horizontally polarised light respectively, to produce a non-deterministic controlled-Z operation32. Two of the PPBSs were oriented at 90° (around the photon propagation axis) such that ηv = 1 and ηh = 1/3, as illustrated in Fig. 2.
Two HWPs oriented at 22.5° with respect to the optical axis were used to perform the Hadamard operations required for the correct operation of the CNOT gate. The successful operation of the gate is heralded by the presence of one photon in each output mode of the gate, with an overall success probability of 1/9. At the core of this realisation is the non-classical interference that occurs between vertically polarised photons in modes C and T impinging on the central PPBS, Fig. 2. The maximum interference visibility that can be observed with ηv = 1/3 transmissivity is 0.8. We observed a Hong–Ou–Mandel interference33 visibility of 0.790 ± 0.005 (Supplementary Fig. 1), indicating excellent performance of the gate. In the measurement with three uncorrelated resources, input photon polarisations were set to |h〉, so the photons propagated through the gate without undergoing non-classical interference, but still suffering 2/3 loss in each mode. Photons in mode C were sent to a SPCM and acted as heralds for photons in mode T, which in turn were used to perform the shot-noise-limited interferometry.

Phase shifts and probabilistic adaptive measurements

To encode both unknown and classically controllable phases we proceeded as follows. The prepared state at the end of the entangling gate is ideally in the form of |ψopt〉 = c0|Φ+〉 + c1|Ψ+〉, Eq. (2), which is a superposition of the Bell states \(|\Phi ^ + \rangle = (|{\mathrm{hh}}\rangle + |{\mathrm{vv}}\rangle )/\sqrt 2\) and \(|{\mathrm{\Psi }}^ + \rangle = (|{\mathrm{hv}}\rangle + |{\mathrm{vh}}\rangle )/\sqrt 2\). Here h and v are the horizontal and vertical, respectively, polarisation states of a single photon, and encode the logical |0〉 and |1〉 states of a qubit. The linear polarisations were transformed to circular ones prior to the application of the phase shift.
This was done by a QWP set at π/4, yielding $$\left( {\begin{array}{*{20}{c}} {|{\mathrm{h}}\rangle } \\ {|{\mathrm{v}}\rangle } \end{array}} \right)\mathop{\longrightarrow}\limits^{{U_{\mathrm{Q}}^{(\pi /4)}}}\left( {\begin{array}{*{20}{c}} {e^{i\pi /4}|{\mathrm{r}}\rangle } \\ {e^{ - i\pi /4}|{\mathrm{l}}\rangle } \end{array}} \right).$$ Here \(U_{\mathrm{Q}}^{(\gamma )}\) is the unitary operation for a QWP with optic axis oriented at γ with respect to the horizontal axis. The phase shift of ϕ between the right (r) and left (l) circular polarisations could then be applied by setting the 2-inch HWP in Fig. 2 at −ϕ/4 + π/8, producing the transformation $$\left( {\begin{array}{*{20}{c}} {e^{i\pi /4}|{\mathrm{r}}\rangle } \\ {e^{ - i\pi /4}|{\mathrm{l}}\rangle } \end{array}} \right)\mathop{\longrightarrow}\limits^{{U_{\mathrm{H}}^{( - \phi /4 + \pi /8)}}}\left( {\begin{array}{*{20}{c}} {e^{i\phi }|{\mathrm{l}}\rangle } \\ {|{\mathrm{r}}\rangle } \end{array}} \right),$$ where we have ignored the global phase factor, and \(U_{\mathrm{H}}^{(\gamma )}\) is the operator of a HWP with optic axis set at γ. We implemented the feedforward operation through the same procedure. By analogy with (4) and (5), implementing the feedforward operation by itself, setting the corresponding HWP at θ/4 + π/8, gives $$\left( {\begin{array}{*{20}{c}} {|{\mathrm{h}}\rangle } \\ {|{\mathrm{v}}\rangle } \end{array}} \right)\mathop{\longrightarrow}\limits^{{U_{\mathrm{Q}}^{(\pi /4)}}}\left( {\begin{array}{*{20}{c}} {e^{i\pi /4}|{\mathrm{r}}\rangle } \\ {e^{ - i\pi /4}|{\mathrm{l}}\rangle } \end{array}} \right)\mathop{\longrightarrow}\limits^{{U_{\mathrm{H}}^{(\theta /4 + \pi /8)}}}\left( {\begin{array}{*{20}{c}} {|{\mathrm{l}}\rangle } \\ {e^{i\theta }|{\mathrm{r}}\rangle } \end{array}} \right).$$ Combining both allowed us to encode the phase shift \(\phi - \theta\) between the two arms of the interferometer.
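The waveplate algebra of Eqs. (4) and (5) can be verified numerically. The sketch below assumes one common Jones-matrix convention (a HWP at angle g acting as [[cos 2g, sin 2g], [sin 2g, −cos 2g]] up to a global phase, with |r⟩ = (|h⟩ − i|v⟩)/√2 and |l⟩ = (|h⟩ + i|v⟩)/√2); other sign conventions flip the required HWP angle:

```python
import cmath, math

# Numerical check of Eqs. (4)-(5) under the stated convention.
def hwp(g):
    # Jones matrix of a half-wave plate at angle g (global phase dropped)
    return [[math.cos(2 * g), math.sin(2 * g)],
            [math.sin(2 * g), -math.cos(2 * g)]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

s2 = math.sqrt(2)
r = [1 / s2, -1j / s2]  # right-circular polarisation
l = [1 / s2, 1j / s2]   # left-circular polarisation

phi = 1.234                    # an arbitrary test phase
g = -phi / 4 + math.pi / 8     # HWP angle from Eq. (5)

# Input components carry the phases of Eq. (4): e^{i pi/4}|r>, e^{-i pi/4}|l>.
out_r = [cmath.exp(1j * math.pi / 4) * c for c in apply(hwp(g), r)]   # -> |l>
out_l = [cmath.exp(-1j * math.pi / 4) * c for c in apply(hwp(g), l)]  # -> |r>
rel = cmath.phase((out_r[0] / l[0]) / (out_l[0] / r[0]))
# rel == phi: the two arms acquire the relative phase of Eq. (5)
```

The HWP swaps the circular components and imprints a phase ±2g on them, so the arms pick up a relative phase π/2 − 4g = ϕ at this setting.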
The next step was to perform the adaptive measurements, which we implemented in a probabilistic manner. As the feedback-controlled unitary operation \({\cal R}(\theta )\) has only two settings in this scheme, we set the corresponding HWP at \(\theta = 0\) and collected data for a fixed period of time. We recorded only those coincidence events where detector C (Fig. 2) registered a d-polarised photon, as shown in Table 1. We repeated this for \(\theta = \pi /8\) and detection of an a-polarised photon at detector C. In other words, when the photon in mode C is projected onto the |d〉 (|a〉) state, it is expected that the feedforward unit is in an OFF (ON) setting, equivalent to dialling \(\theta = 0{\kern 1pt} (\theta = \pi /8)\) for the HWP acting on the photon in mode T. This provides for characterisation of the protocol performance without active switching. Each single shot detection (recorded coincidence) provides \(\phi _{{\mathrm{est}}} = \pi (\phi _0 \times 2^0 + \phi _1 \times 2^1)/2\). Here, \(\phi _0\phi _1 \in \{ 00,\,01,\,10,\,11\} \leftrightarrow \{ {\mathrm{dd}},\,{\mathrm{ad}},\,{\mathrm{da}},\,{\mathrm{aa}}\}\). The probability of obtaining the \(\phi _0\phi _1\) result is equal to the number of times \(n_{\phi _0\phi _1}\) that this measurement result occurs, divided by the size of the ensemble nens over which the Holevo variance is calculated. Thus from the measurement record we evaluated the true phase ϕ using the relation $$\phi \approx {\mathrm{arg}}\left[ {\frac{1}{{n_{{\mathrm{ens}}}}}\mathop {\sum}\limits_{\phi _0 = 0}^1 {\mathop {\sum}\limits_{\phi _1 = 0}^1 {n_{\phi _0\phi _1}} } \,{\mathrm{exp}}\left( {i\phi _{{\mathrm{est}}}} \right)} \right],$$ which becomes exact when \(n_{{\mathrm{ens}}} \to \infty\).
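The single-shot decoding and the estimator of Eq. (7) can be sketched as follows (the dictionary and function names are ours; the letter-to-bit pairing follows the correspondence quoted above, and conventions for which photon supplies which bit vary):

```python
import cmath, math

# Map each coincidence outcome to the bit pair (phi_0, phi_1) of the text,
# then to phi_est = pi*(phi_0*2^0 + phi_1*2^1)/2, one of {0, pi/2, pi, 3pi/2}.
OUTCOME_BITS = {"dd": (0, 0), "ad": (0, 1), "da": (1, 0), "aa": (1, 1)}

def phi_est(outcome):
    b0, b1 = OUTCOME_BITS[outcome]
    return math.pi * (b0 * 1 + b1 * 2) / 2

def phase_from_counts(counts):
    """Eq. (7): arg of the count-weighted sum of exp(i*phi_est)."""
    n_ens = sum(counts.values())
    s = sum(n * cmath.exp(1j * phi_est(o)) for o, n in counts.items())
    return cmath.phase(s / n_ens) % (2 * math.pi)

# e.g. a record dominated by 'dd' yields an estimate near phi = 0
```

Because the four single-shot values lie on the unit circle, the weighted argument in Eq. (7) interpolates between them as the counts shift with ϕ.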
The conditional Holevo variance \(V_{\mathrm{H}}^\phi\) is then calculated according to \(V_{\mathrm{H}}^\phi = \left| {\left\langle {\cal S} \right\rangle _{\phi _{{\mathrm{est}}}}} \right|^{ - 2} - 1\), with \({\cal S} = {\mathrm{exp}}[i(\phi - \phi _{{\mathrm{est}}})]\). Finally, the unconditional Holevo variance18,24 is calculated as \(V_{\mathrm{H}} = \left| {\left\langle {\cal S} \right\rangle _{\phi _{{\mathrm{est}}},\phi }} \right|^{ - 2} - 1\), or, equivalently, $$V_{\mathrm{H}} = \left| {\left\langle {(V_{\mathrm{H}}^\phi + 1)^{ - 1/2}} \right\rangle _\phi } \right|^{ - 2} - 1.$$

The data sets generated during the current study are available from the corresponding authors on reasonable request.

References

1. Wiseman, H. M. & Milburn, G. J. Quantum Measurement and Control (Cambridge University Press, Cambridge, 2010).
2. Giovannetti, V., Lloyd, S. & Maccone, L. Quantum metrology. Phys. Rev. Lett. 96, 010401 (2006).
3. Moreau, P.-A. et al. Demonstrating an absolute quantum advantage in direct absorption measurement. Sci. Rep. 7, 6256 (2017).
4. Sabines-Chesterking, J. et al. Sub-shot-noise transmission measurement enabled by active feed-forward of heralded single photons. Phys. Rev. Appl. 8, 014016 (2017).
5. Caves, C. M. Quantum-mechanical noise in an interferometer. Phys. Rev. D 23, 1693–1708 (1981).
6. Giovannetti, V., Lloyd, S. & Maccone, L. Advances in quantum metrology. Nat. Photon. 5, 222–229 (2011).
7. Dowling, J. P. Quantum optical metrology—the lowdown on high-N00N states. Contemp. Phys. 49, 125–143 (2008).
8. Nagata, T., Okamoto, R., O'Brien, J. L., Sasaki, K. & Takeuchi, S. Beating the standard quantum limit with four-entangled photons. Science 316, 726–729 (2007).
9. Slussarenko, S. et al. Unconditional violation of the shot noise limit in photonic quantum metrology. Nat. Photon. 11, 700–703 (2017).
10. Berry, D. W. & Wiseman, H. M. Optimal states and almost optimal adaptive measurements for quantum interferometry. Phys. Rev. Lett.
85, 5098–5101 (2000).
11. Higgins, B. L., Berry, D. W., Bartlett, S. D., Wiseman, H. M. & Pryde, G. J. Entanglement-free Heisenberg-limited phase estimation. Nature 450, 393–396 (2007).
12. Xiang, G. Y., Higgins, B. L., Berry, D. W., Wiseman, H. M. & Pryde, G. J. Entanglement-enhanced measurement of a completely unknown optical phase. Nat. Photon. 5, 43–47 (2011).
13. Berni, A. A. et al. Ab initio quantum-enhanced optical phase estimation using real-time feedback control. Nat. Photon. 9, 577–581 (2015).
14. Holevo, A. S. Covariant measurements and imprimitivity systems. Lect. Notes Math. 1055, 153–172 (1984).
15. Wiseman, H. M., Berry, D. W., Bartlett, S. D., Higgins, B. L. & Pryde, G. J. Adaptive measurements in the optical quantum information laboratory. IEEE J. Sel. Top. Quantum Electron. 15, 1661–1672 (2009).
16. Waldherr, G. et al. High-dynamic-range magnetometry with a single nuclear spin in diamond. Nat. Nanotech. 7, 105–108 (2012).
17. Nusran, N. M., Ummal Momeen, M. & Gurudev Dutt, M. V. High-dynamic-range magnetometry with a single electronic spin in diamond. Nat. Nanotech. 7, 109–113 (2012).
18. Berry, D. W., Wiseman, H. M. & Breslin, J. K. Optimal input states and feedback for interferometric phase estimation. Phys. Rev. A 63, 053804 (2001).
19. Sanders, B. C. & Milburn, G. J. Optimal quantum measurements for phase estimation. Phys. Rev. Lett. 75, 2944–2947 (1995).
20. Griffiths, R. B. & Niu, C.-S. Semiclassical Fourier transform for quantum computation. Phys. Rev. Lett. 76, 3228–3231 (1996).
21. Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2001).
22. White, A. G. et al. Measuring two-qubit gates. J. Opt. Soc. Am. B 24, 172–183 (2007).
23. Davison, A. C. & Hinkley, D. V. Bootstrap Methods and Their Application (Cambridge University Press, Cambridge, 1998).
24. Berry, D. W. et al. How to perform the most accurate possible phase measurements. Phys. Rev. A 80, 052114 (2009).
25. Modi, K., Céleri, L.
C., Thompson, J. & Gu, M. Fragile states are better for quantum metrology. Preprint at arXiv:1608.01443 (2016).
26. Berry, D. W. Adaptive Phase Measurements. PhD thesis, The University of Queensland; arXiv:quant-ph/0202136 (2001).
27. Weston, M. M. et al. Efficient and pure femtosecond-pulse-length source of polarization-entangled photons. Opt. Express 24, 10869–10879 (2016).
28. Marsili, F. et al. Detecting single infrared photons with 93% system efficiency. Nat. Photon. 7, 210–214 (2013).
29. Barz, S., Cronenberg, G., Zeilinger, A. & Walther, P. Heralded generation of entangled photon pairs. Nat. Photon. 4, 553–556 (2010).
30. Ulanov, A. E., Fedorov, I. A., Sychev, D., Grangier, P. & Lvovsky, A. I. Loss-tolerant state engineering for quantum-enhanced metrology via the reverse Hong-Ou-Mandel effect. Nat. Commun. 7, 11925 (2016).
31. Knysh, S., Smelyanskiy, V. N. & Durkin, G. A. Scaling laws for precision in quantum interferometry and the bifurcation landscape of the optimal state. Phys. Rev. A 83, 021804 (2011).
32. Ralph, T. C. & Pryde, G. J. Optical quantum computation. Prog. Opt. 54, 209–269 (2009).
33. Hong, C. K., Ou, Z. Y. & Mandel, L. Measurement of subpicosecond time intervals between two photons by interference. Phys. Rev. Lett. 59, 2044–2046 (1987).

The authors thank R. B. Patel for assistance with data acquisition code. This research was supported by the Australian Research Council Centre of Excellence Grant No. CE110001027. S.D. acknowledges financial support through an Australian Government Research Training Program Scholarship. D.W.B. is funded by an Australian Research Council Discovery Project Grant No. DP160102426.

Centre for Quantum Dynamics and Centre for Quantum Computation and Communication Technology, Griffith University, Brisbane, Queensland, 4111, Australia: Shakib Daryanoosh, Sergei Slussarenko, Howard M. Wiseman & Geoff J. Pryde. Department of Physics and Astronomy, Macquarie University, Sydney, NSW, 2113, Australia: Dominic W. Berry.

S.D. and H.M.W. developed the theory; S.D., S.S. and G.J.P. designed and performed the experiment; D.W.B. performed the theoretical comparison of different measurement schemes. All the authors discussed the results and contributed to the writing of the manuscript. Correspondence to Howard M. Wiseman or Geoff J. Pryde.

Daryanoosh, S., Slussarenko, S., Berry, D. W. et al. Experimental optical phase measurement approaching the exact Heisenberg limit. Nat. Commun. 9, 4606 (2018). https://doi.org/10.1038/s41467-018-06601-7

Nature Communications (Nat Commun) ISSN 2041-1723 (online)
Why is Jupiter called a "Gas Giant"?

Jupiter's enormous gravity would turn its atmosphere first into a liquid from a certain depth, and then into a solid further towards its centre. So Jupiter has a solid core, above which is a liquid layer and an atmosphere above that. How is Jupiter fundamentally different from the (solid) inner planets? Why is Jupiter called a Gas Giant?

jupiter terminology planet gas-giant

asked by boardrider; edited by Nathan Tuggy

Keep in mind that most of those models on the inner workings of the larger planets have not been thoroughly sampled. They are projections based on what is known concerning the obvious: composition, mass, volume, hydrodynamics visible on the surfaces, magnetic fields, et al. Very few probes have been dropped into the interiors of those planets, and so there is quite a lot of room for uncertainty. – can-ned_food

Shouldn't this be asked on astronomy SE? – Walter

Clearly off topic here. No reference to observations of a probe. Flagged but flag declined. – James K

@JamesK: Probably because study of the sun and planets (but not necessarily exoplanets) is on-topic here in its own right, without regard to how this is accomplished. – Nathan Tuggy

Jupiter being a gas giant is not about its appearance, as another answer stated. It's only about the mass distribution of the planet. Jupiter's mass is 320 Earth masses, while we know from the Juno mission that the rock/ice in the core accounts for 5–25 of these Earth masses. So the rest, about 300 Earth masses, is gas. Thus Jupiter is a gas giant. It is really that simple. The more exotic states of matter that the solids in its interior probably exist in don't affect the definition of a gas giant. To put that into perspective: the same thing goes for Saturn, 95 Earth masses total, with an estimated 20 Earth masses in the core. So 75 Earth masses of gas. Gas giant.
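The mass-budget arithmetic in this answer can be written out directly (numbers as quoted above, in Earth masses):

```python
# Sketch of the mass-budget bookkeeping: total mass and the maximum
# estimated rock/ice core mass, both in Earth masses, as quoted.
planets = {"Jupiter": (320, 25), "Saturn": (95, 20)}

def min_gas_mass(total, core_max):
    # Everything that is not core is gas, so this is a lower bound.
    return total - core_max

for name, (total, core_max) in planets.items():
    gas = min_gas_mass(total, core_max)
    print(f"{name}: at least {gas} Earth masses of gas ({gas / total:.0%})")
# Jupiter: at least 295 Earth masses of gas (92%)
# Saturn: at least 75 Earth masses of gas (79%)
```

Even with the largest core estimates, the overwhelming majority of both planets' mass is gas, which is the point of the answer.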
Uranus and Neptune are different. Their masses are 14 and 17 Earth masses, respectively, with at least half of that in rock/ice. Thus they're not gas giants, but are loosely named 'ice giants'. Another way of inferring what Jupiter is made of is taking a look at its average density. This is how it was done historically, before we had sophisticated computer models and high-pressure lab experiments. Knowing Newton's laws, one can measure Jupiter's mass $M$ from its orbiting moons. Its radius $r$ is known from its angular size and distance. Thus one can calculate its average density $\bar{\rho} = \frac{M}{4/3 \pi r^3}$ and finds a value of $\bar{\rho}_♃ = 1.326~\mathrm{g/cm^3}$. This is significantly lower than what Earth has ($\bar{\rho}_{\oplus} = 5.55~\mathrm{g/cm^3}$) and the other rocky planets. Through simple weighted averaging with some rock mass of a few percent, it was then realized early on that one needs a lot of gas to reach such a low average density for such a planet as Jupiter. An interesting detail here is that one cannot build Jupiter of gas only: the resulting average density would be lower than the real one. This is how it was realized that some very dense material needs to exist somewhere in Jupiter, possibly a core of solid material. – AtmosphericPrisonEscape

"Jupiter's mass is 320 Earth masses, while we know from the Juno mission that 5-25 of those is in rock/ice in the core. So the rest of about 300 Earth masses is gas." No liquid? – David Richerby

"It's only about the mass distribution of a planet." "The more exotic states of matter that the solids in its interior probably exist in, don't affect the definition of a gas giant." It would be nice if the answer actually stated the definition of a gas giant. – JiK

@JiK: There is no hard definition. In the case of the solar system the boundaries are clear, because our planets are nicely ordered in a hierarchy of masses.
But when you look into the exoplanet database, then all we have are average densities, and those are all over the place. If you're interested I can edit a plot for that in. There are definitions for gas giants from planet formation theory, but there the physics is not 100% clear and classifications are still disputed.

@DavidRicherby: Sure, there's gonna be various liquids floating around there at high pressures. But I was presenting the fundamental argumentation why one needs another very dense component besides hydrogen to explain Jupiter's mass at a given average density.

@DavidRicherby The majority of the "gas" is in a supercritical fluid phase, which may be considered a hybrid between liquid and gas - the distinction doesn't exist at those pressures/temperatures. "Supercritical fluid giant" doesn't quite have the same ring to it though. – pericynthion

One reason they are called gas giants is that they are mostly composed of elements that are gaseous at Earth-like temperatures and pressures. Jupiter is primarily composed of hydrogen with a quarter of its mass being helium, though helium comprises only about a tenth of the number of molecules. Jupiter's upper atmosphere is about 88–92% hydrogen and 8–12% helium by percent volume of gas molecules. A helium atom has about four times as much mass as a hydrogen atom, so the composition changes when described as the proportion of mass contributed by different atoms. Thus, Jupiter's atmosphere is approximately 75% hydrogen and 24% helium by mass, with the remaining one percent of the mass consisting of other elements. The atmosphere contains trace amounts of methane, water vapor, ammonia, and silicon-based compounds. There are also traces of carbon, ethane, hydrogen sulfide, neon, oxygen, phosphine, and sulfur. The outermost layer of the atmosphere contains crystals of frozen ammonia.
The interior contains denser materials - by mass it is roughly 71% hydrogen, 24% helium, and 5% other elements.[21][22] Through infrared and ultraviolet measurements, trace amounts of benzene and other hydrocarbons have also been found. https://en.wikipedia.org/wiki/Jupiter

So Jupiter and Saturn are almost totally composed of hydrogen and helium, elements that are gaseous at Earth-like temperatures and pressures. Of course the temperatures and pressures deeper inside Jupiter and Saturn are not exactly Earth-like! But the elements that they are composed of are commonly called gases even though they might be in exotic conditions such as liquid metallic hydrogen under the immense pressures and temperatures inside the planets. Most of us think of hydrogen and helium as gaseous elements, not liquids, or solids, or highly exotic forms of matter. Thus Jupiter and Saturn are "gas giants". A second reason they are called "gas giants" is historical. Famed science fiction writer James Blish wrote a story called "Solar Plexus", published in Astonishing Stories, September 1941. "Solar Plexus" was rewritten and republished in the anthology Beyond Human Ken, edited by Judith Merril, 1954. The 1954 rewritten version contained the line: A quick glance over the boards revealed that there was a magnetic field of some strength near by, one that didn't belong to the invisible gas giant revolving half a million miles away. Science fiction readers who knew anything about the structure of giant planets thought that "gas giant" was a very fitting phrase to describe them. And some of them were professional astronomers. Thus the phrase began to be used by astronomers to describe the giant planets in the solar system. – M.A. Golding

@DiegoSánchez - Why do you say that "those numbers don't add up"? Are you maybe assuming that all molecules/atoms are equally massive? That wouldn't be correct.
For instance, a helium atom is about four times as massive as a hydrogen atom is. $\endgroup$ – Mico $\begingroup$ @AtmosphericPrisonEscape. Helium molecules are monoatomic, hence He is both an atom and a molecule in itself. Also, some ways of addressing people are politer than others, just saying. $\endgroup$ – Diego Sánchez $\begingroup$ @DiegoSanchez: Helium, under exotic conditions, can in fact bond into an actual molecule, such that it has a chemical bond (covalent or otherwise). A single helium atom, in the absence of any bonding, is not a molecule. $\endgroup$ $\begingroup$ @AtmosphericPrisonEscape. Did you read my first comment? The Wikipedia article mentions those figures and I wanted to know how that can be possible. Things spiralled out of control from there, but I honestly assumed you knew more about it. I know it's not very relevant, but it somehow sheds some doubt on the main reference used for this answer. $\endgroup$ $\begingroup$ Perhaps the answer could make its statement clearer, but regarding @AtmosphericPrisonEscape's concern that the gaseous state at STP conditions has nothing to do with whether a planet is classified as a "gas giant" or not: the term is historical, from back when the physical properties of the constituent elements in a "gas giant" were not well studied or theorized. People like Blish probably thought "So, it is mostly hydrogen and helium, and has no visible terrestrial surface? And, it is a giant planet? 'Gas giant'!" $\endgroup$ Jupiter is composed primarily of gaseous and liquid matter. It is the largest of the gas giants, and like them, is divided between a gaseous outer atmosphere and an interior that is made up of denser materials. Its upper atmosphere is composed of about 88–92% hydrogen and 8–12% helium by percent volume of gas molecules, and approx. 75% hydrogen and 24% helium by mass, with the remaining one percent consisting of other elements.
The atmosphere contains trace amounts of methane, water vapor, ammonia, and silicon-based compounds as well as trace amounts of benzene and other hydrocarbons. There are also traces of carbon, ethane, hydrogen sulfide, neon, oxygen, phosphine, and sulfur. Crystals of frozen ammonia have also been observed in the outermost layer of the atmosphere. The interior contains denser materials, such that the distribution is roughly 71% hydrogen, 24% helium and 5% other elements by mass. It is believed that Jupiter's core is a dense mix of elements – a surrounding layer of liquid metallic hydrogen with some helium, and an outer layer predominantly of molecular hydrogen. The core has also been described as rocky, but this remains unknown as well.
Polyhydroxyalkanoate production from rice straw hydrolysate obtained by alkaline pretreatment and enzymatic hydrolysis using Bacillus strains isolated from decomposing straw Doan Van Thuoc1, Nguyen Thi Chung1 & Rajni Hatti-Kaul2 (ORCID: orcid.org/0000-0001-5229-5814) Bioresources and Bioprocessing volume 8, Article number: 98 (2021) Rice straw is an important low-cost feedstock for bio-based economy. This report presents a study in which rice straw was used both as a source for isolation of bacteria producing the biodegradable polyester polyhydroxyalkanoate (PHA) and as the carbon source for the production of the polymer by the isolated bacteria. Of the 100 bacterial isolates, seven were found to be positive for PHA production by Nile blue staining and were identified as Bacillus species by 16S rRNA gene sequence analysis. Three isolates showed 100% sequence identity to B. cereus, one to B. paranthracis, two with 99 and 100% identity to B. anthracis, while one was closely similar to B. thuringiensis. For use in PHA production, rice straw was subjected to mild alkaline pretreatment followed by enzymatic hydrolysis. Comparison of pretreatment by 2% sodium hydroxide, 2% calcium hydroxide and 20% aqueous ammonia, respectively, at different temperatures showed maximum weight loss with NaOH at 80 °C for 5 h, but ammonia for 15 h at 80 °C led to the highest lignin removal of 63%. The ammonia-pretreated rice straw also gave the highest release of total reducing sugar, up to 92%, on hydrolysis by a cocktail of cellulases and hemicellulases at 50 °C. Cultivation of the Bacillus isolates on the pretreated rice straw revealed the highest PHA content of 59.3 and 46.4%, and PHA concentration of 2.96 and 2.51 g/L, by Bacillus cereus VK92 and VK98, respectively.
Rice straw is an obvious choice to be used as a residual low-cost feedstock for the bio-based economy, especially in South- and South-East Asian countries that are the main producers of rice, the third most important grain crop in the world after wheat and corn (Binod et al. 2010). For every ton of rice grain, 700–1500 kg of rice straw is produced, and over 700 million tons of rice straw is produced worldwide (Bakker et al. 2013). In Vietnam, the total rice planted area is about 7.5 million ha, with a total of 40 million tons of rice and about 50 million tons of rice straw being produced annually (Diep et al. 2015). Some of the rice straw is collected for cooking, animal feed, roof covering or mushroom production. With improvement in the living conditions of farmers, such use of rice straw has become less common and 25–60% (depending on the region) is left for burning in the fields or considered as waste material (Bakker et al. 2013; Nguyen et al. 2016; Le et al. 2020). Rice straw burning is estimated to release 11 tons of CO2-equivalent per hectare of land in addition to a large amount of NOx, a precursor to photochemical smog, and fine dust (Sarkar et al. 2012; Bakker et al. 2013). Valorization of rice straw would thus offset such emissions. Many studies claim that 25–35% of the straw may be available for biofuels and other products after setting aside what is needed to conserve soil quality and for competing uses (Bakker et al. 2013). There have been many studies on the use of rice straw for the production of second-generation biofuel, and more recently also for other products such as insulation materials, composites, biodegradable plastics and chemicals (Abraham et al. 2016; Bilo et al. 2018; Goodman 2020; Overturf et al. 2020). The biodegradable bio-based plastics that are increasingly attracting interest to replace the fossil-based plastics are the microbial polyesters—polyhydroxyalkanoates (PHAs) (Bedade et al. 2021; Naser et al. 2021; Yadav et al.
2021); their high rate of biodegradability even in marine environments lowers the risk of their accumulation as microplastics (Suzuki et al. 2021). As a major part of the production cost of PHA is attributed to the carbon source, use of rice straw could provide a potential low-cost alternative besides lowering the environmental impact caused by its burning (Obruca et al. 2015; Heng et al. 2016). Rice straw contains over 70% carbohydrates in the form of cellulose and hemicellulose, access to which requires pretreatment, as for all lignocelluloses, for the removal of the lignin network (Binod et al. 2010; Sarkar et al. 2012). Unlike other agricultural residues, the straw also contains high ash content that poses a challenge in thermal processes leading to a high tendency for fouling the combustion systems. Various methods have been tested for pretreatment of rice straw (Hendriks and Zeeman 2009; Agbor et al. 2011; Tsegaye et al. 2019; 2020; Guo et al. 2020; Kurokochi and Sato 2020; Zang et al. 2020). Production of PHAs by Bacillus firmus and Cupriavidus necator using the hydrolysate of acid or alkali-pretreated rice straw with good yields has been reported (Sindhu et al. 2013; Ahn et al. 2015, 2016). Mild alkaline pretreatment has been widely applied on rice straw because of its simplicity, efficiency and relatively low cost. It increases the surface area by swelling the biomass particles and increasing carbohydrate accessibility to enzymes, while reducing the degree of polymerization and crystallinity of the cellulose (Tsegaye et al. 2019; Li et al. 2020). As very little of the carbohydrate is solubilized, most of it can be converted to sugars during the subsequent hydrolysis step (Hendriks and Zeeman 2009; Binod et al. 2010; Agbor et al. 2011; Ashoor and Sukumaran 2020). 
The present report describes a study in which rice straw was used both as a source for isolation of PHA-producing bacteria, as well as the carbon source for the production of the polymer by the isolated bacteria. Decomposing rice straw was used for bacterial isolation, while dried straw pretreated with alkali followed by enzymatic hydrolysis was used as carbon source for PHA production. The effect of pretreatment with three different alkaline reagents (aqueous ammonia, sodium hydroxide and calcium hydroxide) at different temperatures on lignin removal and the sugar recovery from the enzymatic step was compared to find the best conditions for preparing the carbon source for PHA production. Rice straw (Oryza sativa L.) was collected from a field in the rural region of Dong Anh province in Vietnam, dried in air, ground and sieved. The particles that passed through a sieve with mesh size of 0.5 mm but not through a sieve with 0.2 mm mesh size were collected. Rice straw composition was analyzed using a standard analytical procedure (National Renewable Energy Laboratory (NREL), Golden, CO, USA) (Sluiter et al. 2008), and was determined to contain 43.1 ± 1.2% glucan, 17.7 ± 0.5% xylan, 3.0 ± 0.1% arabinan, 2.6 ± 0.1% galactan, and 12.9 ± 0.2% acid-insoluble lignin on a dry weight basis. Isolation of polyhydroxyalkanoate-producing bacteria Decomposing rice straw was collected from the same field as above, ground, suspended in 0.9% NaCl solution, and the supernatant serially diluted, prior to spreading 100 µL of the diluted sample on a solid medium [meat–peptone–agar (MPA)] containing per liter: 5 g each of NaCl, meat extract, and peptone, and 20 g granulated agar. The plates were incubated at 35 °C for 48 h. Hundreds of colonies were picked and plated again on fresh MPA-agar medium. PHA-producing bacteria were then detected by Nile blue A staining method (Spiekermann et al. 
1999), for which the bacterial isolates were grown on the modified MPA medium containing per liter: 5 g NaCl, 1 g meat extract, 1 g peptone, 20 g glucose, 20 g granulated agar, and Nile red (Sigma) (dissolved in dimethylsulfoxide) at a final concentration of 0.5 µg dye per mL of the medium. The agar plates were incubated at 35 °C for 2 days and then exposed to ultraviolet light (312 nm). The colonies with fluorescent bright orange staining were chosen for further studies. Phylogenetic characterization of the selected PHA-producing bacteria The genomic DNA of the seven selected strains was extracted by the Thermo Scientific GeneJET Genomic DNA Purification Kit according to the manufacturer's recommendations. The 16S rRNA gene was amplified using the universal primers 27F (5'-AGAGTTTGATCCTGGCTCAG-3') and 1492R (5'-GGTTACCTTGTTACGCTT-3'). Sequencing of the amplified DNA fragment was performed at 1st Base (Singapore), and the GenBank database was used to search for 16S rRNA gene similarities. Phylogenetic analysis based on the 16S rRNA gene was performed with the aid of MEGA X software (Kumar et al. 2018) using the Maximum Likelihood method and Tamura–Nei model (Tamura and Nei 1993). The almost complete sequences of the 16S rRNA gene of the bacterial strains were deposited in GenBank/EMBL/DDBJ databases and used in the analysis. Alkaline pretreatment of dried rice straw Three different alkaline solutions (2% sodium hydroxide, 2% calcium hydroxide and 20% aqueous ammonia) were tested for the pretreatment of rice straw at a solid:liquid ratio of 1:10. The mixtures of 10 g dry weight rice straw and 100 mL alkaline solution were placed in 250-mL glass bottles and incubated at different temperatures and time periods. Subsequently, the soaked rice straw was recovered by filtration, washed with clean water until neutral pH, and then dried at 105 °C for 24 h prior to enzymatic hydrolysis.
Rice straw recovery was calculated as the percentage of the insoluble fraction recovered after pretreatment with respect to that before pretreatment. Enzymatic hydrolysis of the pretreated straw Both pretreated and untreated rice straw samples were used as substrates for enzymatic hydrolysis. Three enzymes, Celluclast 1.5 L (129.3 mg protein/mL, 30.7 cellobiose units/mL and 63.8 filter paper units/mL), Novozyme 188 (102.2 mg protein/mL, 626.4 CBU/mL), and Pentopan Mono BG (2500 U/g), provided by Novozymes (Bagsvaerd, Denmark), were used, and the optimum conditions for rice straw hydrolysis were determined after several trials. Pretreated or untreated rice straw (1 g) was mixed with 25 mL of sodium acetate buffer (pH 5.0) containing 1% (v/v) Celluclast 1.5L, 0.4% (v/v) Novozyme 188 and 0.2% (w/v) Pentopan, in 100-mL glass bottles at 50 °C in a shaker incubator at 180 rpm for 40 h. Samples were withdrawn at different time intervals for monomeric sugar (glucose, xylose and arabinose) analysis. Polyhydroxyalkanoate production from rice straw hydrolysate using the bacterial isolates The selected bacterial isolates were grown in 20 mL of liquid MPA medium in 100-mL Erlenmeyer flasks with rotary shaking at 180 rpm for 13 h. Subsequently, 1 mL of each culture was inoculated into 50 mL of modified MPA medium in 250-mL Erlenmeyer flasks. The medium contained per liter 5 g NaCl, 1 g meat extract, 1 g peptone, 20 g glucose or reducing sugars from hydrolysates, and the pH was initially adjusted to 7.0 using 0.05 M phosphate buffer. The cultures were incubated at 35 °C with rotary shaking at 180 rpm. Samples were withdrawn at 48 h of cultivation for determination of cell dry weight (CDW) and PHA content. The surface characteristics of the untreated and pretreated rice straw were analyzed by scanning electron microscopy (SEM) (S-4800, Hitachi, Tokyo, Japan).
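The straw-recovery figure defined at the start of this section is simply a dry-mass ratio. A minimal sketch with hypothetical numbers (10 g dry straw in, 7.05 g recovered, matching the 29.5% weight loss scale reported for the ammonia pretreatment in the Results):

```python
# Recovery = percent of insoluble dry matter retained after pretreatment.
def straw_recovery_percent(dry_wt_before_g, dry_wt_after_g):
    return 100.0 * dry_wt_after_g / dry_wt_before_g

# Hypothetical example: 10 g dry straw pretreated, 7.05 g recovered
print(straw_recovery_percent(10.0, 7.05))  # 70.5
```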
The PHA granules in the bacterial cells were observed by transmission electron microscopy (TEM) using a JEM-1010 TEM (Jeol Korea Ltd., Seoul, South Korea). The contents of sugars (glucose, xylose and arabinose) in the enzymatically hydrolyzed samples were determined using an HPLC system (Jasco, Tokyo, Japan) equipped with a refractive index detector (ERC, Taguchi, Japan). The sugars were separated on an Aminex HPX-87P column, using MilliQ water as mobile phase at a flow rate of 0.4 mL/min and a column temperature of 65 °C. The glucose concentration was used to calculate the glucan-to-glucose conversion as follows: $$\text{Glucan conversion}\ (\%) = \frac{\text{glucose liberated (g)} \times 0.9 \times 100}{\text{initial cellulose (g)}}$$ Cell dry weight (CDW) was determined by centrifuging 3 mL of the culture samples at 4,000×g for 10 min in pre-weighed centrifuge tubes; the pellet was washed once with 3 mL distilled water, centrifuged and dried at 105 °C until constant weight was obtained. The tubes were weighed again to calculate the CDW. PHA quantification was performed using a gas-chromatographic method (Huijberts et al. 1994). For this, about 10 mg of freeze-dried cells was mixed with 1 mL of chloroform and 1 mL of methanol solution containing 15% (v/v) sulphuric acid and 0.4% (w/v) benzoic acid. The mixture was incubated at 100 °C for 3 h to convert the constituents to their methyl esters. After cooling to room temperature, 0.5 mL of distilled water was added and the mixture was shaken for 30 s. The chloroform layer was transferred into a fresh tube and used for GC analysis to determine the PHA content. A sample volume of 2 μL was injected into the gas chromatography column (VARIAN, Factor Four Capillary Column, CP8907). The injection temperature was set at 250 °C and the detector temperature at 240 °C, while the column temperature was held at 60 °C for the first 5 min and then increased at 3 °C/min until 120 °C was reached.
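The conversion formula above translates directly to code; the factor 0.9 is the anhydro correction (162/180) that converts free glucose mass back to glucan equivalents. A sketch with illustrative numbers (not taken from the paper):

```python
# Glucan-to-glucose conversion (%), per the formula above.
# The factor 0.9 (= 162/180) converts glucose mass to anhydroglucose (glucan) mass.
def glucan_conversion_percent(glucose_liberated_g, initial_glucan_g):
    return glucose_liberated_g * 0.9 * 100.0 / initial_glucan_g

# Illustrative: 1 g straw at 43.1% glucan (the composition reported above)
# releasing 0.40 g glucose during hydrolysis
print(round(glucan_conversion_percent(0.40, 0.431), 1))  # 83.5
```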
Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) containing 12% valerate (Sigma) was used as a standard for calibration. Analyses of sugars, PHA and CDW were performed in triplicate. PHA content (weight percent, wt%) was calculated as the percentage ratio of PHA concentration to CDW, while residual cell mass (RCM) was defined as the CDW minus the PHA concentration (Lee et al. 2000). PHA yield (g/g) was calculated as the concentration of the polymer divided by the amount of sugar used. PHA productivity (g/L/h) was calculated as the PHA concentration divided by the cultivation time. Isolation and identification of PHA-producing bacteria from decomposing rice straw More than 100 bacterial colonies were isolated from decomposing rice straw, among which seven isolates were found to show significant PHA accumulation by Nile blue staining, and were used in this study. The phylogenetic characterization of the seven isolates based on their 16S rRNA gene sequences showed them to belong to the genus Bacillus (Fig. 1). The sequence of strain VK24 showed a high level of similarity (98.2%) with that of B. thuringiensis LDC 507. Two strains, VK33 and VK38, clustered together and showed the highest similarity of 99% and 100%, respectively, with B. anthracis IHB B 7021. Three other strains, VK91, VK92 and VK98, also clustered together and showed similarity of 100% with B. cereus DBA1.1. Strain VK164 showed 100% sequence identity with B. paranthracis NWPZ-61. According to earlier reports, the bacterial populations degrading the straw under anoxic conditions comprised mainly Clostridium species with Bacillus species as one of the minor groups (Weber et al. 2001), while Bacillus species were predominant during composting of rice straw (Hefnawy et al. 2013). In fact, Bacillus species are used as inoculants in rice straw composting, in which they play an important role in the degradation of cellulose and hemicellulose (Zhang et al. 2021).
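The four quantities defined in the Methods above (PHA content, residual cell mass, yield, productivity) can be sketched as follows. As an illustrative check, the input values approximate those later reported for B. cereus VK92 (≈5 g/L CDW, 2.96 g/L PHA after 48 h on 20 g/L reducing sugars); treating the full 20 g/L as "sugar used" is an assumption here:

```python
# PHA production metrics as defined in the Methods above.
def pha_metrics(cdw_g_per_l, pha_g_per_l, sugar_used_g_per_l, hours):
    return {
        "content_wt_pct": 100.0 * pha_g_per_l / cdw_g_per_l,  # PHA share of cell dry weight
        "rcm_g_per_l": cdw_g_per_l - pha_g_per_l,             # residual cell mass
        "yield_g_per_g": pha_g_per_l / sugar_used_g_per_l,    # polymer per sugar used
        "productivity_g_per_l_h": pha_g_per_l / hours,        # polymer per cultivation time
    }

m = pha_metrics(cdw_g_per_l=5.0, pha_g_per_l=2.96, sugar_used_g_per_l=20.0, hours=48)
print({k: round(v, 3) for k, v in m.items()})  # content 59.2 wt%, yield 0.148 g/g
```

These inputs reproduce the roughly 59 wt% content and 0.15 g/g yield reported for VK92 in the Results.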
Maximum likelihood phylogenetic tree based on 16S rRNA gene sequences showing the relationships between the seven selected strains and other strains of the genus Bacillus. Bar: 0.05 substitutions per position PHA production by several Bacillus species such as B. subtilis, B. firmus, B. cereus and B. thuringiensis, isolated from different ecosystems and using different carbon sources, has been reported earlier (Singh et al. 2009; Sindhu et al. 2013; Gowda and Shivakumar 2014; Odeniyi and Adeola 2017; Mohandas et al. 2018; Ponnusamy et al. 2019; Mohammed et al. 2020). Synthesis of both homo- and copolymers has been reported depending on the substrate (Singh et al. 2009; Mohapatra et al. 2017). Mild alkaline pretreatment of rice straw and dry matter loss Alkaline pretreatment of the dried rice straw was tested with 2% NaOH, 2% Ca(OH)2 and 20% aqueous NH3, respectively, at a solid:liquid ratio of 1:10. These reagents have been used earlier for pretreatment of various biomass substrates including rice straw, rice hulls, switchgrass, corn stover, wood and bagasse (Chang et al. 1997, 1998; Park and Kim 2012; Sindhu et al. 2015; Heng et al. 2016; Kobkam et al. 2018; Tsegaye et al. 2019). Alkaline pretreatment solubilizes lignin by disrupting its structure and breaking the linkages with carbohydrate residues. It also removes acetyl and uronic acid substitutions on hemicellulose and helps to improve the accessibility of the polysaccharides to the hydrolytic enzymes (Carvalheiro et al. 2008; Bali et al. 2015; Kobkam et al. 2018; Tsegaye et al. 2019), though it can also result in their partial hydrolysis under strong alkaline conditions (Zhang and Cai 2008; Kobkam et al. 2018; Tsegaye et al. 2019). In accordance with previous reports (Ko et al. 2009; Rodrigues et al. 2016; Sophonputtanaphoca et al. 2018), the amount of residual rice straw decreased with increasing temperature and time of pretreatment, increase in temperature being the more effective parameter (Table 1).
The highest straw recovery with 2% NaOH was 83–84% after 1 h treatment at 30–50 °C. A higher degree of solubilization was achieved during 1–5 h at 80 °C (Table 1), while removing 41–52% of lignin. In an earlier study, release of about 61% of hemicellulose and 36.2% of lignin was reported on treatment of chopped rice straw with 2% NaOH at a solid:liquid ratio of 1:4, 85 °C for 1 h (Zhang and Cai 2008). Table 1 Rice straw recovery after pretreatment with alkali at different incubation temperatures and times Pretreatment with 2% Ca(OH)2 resulted in rather high recovery of residual straw at all temperatures, although a slight decrease was seen with increasing time; the lowest recovery (84.6%) was after 1 h treatment at 121 °C. Ca(OH)2 is a weaker alkali with low solubility in water and hence requires a much longer treatment time and a higher temperature; moreover, the calcium ions form calcium–lignin complexes that hinder proper lignin solubilization (Rodrigues et al. 2016). Aqueous NH3 is considered to be highly selective for lignin removal; it cleaves C–O–C bonds in lignin as well as ether and ester bonds in the lignin–carbohydrate complex (Binod et al. 2010), and has a significant swelling effect on lignocellulose. Residual straw recovery was intermediate to that obtained with NaOH and Ca(OH)2. Pretreatment with 20% aqueous NH3 for 15 h at 80 °C resulted in 29.5% weight loss of the straw (Table 1) and 63% lignin removal (data not shown), which agrees well with the study by Park and Kim (2012) reporting removal of up to 66% lignin along with 22.9% hemicellulose and 11% cellulose from rice straw treated with 15% aqueous NH3 (solid:liquid ratio of 1:10) at 60 °C for 24 h. The alkaline-pretreated rice straw was much softer than the untreated one, indicating removal of large amounts of lignin and hemicellulose. As seen in Fig.
2, the rice straw pretreated with NaOH was distorted and separated from the initially connected structure, thus increasing the surface area (Fig. 2B), in comparison to the untreated straw with rigid, smooth and highly ordered fibrils (Fig. 2A). SEM micrographs of rice straw before (A) and after (B) pretreatment with 2% NaOH at 80 °C for 5 h Enzymatic hydrolysis for releasing sugars from the polysaccharides The effect of pretreatment with different alkaline reagents on the efficiency of the enzymatic hydrolysis step was then evaluated (Additional file 1: Figures S1–S3). Both pretreated and untreated rice straw samples (1 g) were incubated in 4% w/v suspension at pH 5.0, 50 °C with an enzyme cocktail comprising 1% v/v cellulase preparation derived from Trichoderma reesei (Celluclast 1.5L), 0.4% v/v β-glucosidase from Aspergillus niger (Novozyme 188) and 0.2% w/v 1,4-β-xylanase from Thermomyces lanuginosus (Pentopan Mono BG). Glucose and xylose were the major products of hydrolysis, besides a small amount of arabinose. The maximum yield of total reducing sugars from untreated rice straw was only 34.2%. In the case of the straw pretreated with NaOH (Additional file 1: Figure S1), up to 72.2% of total reducing sugar was obtained for the sample subjected to pretreatment at 30 °C for 7.5 h, while longer pretreatment time gave lower yield (Additional file 1: Figure S1A). Increasing the pretreatment temperature to 50 °C (2.5 or 5 h) and 80 °C (3 h) resulted in a slight increase in the sugar yield to about 74% (Additional file 1: Figure S1B) and 77% (Additional file 1: Figure S1C), respectively. In the case of Ca(OH)2-pretreated rice straw, the highest sugar yield of 60–66% was obtained for the samples pretreated at 30 °C for 48 h (Additional file 1: Figure S2A) or 80 °C for 24 h (Additional file 1: Figure S2B), while significantly lower yield was noted when the pretreatment was performed at 121 °C (Additional file 1: Figure S2C).
Pretreatment with aqueous ammonia gave the highest sugar yield ranging between 77 and 87% upon enzymatic hydrolysis (Additional file 1: Figure S3A, B), the highest yield being obtained for the straw pretreated at 80 °C for 15 h (Additional file 1: Figure S3B). However, a much longer time (30 h) for enzymatic treatment was needed to obtain the maximal sugar release as compared to that for the samples treated with Ca(OH)2 and NaOH which took about 15 h for the maximal release. Table 2 shows that the glucan conversion efficiency obtained in this study was in the same range as that in previous studies. A few studies have reported slightly higher yields for the straw pretreated either at a higher temperature, longer time or lower solid loading (Ko et al. 2009; Kim and Han 2012; Kim et al. 2014; Tsegaye et al. 2019). Table 2 Comparison of the glucan-to-glucose conversion of cellulose in rice straw treated with different pretreatment methods followed by enzymatic hydrolysis PHA production from the rice straw hydrolysate The seven bacterial isolates were cultured in the medium containing 20 g/L glucose or 20 g/L of reducing sugars obtained by saccharification of the ammonia-pretreated rice straw followed by enzymatic hydrolysis at pH 5.0 and re-adjusting the pH to 7.0. The CDW, PHA content and concentration obtained after 48 h of cultivation are summarized in Tables 3 and 4. All the seven strains grew well in the glucose medium with CDW ranging from 2.2 to 3.32 g/L, and the PHA content ranged between 42 and 73 wt% of the cell dry weight, the highest value being obtained for the strain B. cereus VK98 (Table 3 and Fig. 3A). The final cell mass obtained was generally lower in the straw hydrolysate medium, the only exceptions being the strains B. cereus VK92 and VK98 that gave CDWs of 5 g/L and 5.42 g/L, respectively (Additional file 1: Figures S4, S5). The PHA content was also highest in the B. 
cereus strains: 59.3 wt% in VK92, followed by 46.4 wt% in VK98 (Additional file 1: Figures S4, S5) and 43.8 wt% in VK91 (Table 4). Figure 3A and B shows the TEM micrographs at 2 μm resolution of the PHA granules accumulated by the strain VK98 grown with glucose and straw hydrolysate, respectively. The polymer was primarily polyhydroxybutyrate (PHB) with traces of hydroxyvalerate. Glucose was fully consumed, but the bacteria did not utilize xylose. B. anthracis VK33, B. anthracis VK38 and B. paranthracis VK164 exhibited PHA contents of less than 10 wt%, while no PHA was detected in the cells of Bacillus sp. VK24. This suggests that B. cereus VK92 and VK98 are resistant to the inhibitory compounds formed during the pretreatment and autoclaving. These include weak acids such as acetic acid, glycolic acid, formic acid and levulinic acid, and phenolic compounds, e.g., coumaric acid, syringaldehyde, 4-hydroxybenzaldehyde, and vanillin (Jönsson et al. 2013; van der Pol et al. 2016). VK92 and VK98 also exhibited higher final pH values (Table 4), which may suggest that these isolates are able to utilize some of the acids and transform them to other metabolic products or perhaps even to PHA with different monomer composition (Vu et al. 2021; Ahn et al. 2016). This needs, however, to be further investigated. Table 3 Growth and PHA accumulation in seven Bacillus species in the medium containing 20 g/L glucose Table 4 Growth and PHA accumulation by seven isolated Bacillus species in the medium containing 20 g/L reducing sugars produced by saccharification of aqueous ammonia-pretreated rice straw TEM micrographs of PHA granules accumulated by strain VK98 on: A glucose-based culture medium and B rice straw hydrolysate-containing medium, respectively Comparison of the results obtained in this study with earlier reports on PHA production from rice straw hydrolysate clearly shows the production parameters to be highly comparable (Table 5).
The majority of the other studies have utilized acid-hydrolyzed rice straw. The highest accumulation of PHB (89% w/w) was reported in Bacillus firmus grown on pentose-rich hydrolysate with 0.75% xylose, but at a much lower cell mass, hence yielding a low PHB concentration (1.7 g/L) (Sindhu et al. 2013). On the other hand, Ralstonia eutropha (or C. necator), the most commonly used bacterium for PHA production, yielded the highest amount of PHB (9.88 g/L, 70.1% w/w) from the rice straw pretreated with NaOH followed by two-stage enzymatic hydrolysis (Saratale and Oh 2015) instead of the acid-hydrolyzed straw (Ahn et al. 2015, 2016). The high polymer concentration obtained in R. eutropha was primarily due to the very high concentration of the cell mass (15.5 g/L) but also high PHA accumulation in the cells. Table 5 Comparison of PHA production by different bacterial strains from rice straw hydrolysates A moderate PHA yield of 0.15 g/g with respect to the sugar consumed was obtained with the B. cereus isolates (Table 5), which was equivalent to 59.5 g PHA per kg rice straw in the case of B. cereus VK92, considering also the losses during the pretreatment. It is possible to further improve the PHA content and productivity in the isolates by medium optimization, e.g., with respect to carbon:nitrogen ratio, phosphate level, aeration, etc., and by using fed-batch mode of cultivation under controlled conditions (Singh et al. 2021). Moreover, strategies to further improve the resource- and cost-efficiency need to be developed when utilizing rice straw as feedstock, e.g., by transformation of other components to other value-added products. This study shows the potential of utilizing the bacteria involved in degradation of rice straw as hosts for PHA production from reducing sugars liberated by pretreatment and hydrolysis of the rice straw lignocelluloses.
Aqueous ammonia soaking pretreatment was found to be an efficient method for lignin removal from rice straw, providing suitable substrate for enzymatic digestibility of the polysaccharides. Further improvements in PHA content and productivity are possible by optimization of the culture medium and mode of cultivation. The pretreatment and the enzymatic hydrolysis, especially the latter, are the most cost-determining steps for PHA production from rice straw. In order to reduce these costs, it would be interesting to investigate PHA production in association with the Bacillus species that are involved in composting of rice straw through their polysaccharide-degrading activities (Hefnawy et al. 2013; Zhang et al. 2021), and even to consider developing a biorefinery by co-production of other chemicals and materials from the feedstock. The raw data supporting the conclusion of this article will be made available by the authors without undue reservation. Abraham A, Mathew AK, Sindhu R, Pandey A, Binod P (2016) Potential of rice straw for biorefining: an overview. Bioresour Technol 215:29–36. https://doi.org/10.1016/j.biortech.2016.04.011 Agbor VB, Cicek N, Sparling R, Berlin A, Levin DB (2011) Biomass pretreatment: fundamentals toward application. Biotechnol Adv 29:675–685. https://doi.org/10.1016/j.biotechadv.2011.05.005 Ahn J, Jho EH, Nam K (2015) Effect of C/N ratio on polyhydroxyalkanoates (PHA) accumulation by Cupriavidus necator and its implication on the use of rice straw hydrolysates. Environ Eng Res 20:246–253. https://doi.org/10.4491/eer.2015.055 Ahn J, Jho EH, Kim M, Nam K (2016) Increased 3HV concentration in the bacterial production of 3-hydroxybutyrate (3HB) and 3-hydroxyvalerate (3HV) copolymer with acid-digested rice straw waste. J Polym Environ 24:98–103. https://doi.org/10.1007/s10924-015-0749-0 Ashoor S, Sukumaran RK (2020) Mild alkaline pretreatment can achieve high hydrolytic and fermentation efficiencies for rice straw conversion to bioethanol. 
Prep Biochem Biotechnol 50:814–819. https://doi.org/10.1080/10826068.2020.1744007 Bakker RRC, Elbersen HW, Poppens RP, Lesschen JP (2013) Rice straw and wheat straw-potential feedstocks for the biobased economy. NL Agency Bali G, Meng X, Deneff JI, Sun Q, Ragauskas AJ (2015) The effect of alkaline pretreatment methods on cellulose structure and accessibility. Chemsuschem 8(2):275–279. https://doi.org/10.1002/cssc.201402752 Bedade DK, Edson CB, Gross RA (2021) Emergent approaches to efficient and sustainable polyhydroxyalkanoate production. Molecules 26:3463. https://doi.org/10.3390/molecules26113463 Bilo F, Pandini S, Sartore L, Depero LE, Gargiulo G, Bonassi A, Federici S, Bontempi E (2018) A sustainable bioplastic obtained from rice straw. J Clean Prod 200:357–368. https://doi.org/10.1016/j.jclepro.2018.07.252 Binod P, Sindhu R, Singhania RR, Vikram S, Devi L, Nagalakshmi S, Kurien N, Sukumaran RK, Pandey A (2010) Bioethanol production from rice straw: an overview. Bioresour Technol 101:4767–4774. https://doi.org/10.1016/j.biortech.2009.10.079 Carvalheiro F, Duarte LC, Girio GM (2008) Hemicellulose biorefineries: a review on biomass pretreatments. J Sci Ind Res 67:849–864 Chang VS, Burr B, Holtzapple MT (1997) Lime pretreatment of switchgrass. Appl Biochem Biotechnol 63–65:3–19. https://doi.org/10.1007/BF02920408 Chang VS, Nagwani M, Holtzapple MT (1998) Lime pretreatment of crop residues bagasse and wheat straw. Appl Biochem Biotechnol 74:135–159. https://doi.org/10.1007/BF02825962 Cheng Y-S, Zheng Y, Yu CW, Dooley TM, Jenkins BM, VanderGheynst JS (2010) Evaluation of high solids alkaline pretreatment of rice straw. Appl Biochem Biotechnol 162:1768–1784. https://doi.org/10.1007/s12010-010-8958-4 Diep NQ, Sakanishi K, Nakagoshi N, Fujimoto S, Minowa T (2015) Potential for rice straw ethanol production in the Mekong Delta Vietnam. Renew Energy 74:456–463. 
https://doi.org/10.1016/j.renene.2014.08.051 Goodman BA (2020) Utilization of waste straw and husks from rice production: a review. J Bioresour Bioprod 5:143–162. https://doi.org/10.1016/j.jobab.2020.07.001 Gowda V, Shivakumar S (2014) Agrowaste-based Polyhydroxyalkanoate (PHA) production using hydrolytic potential of Bacillus thuringiensis IAM 12077. Braz Arch Biol Technol 57:55–61. https://doi.org/10.1590/S1516-89132014000100009 Gu Y, Zhang Y, Zhou X (2015) Effect of Ca(OH)2 pretreatment on extruded rice straw anaerobic digestion. Bioresour Biotech 196:116–122. https://doi.org/10.1016/j.biortech.2015.07.004 Guo JM, Wang YT, Cheng JR, Zhu MJ (2020) Enhancing enzymatic hydrolysis and fermentation efficiency of rice straw by pretreatment of sodium perborate. Biomass Conv Biorefin. https://doi.org/10.1007/s13399-020-00668-3 Hefnawy M, Gharieb M, Nagdi OM (2013) Microbial diversity during composting cycles of rice straw. Int J Adv Biol Biomed Res 1(3):232–245 Hendriks ATWM, Zeeman G (2009) Pretreatments to enhance the digestibility of lignocellulosic biomass. Bioresour Technol 100:10–18. https://doi.org/10.1016/j.biortech.2008.05.027 Heng K-S, Hatti-Kaul R, Adam F, Fukui T, Sudesh K (2016) Conversion of rice husks to polyhydroxyalkanoates (PHA) via three-step process: optimized alkaline pretreatment, enzymatic hydrolysis, and biosynthesis by Burkholderia cepacia USM (JCM 15050). J Chem Technol Biotechnol 92:100–108. https://doi.org/10.1002/jctb.4993 Huijberts GNM, van der Wal H, Wilkinson C, Eggink G (1994) Gas-chromatographic analysis of poly(3-hydroxyalkanoates) in bacteria. Biotechnol Tech 8:187–192. https://doi.org/10.1007/BF00161588 Jampatesh S, Sawisit A, Wong N, Jantama SS, Jantama K (2019) Evaluation of inhibitory effect and feasible utilization of dilute acid pretreatment rice straw on succinate production by metabolically engineered Escherichia coli AS1600a. Bioresour Technol 273:93–102. 
https://doi.org/10.1016/j.biortech.2018.11.002 Jönsson L, Alriksson B, Nilvebrant N-O (2013) Bioconversion of lignocellulose: inhibitors and detoxification. Biotechnol Biofuels 6:16. https://doi.org/10.1186/1754-6834-6-16 Kim I, Han J-I (2012) Optimization of alkaline pretreatment conditions for enhancing glucose yield of rice straw by response surface methodology. Biomass Bioenergy 46:210–217. https://doi.org/10.1016/j.biombioe.2021.106131 Kim I, Lee B, Song D, Han J-I (2014) Effects of ammonium carbonate pretreatment on the enzymatic digestibility and structural features of rice straw. Bioresour Technol 166:353–357. https://doi.org/10.1016/j.biortech.2014.04.101 Ko JK, Bak JS, Jung MW, Lee HJ, Choi IG, Kim TH, Kim KH (2009) Ethanol production from rice straw using optimized aqueous-ammonia soaking pretreatment and simultaneous saccharification and fermentation processes. Bioresour Biotech 100:4374–4380. https://doi.org/10.1016/j.biortech.2009.04.026 Kobkam C, Tinoi J, Kittiwachana S (2018) Alkali pretreatment and enzyme hydrolysis to enhance the digestibility of rice straw cellulose for microbial oil production. KMUTNB Int J Appl Sci Technol 11(4):247–256. https://doi.org/10.14416/j.ijast.2018.07.003 Kumar S, Stecher G, Li M, Knyaz C, Tamura K (2018) MEGA X: molecular evolutionary genetics analysis across computing platforms. Mol Biol Evol 35:1547–1549. https://doi.org/10.1093/molbev/msy096 Kurokochi Y, Sato M (2020) Steam treatment to enhance rice straw binderless board focusing hemicellulose and cellulose decomposition products. J Wood Sci 66:7. https://doi.org/10.1186/s10086-020-1855-8 Le HA, Phuong DM, Linh LT (2020) Emission inventories of rice straw open burning in the Red River Delta of Vietnam: evaluation of the potential of satellite data. Environ Pollut 260:113972. 
https://doi.org/10.1016/j.envpol.2020.113972 Lee SY, Wong HH, Choi J, Lee SH, Lee SC, Han CS (2000) Production of medium-chain-length polyhydroxyalkanoates by high-cell-density cultivation of Pseudomonas putida under phosphorus limitation. Biotechnol Bioeng 64:466–470. https://doi.org/10.1002/(SICI)1097-0290(20000520)68:4%3c466::AID-BIT12%3e3.0.CO;2-T Lee C, Zheng Y, VanderGheynst JS (2015) Effect of pretreatment conditions and post-pretreatment washing on ethanol production from dilute acid pretreated rice straw. Biosyst Eng 137:36–42. https://doi.org/10.1016/j.biosystemseng.2015.07.001 Li X, Sha J, Xia Y, Sheng K, Liu Y, He Y (2020) Quantitative visualization of subcellular lignocellulose revealing of mechanism of alkali pretreatment to promote methane production of rice straw. Biotechnol Biofuel 13:8. https://doi.org/10.1186/s13068-020-1648-8 Li J, Yang Z, Zhang K, Liu M, Liu D, Yan X, Si M, Shi Y (2021) Valorizing waste liquor from dilute acid pretreatment of lignocellulosic biomass by Bacillus megaterium B-10. Ind Crops Prod 161:113160. https://doi.org/10.1016/j.indcrop.2020.113160 Mohammed S, Behera HT, Dekebo A, Ray L (2020) Optimization of the culture conditions for production of polyhydroxyalkanoate and its characterization from a new Bacillus cereus sp. BNPI-92 strain, isolated from plastic waste dumping yard. Int J Biol Macromol 156:1064–1080. https://doi.org/10.1016/j.ijbiomac.2019.11.138 Mohandas SP, Balan L, Jayanath G, Anoop BS, Philip R, Cubelio SS, Bright Singh IS (2018) Biosynthesis and characterization of polyhydroxyalkanoate from marine Bacillus cereus MCCB 281 utilizing glycerol as carbon source. Int J Biol Macromol 119:380–392. https://doi.org/10.1016/j.ijbiomac.2018.07.044 Mohapatra S, Maity S, Dash HR, Das S, Pattnaik S, Rath CC, Samantaray D (2017) Bacillus and biopolymer: Prospects and challenges. Biochem Biophys Rep 12:206–213. 
https://doi.org/10.1016/j.bbrep.2017.10.001 Naser AZ, Deiab I, Darras BM (2021) Poly(lactic acid) (PLA) and polyhydroxyalkanoates (PHAs), green alternatives to petroleum-based plastics: a review. RSC Adv 11:17151. https://doi.org/10.1039/D1RA02390J Nguyen HV, Nguyen CD, Tran TV, Hau HD, Nguyen NT, Gunmert M (2016) Energy efficiency, greenhouse gas emissions, and cost of rice straw collection in the Mekong River Delta of Vietnam. Field Crops Res 198:16–22. https://doi.org/10.1016/j.fcr.2016.08.024 Obruca S, Benesova P, Marsalek L, Marova I (2015) Use of lignocellulosic materials for PHA production. Chem Biochem Eng Q 29:135–144. https://doi.org/10.15255/CABEQ.2014.2253 Odeniyi OA, Adeola OJ (2017) Production and characterization of polyhydroxyalkanoic acid from Bacillus thuringiensis using different carbon substrates. Int J Biol Macromol 104:407–413. https://doi.org/10.1016/j.ijbiomac.2017.06.041 Overturf E, Ravasio N, Zaccheria F, Tonin C, Patrucco A, Bertini F, Canetti M, Avramidou K, Speranza G, Bavaro T, Ubiali D (2020) Towards a more sustainable circular bioeconomy. Innovative approaches to rice residue valorization: the RiceRes case study. Bioresour Technol Rep 11:100427. https://doi.org/10.1016/j.biteb.2020.100427 Park YC, Kim JS (2012) Comparison of various alkaline pretreatment methods of lignocellulosic biomass. Energy 47:31–35. https://doi.org/10.1016/j.energy.2012.08.010 Ponnusamy S, Viswanathan S, Periyasamy A, Rajaiah S (2019) Production and characterization of PHB-HV copolymer by Bacillus thuringiensis isolated from Eisenia foetida. Biotechnol Appl Biochem 66:340–352. https://doi.org/10.1002/bab.1730 Rodrigues CIS, Jackson JJ, Montross MD (2016) A molar basis comparison of calcium hydroxide, sodium hydroxide, and potassium hydroxide on the pretreatment of switchgrass and miscanthus under high solids conditions. Ind Crop Prod 92:165–173. 
https://doi.org/10.1016/j.indcrop.2016.08.010 Saratale GD, Oh MK (2015) Characterization of poly-3-hydroxybutyrate (PHB) produced from Ralstonia eutropha using an alkaline-pretreatment biomass feedstock. Int J Biol Macromol 8:627–635. https://doi.org/10.1016/j.ijbiomac.2015.07.034 Sarkar N, Ghosh SK, Bannerjee S, Aikat K (2012) Bioethanol production from agricultural wastes: an overview. Renew Energy 37:19–27. https://doi.org/10.1016/j.renene.2011.06.045 Semwal S, Raj T, Kumar R, Christopher J, Gupta RP, Puri SK, Kumar R, Ramakumar SSV (2019) Process optimization and mass balance studies of pilot scale steam explosion pretreatment of rice straw for higher sugar release. Biomass Bionergy 130:105390. https://doi.org/10.1016/j.biombioe.2019.105390 Sindhu R, Silviya N, Binod P, Pandey A (2013) Pentose-rich hydrolysate from acid pretreated rice straw as a carbon source for the production of poly-3-hydroxybutyrate. Biochem Eng J 78:67–72. https://doi.org/10.1016/j.bej.2012.12.015 Sindhu R, Pandey A, Binod P (2015) Alkaline treatment. In: Pandey A, Negi S, Binod P, Larroche C (eds) Pretreatment of biomass. Process and technologies. Elsevier, Amsterdam, pp 51–60. https://doi.org/10.1016/b978-0-12-800080-9.00004-9 Singh M, Patel SK, Kalia VC (2009) Bacillus subtilis as potential producer for polyhydroxyalkanoates. Microb Cell Fact 8:38. https://doi.org/10.1186/1475-2859-8-38 Singh S, Sithole B, Lekha P, Permaul K, Govinden R (2021) Optimization of cultivation medium and cyclic fed-batch fermentation strategy for enhanced polyhydroxyalkanoate production by Bacillus thuringiensis using a glucose-rich hydrolyzate. Bioresour Bioprocess 8:11. https://doi.org/10.1186/s40643-021-00361-x Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D (2008) Determination of structural carbohydrates and lignin in biomass. 
National Renewable Energy Laboratory, Colorado Sophonputtanaphoca S, Sirigatmaneerat K, Kruakrut K (2018) Effect of low temperatures and residence times of pretreatment on glucan reactivity of sodium hydroxide-pretreated rice straw. Walailak J Sci Tech 15:313–323. https://doi.org/10.48048/wjst.2018.3697 Spiekermann P, Rehm BH, Kalscheuer R, Baumeister D, Steinbüchel A (1999) A sensitive, viable-colony staining method using Nile red for direct screening of bacteria that accumulate polyhydroxyalkanoic acids and other lipid storage compounds. Arch Microbiol 171:73–80. https://doi.org/10.1007/s002030050681 Suzuki M, Tachibana Y, Kasuya K (2021) Biodegradability of poly(3-hydroxyalkanoate) and poly(ɛ-caprolactone) via biological carbon cycles in marine environments. Polym J 53:47–66. https://doi.org/10.1038/s41428-020-00396-5 Takano M, Hoshino K (2018) Bioethanol production from rice straw by simultaneous saccharification and fermentation with statistical optimized cellulase cocktail and fermenting fungus. Bioresour Bioprocess 5:16. https://doi.org/10.1186/s40643-018-0203-y Tamura K, Nei M (1993) Estimation of the number of nucleotide substitutions in the control region of mitochondrial DNA in humans and chimpanzees. Mol Biol Evol 10:512–526. https://doi.org/10.1093/oxfordjournals.molbev.a040023 Tsegaye B, Balomajumder C, Roy P (2019) Alkali delignification and Bacillus sp. BMP01 hydrolysis of rice straw for enhancing biofuel yields. Bull Nat Res Cent 43:136. https://doi.org/10.1186/s42269-019-0175-x Tsegaye B, Balomajumder C, Roy P (2020) Organosolv pretreatments of rice straw followed by microbial hydrolysis for efficient biofuel production. Renew Energy 148:923–934. https://doi.org/10.1016/j.renene.2019.10.176 van der Pol EC, Vaessen E, Weusthuis RA, Eggink G (2016) Identifying inhibitory effects of lignocellulosic by-products on growth of lactic acid producing micro-organisms using a rapid small-scale screening method. Bioresour Technol 209:297–304. 
https://doi.org/10.1016/j.biortech.2016.03.037 Vu DH, Wainaina S, Taherzadeh MJ, Åkesson D, Ferreira JA (2021) Production of polyhydroxyalkanoates (PHAs) by Bacillus megaterium using food waste acidogenic fermentation-derived volatile fatty acids. Bioengineered 12:2480–2498. https://doi.org/10.1080/21655979.2021.1935524 Weber S, Stubner S, Conrad R (2001) Bacterial populations colonizing and degrading rice straw in anoxic paddy soil. Appl Environ Microbiol 67:1318–1327. https://doi.org/10.1128/AEM.67.3.1318-1327.2001 Yadav B, Talan A, Tyagi RD, Drogui P (2021) Concomitant production of value-added products with polyhydroxyalkanoate (PHA) synthesis: a review. Bioresour Technol 337:125419. https://doi.org/10.1016/j.biortech.2021.125419 Zang Q, Cai W (2008) Enzymatic hydrolysis of alkali-pretreated rice straw by Trichoderma reesei ZM4-F3. Biomass Bioenergy 32:1130–1135. https://doi.org/10.1016/j.biombioe.2008.02.006 Zhang W, Liu J, Wang Y, Sun J, Huang P, Chang K (2020) Effect of ultrasound on ionic liquid-hydrochloric acid pretreatment with rice straw. Biomass Conv Bioref. https://doi.org/10.1007/s13399-019-00595-y Zhang S, Xia T, Wang J, Zhao Y, Xie X, Wei Z, Zhang X, Song C, Song X (2021) Role of Bacillus inoculation in rice straw composting and bacterial community stability after inoculation: unite resistance or individual collapse. Bioresour Technol 337:125464. https://doi.org/10.1016/j.biortech.2021.125464

The authors are grateful to the Swedish Research Council for funding the study. The authors thank the Swedish Research Council (Swedish Research Links grant, 348-2012-6169) for supporting this work.

Department of Biotechnology and Microbiology, Faculty of Biology, Hanoi National University of Education, 136 Xuan Thuy, Cau Giay, Hanoi, Vietnam
Doan Van Thuoc & Nguyen Thi Chung
Division of Biotechnology, Department of Chemistry, Center for Chemistry and Chemical Engineering, Lund University, P.O.
Box 124, 221 00, Lund, Sweden
Rajni Hatti-Kaul

DVT: conceptualization, conducted all of the experimental work, analyzed and interpreted all data, writing—original draft, reviewing and editing. NTC: helped to conduct some parts of the experimental work. RHK: funding acquisition, writing—reviewing and editing, supervision. All authors read and approved the final manuscript.

Correspondence to Rajni Hatti-Kaul.

Additional file 1: Fig S1. Yield of reducing sugars obtained after enzymatic hydrolysis of NaOH pretreated rice straw at (A) 30°C, (B) 50°C, and (C) 80°C, respectively, for different time periods. The enzymatic treatment was performed at 50°C. Fig S2. Yield of reducing sugars obtained during enzymatic hydrolysis of rice straw pretreated with Ca(OH)2 for different time periods at (A) 30°C, (B) 80°C, and (C) 121°C, respectively. The enzymatic treatment was performed at 50°C. Fig S3. Yield of reducing sugars obtained after enzymatic hydrolysis of rice straw pretreated with aqueous ammonia for different incubation times at (A) 50°C, and (B) 80°C, respectively. The enzymatic treatment was performed at 50°C. Fig S4. Cell growth and PHA accumulation by the strain B. cereus VK92 in culture media using (A) glucose and (B) rice straw hydrolysate, as carbon source at 35°C. Fig S5. Cell growth and PHA accumulation by strain B. cereus VK98 in culture media using (A) glucose and (B) rice straw hydrolysate, as carbon source at 35°C.

Van Thuoc, D., Chung, N.T. & Hatti-Kaul, R. Polyhydroxyalkanoate production from rice straw hydrolysate obtained by alkaline pretreatment and enzymatic hydrolysis using Bacillus strains isolated from decomposing straw. Bioresour. Bioprocess. 8, 98 (2021). https://doi.org/10.1186/s40643-021-00454-7

Keywords: Bacillus species, Polyhydroxyalkanoate, Mild alkaline pretreatment, Enzymatic hydrolysis
Geometric analysis of a mixed elliptic-parabolic conformally invariant boundary value problem Jürgen Jost, Lei Liu, and Miaomiao Zhu Submission date: 19. Jun. 2018 (revised version: May 2019) MSC-Numbers: 53C43, 58E20 Keywords and phrases: Supersymmetric nonlinear sigma model, Dirac-harmonic maps, $\alpha$-Dirac-harmonic maps, $\alpha$-Dirac-harmonic map flow, blow-up, energy identity, neck analysis In this paper, we show the existence of Dirac-harmonic maps from a compact spin Riemann surface with smooth boundary to a general compact Riemannian manifold via a heat flow method when a Dirichlet boundary condition is imposed on the map and a chiral boundary condition on the spinor. Technically, we solve a new elliptic-parabolic system arising in geometric analysis that is motivated by the nonlinear supersymmetric sigma model of quantum field theory. The corresponding action functional involves two fields, a map from a Riemann surface into a Riemannian manifold and a spinor coupled to the map. The first field has to satisfy a second order elliptic system, which we turn into a parabolic system so as to apply heat flow techniques. The spinor, however, satisfies a first order Dirac type equation. We carry that equation as a nonlinear constraint along the flow. In order to solve this system, we adapt the idea of Sacks-Uhlenbeck to raise the integrand of the harmonic map action to a power α > 1; the solutions of the resulting Euler-Lagrange equations are called α-Dirac harmonic maps. Because of the (unchanged) spinor action, the analysis is more difficult than that of Sacks-Uhlenbeck. Nevertheless, we can carry out the limit α ↘ 1 to solve our original problem. Then we develop a general spectrum of methods (Pohozaev identity, three-circle method, blow-up analysis, energy identities, energy decay estimates etc.) for the compactness problem of the space of α-Dirac harmonic maps and for a further analysis of the limiting problem. 
We study the refined blow-up behaviour and asymptotic analysis for a sequence of α-Dirac harmonic maps from a compact Riemann surface with smooth boundary into a general compact Riemannian manifold with uniformly bounded energy. We prove generalized energy identities for both the map part and the spinor part. We also show that the map parts of the α-Dirac-harmonic necks converge to some geodesics on the target manifold. Moreover, we give a length formula for the limiting geodesic near a blow-up point. In particular, if the target manifold has a positive lower bound on the Ricci curvature or has a finite fundamental group and the sequence of α-Dirac harmonic maps has bounded Morse index, then the limit of the map part of the necks consists of geodesics of finite length which ensures the energy identities hold. In technical terms, these results are achieved by establishing a new decay estimate of the tangential energies of both the map part and the spinor part as well as a new decay estimate for the energy of the spinor as α ↘ 1.
Tagged: matrix

Column Rank = Row Rank. (The Rank of a Matrix is the Same as the Rank of its Transpose)
Let $A$ be an $m\times n$ matrix. Prove that the rank of $A$ is the same as the rank of the transpose matrix $A^{\trans}$.

Let $A$ be an $m \times n$ matrix and $B$ be an $n \times l$ matrix. Then prove the following.
(a) $\rk(AB) \leq \rk(A)$.
(b) If the matrix $B$ is nonsingular, then $\rk(AB)=\rk(A)$.

Square Root of an Upper Triangular Matrix. How Many Square Roots Exist?
Find a square root of the matrix
\[\begin{bmatrix} 1 & 3 & -3 \\ 0 & 4 & 5 \\ 0 & 0 & 9 \end{bmatrix}.\]
How many square roots does this matrix have? (University of California, Berkeley Qualifying Exam)

Find a Basis For the Null Space of a Given $2\times 3$ Matrix
Let
\[A=\begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \end{bmatrix}\]
be a matrix. Find a basis of the null space of the matrix $A$. (Remark: a null space is also called a kernel.)

Find Values of $a$ so that the Matrix is Nonsingular
Let $A$ be the following $3 \times 3$ matrix. 1 & 1 & a \end{bmatrix}.\] Determine the values of $a$ so that the matrix $A$ is nonsingular.

The Null Space (the Kernel) of a Matrix is a Subspace of $\R^n$
Let $A$ be an $m \times n$ real matrix. Then the null space $\calN(A)$ of $A$ is defined by
\[ \calN(A)=\{ \mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{0}_m\}.\]
That is, the null space is the set of solutions to the homogeneous system $A\mathbf{x}=\mathbf{0}_m$. Prove that the null space $\calN(A)$ is a subspace of the vector space $\R^n$. (Note that the null space is also called the kernel of $A$.)
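For the $2\times 3$ null-space problem above, both rows impose $x_1+x_2=0$, so by hand the kernel is spanned by $(1,-1,0)^T$ and $(0,0,1)^T$. This can be checked numerically; a null-space basis via SVD (right-singular vectors belonging to near-zero singular values) is one standard way to do it:

```python
import numpy as np

# A x = 0 for A = [[1, 1, 0], [1, 1, 0]]: both rows say x1 + x2 = 0,
# so the kernel is 2-dimensional, spanned by (1, -1, 0) and (0, 0, 1).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])

# Null space via SVD: right-singular vectors with (near-)zero singular values.
_, s, vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
kernel = vt[rank:]                    # rows of `kernel` span the null space

print("rank =", rank)                 # rank = 1
print("dim N(A) =", kernel.shape[0])  # dim N(A) = 2
assert np.allclose(A @ kernel.T, 0.0)  # every basis vector is annihilated by A
```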
Express a Vector as a Linear Combination of Other Vectors
Express the vector $\mathbf{b}=\begin{bmatrix} 2 \\ 13 \\ \end{bmatrix}$ as a linear combination of the vectors \[\mathbf{v}_1=\begin{bmatrix} \end{bmatrix}, \mathbf{v}_2= \begin{bmatrix} (The Ohio State University, Linear Algebra Exam)

Compute the Product $A^{2017}\mathbf{u}$ of a Matrix Power and a Vector
Let
\[A=\begin{bmatrix} -1 & 2 \\ 0 & -1 \end{bmatrix} \text{ and } \mathbf{u}=\begin{bmatrix} 1\\ \end{bmatrix}.\]
Compute $A^{2017}\mathbf{u}$.

Symmetric Matrices and the Product of Two Matrices
Let $A$ and $B$ be $n \times n$ real symmetric matrices. Prove the following.
(a) The product $AB$ is symmetric if and only if $AB=BA$.
(b) If the product $AB$ is a diagonal matrix, then $AB=BA$.

10 True or False Problems about Basic Matrix Operations
Test your understanding of basic properties of matrix operations. There are 10 True or False Quiz Problems. These 10 problems are very common and essential. So make sure to understand these and don't lose a point if any of these is your exam problems. (These are actual exam problems at the Ohio State University.) You can take the quiz as many times as you like. The solutions will be given after completing all the 10 problems.

Find the Rank of a Matrix with a Parameter
Find the rank of the following real matrix.
\[ \begin{bmatrix} a & 1 & 2 \\ -1 & 1 & 1-a \end{bmatrix},\]
where $a$ is a real number. (Kyoto University, Linear Algebra Exam)

Possibilities For the Number of Solutions for a Linear System
Determine whether each of the following systems of equations (or matrix equations) has no solution, a unique solution, or infinitely many solutions, and justify your answer.
(a) \[\left\{ \begin{array}{c} ax+by=c \\ dx+ey=f \end{array} \right.\] where $a,b,c,d,e,f$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
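The matrix-power problem above can be settled without 2017 multiplications: writing $A=-I+N$ with the nilpotent $N=\begin{bmatrix}0&2\\0&0\end{bmatrix}$ gives $A^n=(-1)^n I+(-1)^{n-1}nN$, so $A^{2017}=\begin{bmatrix}-1&4034\\0&-1\end{bmatrix}$. A quick check with exact integer arithmetic (the second entry of $\mathbf{u}$ is truncated in this extraction; $\mathbf{u}=(1,0)^T$ is assumed here for illustration):

```python
# Exact 2x2 integer matrix power by repeated squaring (no floating point).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(X, n):
    R = [[1, 0], [0, 1]]          # 2x2 identity
    while n:
        if n & 1:
            R = matmul(R, X)
        X = matmul(X, X)
        n >>= 1
    return R

A = [[-1, 2], [0, -1]]
P = matpow(A, 2017)
print(P)                          # [[-1, 4034], [0, -1]]

u = [1, 0]                        # assumed: the source truncates u's second entry
print([P[0][0] * u[0] + P[0][1] * u[1],
       P[1][0] * u[0] + P[1][1] * u[1]])   # [-1, 0]
```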
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows:
\[\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.\]
(The Ohio State University, Linear Algebra Exam)

Determine When the Given Matrix Is Invertible
For which choice(s) of the constant $k$ is the following matrix invertible? 1 &2 &k \\ 1 & 4 & k^2 (Johns Hopkins University, Linear Algebra Exam)

The Vector Space Consisting of All Traceless Diagonal Matrices
Let $V$ be the set of all $n \times n$ diagonal matrices whose traces are zero. That is,
\begin{equation*} V:=\left\{ A=\begin{bmatrix} a_{11} & 0 & \dots & 0 \\ 0 & a_{22} & \dots & 0 \\ 0 & 0 & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{bmatrix} \quad \middle| \quad a_{11}, \dots, a_{nn} \in \C,\ \tr(A)=0 \right\} \end{equation*}
Let $E_{ij}$ denote the $n \times n$ matrix whose $(i,j)$-entry is $1$ and zero elsewhere.
(a) Show that $V$ is a subspace of the vector space $M_n$ over $\C$ of all $n\times n$ matrices. (You may assume without a proof that $M_n$ is a vector space.)
(b) Show that the matrices \[E_{11}-E_{22}, \, E_{22}-E_{33}, \, \dots,\, E_{n-1\, n-1}-E_{nn}\] form a basis for the vector space $V$.
(c) Find the dimension of $V$.

Determine Whether the Following Matrix Is Invertible. If So, Find Its Inverse Matrix.
Let $A$ be the matrix 1 & -1 & 0 \\ 0 &1 &-1 \\ \end{bmatrix}.\] Is the matrix $A$ invertible? If not, then explain why it isn't invertible. If so, then find the inverse. (The Ohio State University Linear Algebra Exam)

Explicit Field Isomorphism of Finite Fields
Row Equivalent Matrix, Bases for the Null Space, Range, and Row Space of a Matrix
Perturbation of a Singular Matrix is Nonsingular
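Both "for which $k$ is this invertible?" problems above arrive with rows missing in this extraction, so their specific answers cannot be reconstructed here; the technique, though, is uniform: a square matrix is invertible exactly when its determinant is nonzero, so one solves $\det A(k)=0$ for the exceptional parameter values. A generic illustration on a hypothetical $2\times 2$ matrix (not the exam matrix), assuming SymPy is available:

```python
import sympy as sp

k = sp.symbols('k')
# Hypothetical parameterized matrix -- NOT the (truncated) exam matrix above.
A = sp.Matrix([[1, k],
               [k, 4]])

d = A.det()                       # the determinant 4 - k**2 as a polynomial in k
singular_k = sp.solve(sp.Eq(d, 0), k)
print(singular_k)                 # the two values of k where A(k) is singular;
                                  # A(k) is invertible for every other k
```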
Express Letter

Using Himawari-8, estimation of SO2 cloud altitude at Aso volcano eruption, on October 8, 2016

Kensuke Ishii1, Yuta Hayashi2 & Toshiki Shimbori1

It is vital to detect volcanic plumes as soon as possible for volcanic hazard mitigation such as aviation safety and the life of residents. Himawari-8, the Japan Meteorological Agency's (JMA's) geostationary meteorological satellite, has high spatial resolution and sixteen observation bands including the 8.6 μm band to detect sulfur dioxide (SO2). Therefore, Ash RGB composite images (RED: brightness temperature (BT) difference between 12.4 and 10.4 μm, GREEN: BT difference between 10.4 and 8.6 μm, BLUE: 10.4 μm) discriminate SO2 clouds and volcanic ash clouds from meteorological clouds. Since Himawari-8 also has high temporal resolution, real-time monitoring of ash and SO2 clouds is of great use. A phreatomagmatic eruption of Aso volcano in Kyushu, Japan, occurred at 01:46 JST on October 8, 2016. For this eruption, the Ash RGB could detect the SO2 cloud from Aso volcano immediately after the eruption and track it even 12 h later. In this case, the Ash RGB images every 2.5 min could clearly detect the SO2 cloud that conventional images such as infrared and split window could not detect sufficiently. Furthermore, we could estimate the height of the SO2 cloud by comparing the Ash RGB images with simulations of the JMA Global Atmospheric Transport Model run with a variety of height parameters. As a result of the comparison, the bottom and top heights of the SO2 cloud emitted by the eruption were estimated as 7 km and 13–14 km, respectively. Assuming the plume height was 13–14 km and the eruption duration was 160–220 s (as estimated by seismic observation), the total emission mass of volcanic ash from the eruption was estimated as 6.1–11.8 × 10^8 kg, which is relatively consistent with 6.0–6.5 × 10^8 kg from field survey.
In an eruption event, in order to estimate the amount of damage promptly, it is important to know the eruption source parameters such as mass eruption rate, plume height and eruption duration. For example, the Japan Meteorological Agency (JMA) operates the Volcanic Ash Fall Forecast (VAFF) system to issue forecasts of areas where ash or lapilli fall is expected around a volcanic eruption (Hasegawa et al. 2015). In this operation, the JMA Regional Atmospheric Transport Model (JMA-RATM) uses initial conditions including total eruption mass which is estimated using the relationship between the total mass and the top height of the plume and eruption duration. In the Volcanic Ash Advisory Center (VAAC) operation, after an eruption, Volcanic Ash Advisories (VAA) are issued at 6-h intervals (normally at 00, 06, 12 and 18 UTC) for as long as an ash cloud is identified in satellite imagery (Tokyo VAAC 2016). For the forecast of volcanic ash by the JMA Global Atmospheric Transport Model (JMA-GATM) in the Tokyo VAAC of the JMA, the top height of ash clouds is an essential parameter. We can estimate total mass using the relationship between a mass eruption rate and a plume height (e.g., Mastin et al. 2009). However, in some cases, it is not easy to estimate the top height of plumes because meteorological clouds obscure remote cameras and satellite observation. Since weather radar observation is much more sensitive to large particles than small particles such as fine ash or SO2 gas which are found around the top of the plumes, there are cases in which radar is not useful for estimation of plume height. However, the total eruption mass is sensitive to the top height, and its accurate estimation is indispensable. In addition, for disaster prevention, promptness is vital. Especially, for aviation safety, timely estimation is of great importance. In this context, the Himawari-8 satellite allows very timely volcanic plume monitoring. 
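The plume-height/mass-rate relationship cited above (Mastin et al. 2009) is an empirical power law, $H = 2.00\,V^{0.241}$, with $H$ the plume height in km above the vent and $V$ the dense-rock-equivalent (DRE) volumetric flow rate in m³/s. A sketch of the back-calculation it enables follows; the DRE density of 2500 kg/m³ is the conventional assumption in Mastin et al., heights are treated as above-vent for simplicity, and this is not necessarily the exact relation used operationally by JMA:

```python
def mass_eruption_rate(height_km, dre_density=2500.0):
    """DRE mass eruption rate (kg/s) from plume height via the Mastin et al.
    (2009) fit: H = 2.00 * V**0.241, hence V = (H / 2.00)**(1 / 0.241)."""
    v = (height_km / 2.00) ** (1.0 / 0.241)   # volumetric flow rate, m^3/s DRE
    return v * dre_density

# Illustrative numbers only: a 13-14 km plume sustained for 160-220 s.
for h_km, dur_s in [(13.0, 160.0), (14.0, 220.0)]:
    total = mass_eruption_rate(h_km) * dur_s
    print(f"H = {h_km} km, {dur_s:.0f} s  ->  total ~ {total:.1e} kg")
# roughly 1e9 kg in both cases, i.e. the same order of magnitude as the
# field-survey estimate quoted in the abstract
```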
Himawari-8, which is a new geostationary meteorological satellite, was put into operation on July 7, 2015 (Bessho et al. 2016). It has significantly advanced features, having sixteen observation bands compared to five in its predecessor. The spatial resolution is 2 km for infrared bands. Furthermore, it has high observation frequencies. Full disk images are taken every 10 min, Japan area images are taken every 2.5 min, and landmark area images are taken every 0.5 min. For the detection and analysis of volcanic eruptions, one of the most strikingly enhanced points is that the 16 observation bands include new bands #10 (7.3 μm) and #11(8.6 μm) which have sensitivity for SO2 gas (Watson et al. 2004). Ash RGB (one of the visualization schemes of satellite data; see the "Ash RGB" section for details) using band #11, #13 (10.4 μm) and #15 (12.4 μm) are tricolored satellite images (RGB image) that can discriminate SO2 clouds and ash clouds from meteorological clouds. The predecessor of Himawari-8 was the Multi-functional Transport Satellite-2 (MTSAT-2, also called Himawari-7), which was also used for the detection and analysis of ash clouds. MTSAT-2 had no bands sensitive to SO2 gas, so that SO2 clouds could not be detected, and infrared (IR, 10.8 μm) and a split window (10.8–12.0 μm) were mainly used to discriminate volcanic ash clouds from meteorological clouds (Prata 1989a, b). In the Japan area, there are other satellites and sophisticated retrieval schemes which have been developed and are available for detection or analysis of volcanic ash or SO2 clouds. Low Earth orbit satellites provide high-spectral resolution data (Cooke et al. 2014). For example, Yang et al. (2007) developed retrieval of SO2 from the Ozone Monitoring Instrument (OMI). However, since they are Earth orbit satellites, there are very few chances of observations of ash clouds immediately after an eruption. 
Geosynchronous satellites provide high temporal resolution allowing detection in near real time (Francis et al. 2012). For geosynchronous satellites, a retrieval algorithm of physical properties of ash cloud (e.g., ash cloud height, optical depth, effective particle radius and mass loading) was developed (Pavolonis and Sieglaff 2010). On the other hand, the Ash RGB is composite imagery and provides no quantitative information such as the amount of ash or SO2 in the column and no vertical profile such as the top height of ash or SO2 clouds. However, because flow of SO2 clouds depends on the wind field, SO2 clouds at each altitude flow with different directions and speeds. In this sense, a time series of the Ash RGB should include information of the vertical profile in the atmosphere with vertical wind shear. Therefore, it is possible that vertical information of SO2 clouds can be obtained from a time series of the Ash RGB and wind field. In this study, we applied the Ash RGB for the case of SO2 cloud of the 2016 Aso volcano eruption. The Ash RGB could clearly track the SO2 cloud from Aso volcano. Furthermore, we estimated the altitude of the SO2 cloud by comparing the Ash RGB images and the SO2 numerical simulation results using the JMA-GATM which has processes such as wind advection, gravitational settling, turbulent diffusion and wet/dry deposition. This model is used for volcanic ash forecasts for the safety of aviation services in the Tokyo VAAC operation. Volcanic ash falls by gravitational settling, while the gravitational effect is negligible for SO2 clouds. For the SO2 simulation, we executed the JMA-GATM without gravity settling, in which tracers act as SO2 cloud. For the estimation of the altitude of SO2 cloud, we tried a variety of vertical profiles of SO2 tracers and sought the best initial vertical profile of SO2 cloud which is consistent with the two-dimensional spread estimated by the Ash RGB images from Himawari-8. 
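The profile search described above — run the transport model with candidate vertical source layers and keep the layer whose simulated spread best matches the satellite-derived SO2 extent — can be sketched with a toy stand-in for the transport model. The real JMA-GATM physics is replaced here by single-level wind advection, and the wind profile, grid, and overlap score are all invented for illustration:

```python
import numpy as np

def wind_speed(z_km):
    """Toy westerly wind speed (m/s) vs. height (km): invented shear."""
    return 5.0 + 3.0 * z_km

def simulate_spread(z_bottom, z_top, hours=12.0, levels=20):
    """Horizontal displacement range (km) of tracers seeded in one layer."""
    z = np.linspace(z_bottom, z_top, levels)
    x = wind_speed(z) * 3.6 * hours        # m/s -> km/h, advect for `hours`
    return x.min(), x.max()

def overlap(a, b):
    """1-D interval overlap fraction (stand-in for a 2-D mask score)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0.0, hi - lo) / (max(a[1], b[1]) - min(a[0], b[0]))

# "Observed" spread generated from a true 7-13 km source layer.
observed = simulate_spread(7.0, 13.0)

# Grid search over candidate (bottom, top) layers; keep the best-matching one.
best = max(((b, t) for b in range(1, 14) for t in range(b + 1, 15)),
           key=lambda bt: overlap(simulate_spread(*bt), observed))
print("best layer:", best)                 # best layer: (7, 13)
```

With sheared winds, each candidate layer maps to a distinct horizontal footprint, which is what makes the inversion from 2-D spread back to altitude possible at all.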
In addition, pilots' reports and OMI measurements suggested a limitation of the Ash RGB for thin SO2 regions. Aso volcano In Japan, there are 111 active volcanoes, defined as those which have erupted within the last 10,000 years or which show vigorous fumarolic activity. Of these, 50 volcanoes were selected by the Coordinating Committee for Prediction of Volcanic Eruption and are constantly monitored by seismometers, tiltmeters, infrasound microphones, remote cameras, etc. (Yamasato et al. 2013). Aso volcano in Kyushu, Japan (Fig. 1) is one of these. Map of the Aso volcano. The large triangle indicates Aso volcano. The smaller triangles indicate other volcanoes Aso volcano, comprising the Aso caldera and post-caldera central cones, is one of the most active volcanoes in Japan. Nakadake, one of the central cones, is the only active cone and consists of seven craterlets. Of these, only one crater (called the "First Crater") has been active in the past 80 years (JMA and VSJ 2013). Most recently, several ash emissions occurred in 2003–2005, and volcanic gases and ash were emitted in 2014–2016 (e.g., Miyabuchi et al. 2008). At Nakadake, a phreatomagmatic eruption occurred at 01:46 JST on October 8, 2016. The last explosive eruption before this one to include large infrasonic waves was on January 26, 1980. A seismometer showed that the duration of the eruption was approximately 160–220 s (Fig. 2) (Shimbori 2017). The volcanic earthquake (vertical component) as measured by a seismogram from the Aso volcano eruption, 8 Oct. 2016, 01:46:30–01:50:30 JST. The observation point is located approximately 1.2 km west of the vent. The black arrow indicates the eruption time (around 01:46:36 JST) For the 2016 eruption, because meteorological cloud obscured the remote cameras, the top height of the volcanic plume could not be estimated immediately after the eruption. The Tokyo VAAC issued a VAA with the top height of the plume given as 39,000 feet, estimated using the infrared band (#13) of Himawari-8 (Tokyo VAAC 2016).
In this eruption, ash and lapilli fell over an area extending several kilometers, mainly to the northeast, carried by the ambient wind. For example, lapilli that fell approximately 4.5 km from the vent broke windowpanes, and lapilli that fell approximately 6.5 km away broke more than 1500 solar panels (Sasaki et al. 2017). There were also other effects, such as malfunctions of train signals, rerouting and delays of airplanes, ash fall on crops and damage to agricultural greenhouses. In addition, electricity was cut to 29,000 households around the volcano, and the power outage also caused some disruptions to the water supply. Ash RGB When a volcanic eruption occurs, we need to know and understand the details of the eruption as soon as possible, both for estimating the magnitude of the eruption and the damage around the volcano and for predictions such as ash fall forecasts for residents and airborne ash forecasts for aviation safety. For this purpose, we can make the most of Himawari-8: its enhanced spatial resolution and observation frequencies are very useful for the analysis and monitoring of eruption plumes. Compared to MTSAT-2, Himawari-8's most important enhancement for monitoring eruption plumes is the addition of bands #10 and #11, which are sensitive to SO2 gas. These bands enable us to detect "SO2-rich" plumes that are overlooked in conventional images such as the IR (band #13) and split window (band #13–#15) images of Himawari-8. In this study, the Ash RGB is based on the report of the Meteorological Satellite Center (2015). It is composed of three beams: the RED beam corresponds to the brightness temperature (BT) difference between bands #15 and #13 (−4 to 2 K), the GREEN beam to the BT difference between bands #13 and #11 (−4 to 5 K), and the BLUE beam to the BT of band #13 (208–243 K). In the Ash RGB image, a pinkish color corresponds to ash, and a bright green–yellow color corresponds to SO2. Figure 3 shows the Ash RGB images for Aso volcano in this study.
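Each Ash RGB beam described above is a linear stretch of a brightness-temperature difference or band. A minimal compositing sketch follows; it assumes a plain linear stretch to 0–255 with clipping (the operational recipe may additionally apply gamma corrections), and the function and argument names are ours for illustration:

```python
import numpy as np

def scale(values, lo, hi):
    """Linearly map `values` from [lo, hi] onto 0-255, clipping outside the range."""
    frac = np.clip((np.asarray(values, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return frac * 255.0

def ash_rgb(bt11, bt13, bt15):
    """Compose an Ash RGB image from brightness temperatures [K] of
    Himawari-8 bands #11 (8.6 um), #13 (10.4 um) and #15 (12.4 um):
      RED   <- BT(band #15) - BT(band #13), stretched over -4..2 K
      GREEN <- BT(band #13) - BT(band #11), stretched over -4..5 K
      BLUE  <- BT(band #13),                stretched over 208..243 K
    """
    r = scale(bt15 - bt13, -4.0, 2.0)
    g = scale(bt13 - bt11, -4.0, 5.0)
    b = scale(bt13, 208.0, 243.0)
    return np.rint(np.dstack([r, g, b])).astype(np.uint8)
```

A pixel with a strongly negative band #13 − #11 difference would thus have a dark GREEN channel, which is one ingredient of the SO2 signature in the composite.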
The plume released from Aso volcano was identified in the Ash RGB approximately 10 min after the eruption, and was blown eastward by the wind while spreading to the north and south. Within 1 or 2 h after the eruption, the cloud could still be tracked by the IR (band #13) or split window images, although not clearly; by 3 or 4 h after the eruption, it could no longer be tracked by these images. In contrast, the Ash RGB could still track it clearly 12 h after the eruption (see Fig. 5). The color of the Ash RGB in the cloud region indicated SO2, so the cloud was presumed to be "SO2 rich". The Ash RGB (upper row), band #13 image (middle row) and split window (band #13–#15) image (bottom row). The left column is 02:45 JST (approximately 1 h after eruption), the center column is 04:45 JST (approximately 3 h after eruption), and the right column is 06:45 JST (approximately 5 h after eruption) Model description and methodology A numerical simulation provides a forecast of the SO2 cloud from an initial condition (i.e., the initial SO2 distribution for the numerical model), but the result depends on the accuracy of that initial condition: if it is not set accurately, the simulation will give an inaccurate forecast. Conversely, an initial condition that yields a forecast consistent with observations should reflect a realistic initial distribution. In this case, a simulation using realistic top and bottom heights of the SO2 cloud in the initial conditions should be consistent with the Ash RGB images. In this study, the numerical simulations of the SO2 cloud were performed with the JMA-GATM, which is based on Iwasaki et al. (1998). The JMA-GATM is a Lagrangian model that calculates the time evolution of the locations of many tracer particles representing volcanic ash.
Each particle's displacement during one time step \(\delta t\) is as follows: $$x(t + \delta t) = x(t) + \bar{u}\,\delta t + \Gamma \sqrt{2K_{\mathrm{h}}\,\delta t}$$ $$y(t + \delta t) = y(t) + \bar{v}\,\delta t + \Gamma \sqrt{2K_{\mathrm{h}}\,\delta t}$$ $$z(t + \delta t) = z(t) + \bar{w}\,\delta t + \sum_{\delta t'} \Gamma \sqrt{2K_{\mathrm{v}}\,\delta t'} - V_{\mathrm{g}}\,\delta t$$ where \(x(t), y(t), z(t)\) are the tracer's location at time t, and \(\bar{u}, \bar{v}\) are the wind velocities at the tracer location. \(\bar{w}\) is the vertical wind, calculated diagnostically from the horizontal divergence of \(\bar{u}, \bar{v}\). The third term in each equation represents the sub-grid-scale deviations of the horizontal and vertical wind. \(K_{\mathrm{h}}, K_{\mathrm{v}}\) are the horizontal and vertical diffusion coefficients, and \(\Gamma\) is a random displacement drawn from a Gaussian distribution with mean 0 and standard deviation 1. \(V_{\mathrm{g}}\) is the terminal fall velocity, which mainly depends on the size of the tracer. The wind velocities \(\bar{u}, \bar{v}, \bar{w}\) are taken from the meteorological field predicted by the Global Spectral Model (JMA-GSM) (JMA 2013) for numerical weather prediction. Horizontal and vertical diffusion are formulated with a random-walk model, in which the probability density of the displacements is Gaussian: \(\Gamma \sqrt{2K_{\mathrm{h}}\,\delta t}\) for horizontal diffusion, with \(K_{\mathrm{h}}\) = 4.0 × 10³ m²/s, and \(\Gamma \sqrt{2K_{\mathrm{v}}\,\delta t'}\) for vertical diffusion. For the vertical diffusion, \(K_{\mathrm{v}}\) is based on Louis et al. (1982), and \(\delta t'\) is a sub-step smaller than \(\delta t\). Wet and dry deposition processes are included in the JMA-GATM.
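Equations (1)–(3) amount to one advection-plus-random-walk update per tracer per time step. The sketch below illustrates this update only: in the JMA-GATM, \(K_{\mathrm{v}}\) varies following Louis et al. (1982) and \(V_{\mathrm{g}}\) depends on particle size, whereas here both are simple constants, and the sub-step count `n_sub` and parameter names are our assumptions:

```python
import math
import random

def tracer_step(x, y, z, u, v, w, dt, kh=4.0e3, kv=10.0, n_sub=10, vg=0.0,
                rng=random):
    """Advance one tracer by one time step dt [s] following Eqs. (1)-(3):
    advection by the resolved wind (u, v, w) [m/s], plus Gaussian random-walk
    displacements for horizontal/vertical sub-grid diffusion, minus
    gravitational settling at speed vg (set to 0 for SO2 tracers).
    kh, kv are the horizontal/vertical diffusion coefficients [m^2/s];
    vertical diffusion is accumulated over n_sub sub-steps dt' = dt/n_sub."""
    x += u * dt + rng.gauss(0.0, 1.0) * math.sqrt(2.0 * kh * dt)
    y += v * dt + rng.gauss(0.0, 1.0) * math.sqrt(2.0 * kh * dt)
    z += w * dt - vg * dt
    dt_sub = dt / n_sub
    for _ in range(n_sub):  # sum of Gamma * sqrt(2 * Kv * dt') over sub-steps
        z += rng.gauss(0.0, 1.0) * math.sqrt(2.0 * kv * dt_sub)
    return x, y, z
```

With the random terms switched off, the update reduces to pure advection, which makes the deterministic part easy to verify.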
Tracers are scavenged by the wet/dry deposition processes. Wet deposition comprises washout (below-cloud scavenging) and rainout (in-cloud scavenging); however, the JMA-GATM includes only washout, which scavenges tracers using the precipitation calculated by the JMA-GSM. Tracers near the surface are scavenged by dry deposition, for which the deposition velocity is calculated from the aerodynamic resistance (Kitada et al. 1986). Gravitational settling is calculated based on Suzuki (1983). In this numerical simulation, 10,000 tracers in the JMA-GATM, representing SO2, are released above Aso volcano along a vertical line source from the bottom height to the top height over the 3 min following the eruption time, 01:46 JST 8 October 2016 (FT = 0 h), and the simulations were run up to 15 h after the eruption. The initial vertical profiles of the SO2 cloud span bottom heights of 2–16 km and top heights of 5–17 km, both in 1-km increments, with the top height always above the bottom height: i.e., 2–5 km, 2–6 km, and so on up to 2–17 km, repeated for each successive bottom height up to the final profile of 16–17 km. This gives a total of 117 experiments. The meteorological fields of the JMA-GSM forecast with an initial time of 21 JST October 7, 2016 were used. The meteorological fields, such as wind, temperature, pressure and precipitation, were taken as grid point values every 3 h (00, 03, 06, 09, 12, 15, 18, 21 JST). The time step \(\delta t\) in Eqs. (1)–(3) is 600 s. For the location of each tracer at each time step, the meteorological field used in these processes is interpolated linearly in time and space. Because the tracers in these JMA-GATM simulations play the role of SO2 cloud, gravitational settling is switched off. Results of numerical simulations Representative results of the 117 experiments with the JMA-GATM are shown in Fig. 4. The spread of the SO2 cloud in each result depends strongly on the initial vertical profile.
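As a consistency check on the experiment count, the bottom/top combinations can be enumerated directly (a sketch only; the constraint that top heights start at 5 km is read from the text):

```python
# Bottom heights 2-16 km and top heights 5-17 km, in 1-km steps,
# with the top always strictly above the bottom.
profiles = [(bottom, top)
            for bottom in range(2, 17)
            for top in range(max(bottom + 1, 5), 18)]
```

The enumeration runs from the first profile (2–5 km) to the last (16–17 km) and yields 117 combinations, matching the stated number of experiments.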
For example, for an initial SO2 cloud at an altitude of 2–7 km, the cloud was blown and spread widely from north–northeast to east–northeast of the volcano (upper row in Fig. 4). For an initial SO2 cloud at 7–14 km, it was blown and spread more narrowly, from east–northeast to east compared with the 2–7 km cloud, but spread north–south (middle row in Fig. 4). For an initial SO2 cloud at 14–17 km, the cloud was blown to the east while spreading in an east–west direction (bottom row in Fig. 4). These differences are caused by wind direction shear and wind speed shear: at the time of the eruption, at lower altitudes (from the surface to 7 km) the wind blew toward the north–northeast to east–northeast, at altitudes of 7–14 km it blew toward the east–northeast to east, and at altitudes of 14–17 km it blew eastward with wind speed shear. These differences in the direction of the SO2 cloud at each altitude were visible from 20–30 min after the eruption onward. Results of the SO2 simulations by JMA-GATM. Upper row: simulation from an initial SO2 altitude of 2–7 km. Middle row: simulation from an initial SO2 altitude of 7–14 km. Bottom row: simulation from an initial SO2 altitude of 14–17 km. The left column is approximately 1 h after eruption (02:46 JST). The center column is approximately 3 h after eruption (04:46 JST). The right column is approximately 5 h after eruption (06:46 JST). The color bar indicates the height [m] of the SO2 tracers Comparing the Ash RGB with the results of the model simulations, SO2 vertical profiles at altitudes of 7–13 and 7–14 km appear most likely. The simulation results show that SO2 cloud below 7 km was blown to the northeast, unlike the distribution indicated by the Ash RGB (upper row of Fig. 3). However, other observations suggest that thin SO2 cloud was in fact transported below 7 km (discussed below).
For the simulations with SO2 cloud above 14 km, the south end of the cloud spread to the east and west under wind speed shear, unlike the SO2 cloud deduced from the Ash RGB. Because the spread of the SO2 cloud at 7–13 and 7–14 km is very similar, we could not discriminate between top heights of 13 and 14 km. In the simulations, the north end of the SO2 cloud reached Kanto, Japan, as did the SO2 cloud observed by the Ash RGB 12 h after the eruption. However, south of approximately 35° north latitude, the simulation does not match the SO2 cloud of the Ash RGB: the SO2 cloud in the Ash RGB is located west of that in the simulation. This difference is possibly caused by the wind in the atmospheric fields. For further interpretation of our results, it must be noted that there is uncertainty in the horizontal diffusion coefficient. Because the coefficient depends on the resolution of the meteorological numerical model (Iwasaki et al. 1998), it is difficult to find a universal value. In this study, we determined the coefficient by comparing the Ash RGB with simulation results for several values of plausible magnitude. However, for more detailed analyses, such as a quantitative comparison of ash fall amounts around the volcano, the coefficient should be determined more carefully (e.g., Tanaka and Yamamoto 2002). Comparing the Ash RGB image and the OMI measurements approximately 12 h after the eruption, they are very similar in spread except around the north end of the SO2 cloud (Fig. 5): OMI detected SO2 in a more northern region (~37° north) than the Ash RGB (~35–36° north). For the simulation with an initial profile at an altitude of 5–14 km, the SO2 cloud spread into the northern region seen by OMI but not by the Ash RGB. The locations of pilots' reports noting the smell of SO2 are consistent with the simulation of the 5–14 km initial profile (height errors are approximately ±2 km, except for point h in Fig. 5) and with OMI.
Therefore, it seems the Ash RGB could not detect the thin SO2 region below an altitude of 7 km. The two left figures are the time series (FT = 1 (02:46 JST) to FT = 13 (14:46 JST)) of the JMA-GATM simulations and pilots' reports. (The upper figure is the result from an initial SO2 altitude of 5–14 km; the bottom figure is the result from an initial SO2 altitude of 7–14 km.) The lettered points are pilots' reports including "VA SMELL" or "SULFUR", as follows: a: 06:00 JST (FL270–320); b: 09:16 JST (FL210); c: 09:34 JST (FL210–240); d: 09:50 JST (FL180); e: 10:20 JST (FL190); ff': 10:28 JST (FL090); g: 10:39 JST (FL290); h: 13:32 JST (FL090–100); xx': 06:08 JST (FL370–390); y: 08:31 JST (FL170); zz': 10:40 JST (FL360). FL is flight level (FL100 = 10,000 feet). The right upper figure is the SO2 observation from Aura/OMI, 00:35–02:17 JST (NASA-GFSC et al. 2016). The right bottom figure is the Ash RGB at 12:46 JST We can calculate the total mass using the eruption duration of 160–220 s and a top height of 13–14 km via the relationship \(H = 2.00 \times \dot{V}^{0.241}\) (Mastin et al. 2009), where H is the top height (km) of the eruption plume and \(\dot{V}\) is the volumetric flow rate (m³ dense-rock equivalent (DRE) per second). Assuming a magma density of 2500 kg/m³, the total mass of the eruption is 6.1–11.8 × 10⁸ kg, consistent with the field survey estimate of 6.0–6.5 × 10⁸ kg (Miyabuchi et al. 2017). In addition, the present estimate of a top height of 13–14 km is also consistent with the JMA-RATM simulation (Shimbori 2017), which showed that the volcanic ash fall forecast calculated with a top height of 13.1 km was most consistent with the observed ash fall distribution. Therefore, the current simulation results are consistent with both the field survey and previous simulation results. A phreatomagmatic eruption of Aso volcano in Kyushu, Japan, occurred at 01:46 JST October 8, 2016.
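The quoted mass range can be reproduced by inverting the Mastin et al. (2009) relationship for the volumetric flow rate and integrating over the eruption duration. In this sketch, H is taken as the plume height above the vent, with a vent altitude of roughly 1.3 km; that vent altitude is our assumption for illustration (it is not stated in the text), as are the function and parameter names:

```python
def eruption_mass(top_height_km, duration_s, vent_km=1.3, density=2500.0):
    """Invert H = 2.00 * Vdot**0.241 (Mastin et al. 2009) for the volumetric
    flow rate Vdot [m^3 DRE/s], where H [km] is assumed to be the plume
    height above the vent (vent altitude vent_km is an assumed value),
    then multiply by the eruption duration [s] and magma density [kg/m^3]
    to obtain the total erupted mass [kg]."""
    h = top_height_km - vent_km           # plume height above the vent [km]
    vdot = (h / 2.00) ** (1.0 / 0.241)    # volumetric flow rate [m^3 DRE/s]
    return vdot * duration_s * density

low = eruption_mass(13.0, 160.0)    # ~6.1e8 kg
high = eruption_mass(14.0, 220.0)   # ~1.18e9 kg, i.e. ~11.8e8 kg
```

Under these assumptions, the shortest duration with the lower plume top gives about 6.1 × 10⁸ kg and the longest duration with the higher top about 11.8 × 10⁸ kg, matching the quoted range.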
The Ash RGB images, using three observation bands of Himawari-8 including the bands sensitive to SO2, could detect an "SO2 rich" cloud that could not be detected by conventional IR (band #13) or split window images, for more than 12 h after the eruption. In this study, we estimated the altitude at which the SO2 cloud was blown by comparing the Ash RGB images with JMA-GATM simulations; the height of the SO2 cloud is estimated to have been 7–13 or 7–14 km. OMI measurements and pilots' reports suggest that the Ash RGB images could not detect thin SO2 cloud, and that thin SO2 cloud existed at altitudes of 5–7 km. Using a top height of 13–14 km and an eruption duration of 160–220 s from the volcanic earthquake, the total emission mass of the eruption is estimated as 6.1–11.8 × 10⁸ kg, broadly consistent with the 6.0–6.5 × 10⁸ kg from the field survey. BT: brightness temperature; JMA: Japan Meteorological Agency; JMA-GATM: JMA Global Atmospheric Transport Model; JMA-RATM: JMA Regional Atmospheric Transport Model; OMI: Ozone Monitoring Instrument; VAA: Volcanic Ash Advisory Bessho K, Date K, Hayashi M, Ikeda A, Imai T, Inoue H, Kumagai Y, Miyakawa T, Murata H, Ohno T, Okuyama A, Oyama R, Sasaki Y, Shimazu Y, Shimoji K, Sumida Y, Suzuki M, Taniguchi H, Tsuchiyama H, Uesawa D, Yokota H, Yoshida R (2016) An introduction to Himawari-8/9—Japan's new-generation geostationary meteorological satellites. J Meteorol Soc Jpn 94:151–183. https://doi.org/10.2151/jmsj.2016-009 Cooke MC, Francis PN, Millington S, Saunders R, Witham C (2014) Detection of the Grímsvötn 2011 volcanic eruption plumes using infrared satellite measurements. Atmos Sci Lett 15:321–327. https://doi.org/10.1002/asl2.506 Francis PN, Cooke MC, Saunders RW (2012) Retrieval of physical properties of volcanic ash using Meteosat: a case study from the 2010 Eyjafjallajökull eruption. J Geophys Res 117:D00U09.
https://doi.org/10.1029/2011jd016788 Hasegawa Y, Sugai A, Hayashi Y, Hayashi Y, Saito S, Shimbori T (2015) Improvements of volcanic ash fall forecasts issued by the Japan Meteorological Agency. J Appl Volcanol 4:2. https://doi.org/10.1186/s13617-014-0018-2 Iwasaki T, Maki T, Katayama K (1998) Tracer transport model at Japan Meteorological Agency and its application to the ETEX data. Atmos Environ 32:4285–4295. https://doi.org/10.1016/S1352-2310(98)00171-X Japan Meteorological Agency (2013) Outline of the operational numerical weather prediction at the Japan Meteorological Agency, pp 43–61. http://www.jma.go.jp/jma/jma-eng/jma-center/nwp/outline2013-nwp/index.htm. Accessed 19 July 2017 Japan Meteorological Agency and Volcanological Society of Japan (2013) National catalogue of the active volcanoes in Japan. http://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/souran_eng/menu.htm. Accessed 26 Oct 2017 Kitada T, Carmichael GR, Peters LK (1986) Effects of dry deposition on the concentration-distributions of atmospheric pollutants within land- and sea-breeze circulations. Atmos Environ 20:1999–2010. https://doi.org/10.1016/0004-6981(86)90341-0 Louis JF, Tiedtke M, Geleyn JF (1982) A short history of the PBL parameterization at ECMWF. Workshop on planetary boundary layer parameterization, Shinfield Park, Reading, 25–27 Nov. 1981. Available via DIALOG. https://www.ecmwf.int/en/elibrary/10845-short-history-pbl-parameterization-ecmwf. Accessed 19 July 2017 Mastin LG, Guffanti M, Servanckx R, Webley P, Barsotti S, Dean K, Durant A, Ewert JW, Neri A, Rose WI, Schneider D, Siebert L, Stunder B, Swanson G, Tupper A, Volentik A, Waythomas CF (2009) A multidisciplinary effort to assign realistic source parameters to models of volcanic ash-cloud transport and dispersion during eruptions. J Volcanol Geotherm Res 186:10–21. https://doi.org/10.1016/j.jvolgeores.2009.01.008 Meteorological Satellite Center (2015) Ash RGB detection of volcanic ash. 
http://www.data.jma.go.jp/mscweb/en/VRL/VLab_RGB/materials/RGB-Ash-Detection_of_Volcanic_Ash.pdf. Accessed 19 July 2017 Miyabuchi Y, Ikebe S, Watanabe K (2008) Geological constraints on the 2003–2005 ash emissions from the Nakadake crater lake, Aso Volcano, Japan. J Volcanol Geotherm Res 178:169–183. https://doi.org/10.1016/j.jvolgeores.2008.06.025 Miyabuchi Y, Maeno F, Nakada S, Nagai M, Iizuka Y, Hoshizumi H, Tanaka A, Itoh J, Kawanabe Y, Oishi M, Yokoo A, Ohkura T (2017) The October 7–8, 2016 eruptions of Nakadake crater, Aso Volcano, Japan and their deposits. Abstract of Japan Geoscience Union—American Geoscience Union Joint Meeting 2017: SVC47-11. https://confit.atlas.jp/guide/event-img/jpguagu2017/SVC47-11/public/pdf?type=in. Accessed 19 July 2017 NASA-GFSC, Michigan Technological University, National Institute of Advanced Industrial Science and Technology, The School of Science at the University of Tokyo (2016) Amount of SO2 from Aso eruption, 8th Oct, 2016. https://www.gsj.jp/hazards/volcano/kazan-bukai/yochiren/aso_20161011_2.pdf. Accessed 19 July 2017 (in Japanese, with English captions) Pavolonis M, Sieglaff J (2010) GOES-R Advanced Baseline Imager (ABI) Algorithm theoretical basis document for volcanic ash (detection and height). Version 2.0, NOAA NESDIS Center for Satellite Applications and Research. Available via DIALOG. http://www.goes-r.gov/products/baseline-volcanic-ash.html. Accessed 19 July 2017 Prata AJ (1989a) Observations of volcanic ash clouds in the 10–12 μm window using AVHRR/2 data. Int J Remote Sens 10:751–761. https://doi.org/10.1080/01431168908903916 Prata AJ (1989b) Infrared radiative transfer calculations for volcanic ash clouds. Geophys Res Lett 16:1293–1296. https://doi.org/10.1029/GL016i011p01293 Sasaki H, Naruke S, Chiba T (2017) Characteristics of damage caused by lapilli fall of the October 8, 2016 eruption of Aso volcano, Japan. Abstract of Japan Geoscience Union—American Geoscience Union Joint Meeting 2017: SVC49-11. 
https://confit.atlas.jp/guide/event-img/jpguagu2017/SVC49-11/public/pdf. Accessed 19 July 2017 Shimbori T (2017) Volcanic ash and lapilli blowing in the wind. Wind Eng JAWE 42:261–272 (in Japanese) Suzuki T (1983) A theoretical model for dispersion of tephra. In: Shimozuru D, Yokoyama I (eds) Arc volcanism, physics and tectonics. Terra Scientific Publishing Company, Meguro, pp 95–113 Tanaka HL, Yamamoto K (2002) Numerical simulation of volcanic plume dispersal from Usu volcano in Japan on 31 March 2000 using the PUFF model. Earth Planets Space 54:743–752. https://doi.org/10.1186/BF03351727 Tokyo VAAC (2016) Volcanic ash advisories text of 17:56 UTC, 07 Oct. 2016 ASOSAN 2016/8. http://ds.data.jma.go.jp/svd/vaac/data/Archives/2016_vaac_list.html. Accessed 19 July 2017 Watson IM, Realmuto VJ, Rose WI, Prata AJ, Bluth GJS, Gu Y, Bader CE, Yu T (2004) Thermal infrared remote sensing of volcanic emissions using the moderate resolution imaging spectroradiometer. J Volcanol Geotherm Res 135:75–89. https://doi.org/10.1016/j.jvolgeores.2003.12.017 Yamasato H, Funasaki J, Takagi Y (2013) The Japan Meteorological Agency's volcanic disaster mitigation initiatives. Technical Note of the National Research Institute for Earth Science and Disaster Prevention 380:101–107. http://vivaweb2.bosai.go.jp/v-hazard/pdf/03E.pdf. Accessed 19 July 2017 Yang K, Krotkov NA, Krueger AJ, Carn SA, Bhartia PK, Levelt PF (2007) Retrieval of large volcanic SO2 columns from the Aura Ozone Monitoring Instrument: comparison and limitations. J Geophys Res 112:D24S43. https://doi.org/10.1029/2007jd008825 KI performed the SO2 simulations and the comparison between observation and simulation results. YH implemented the Ash RGB algorithm for Himawari-8 and analyzed the images. TS gathered the observation data, such as the volcanic earthquake records and pilots' reports, and analyzed them. All authors read and approved the final manuscript.
The numerical simulations in this study were performed using a Fujitsu FX100 supercomputer system at the Meteorological Research Institute. Kensuke Ishii is a researcher at the Meteorological Research Institute. Yuta Hayashi is an examination officer at the National Personnel Authority. Toshiki Shimbori is a senior researcher at the Meteorological Research Institute. Volcanology Research Department, Meteorological Research Institute, 1-1 Nagamine, Tsukuba, Ibaraki, 305-0052, Japan Kensuke Ishii & Toshiki Shimbori National Personnel Authority, 1-2-3 Kasumigaseki, Chiyoda-ku, Tokyo, 100-8913, Japan Yuta Hayashi Correspondence to Kensuke Ishii. Ishii, K., Hayashi, Y. & Shimbori, T. Using Himawari-8, estimation of SO2 cloud altitude at Aso volcano eruption, on October 8, 2016. Earth Planets Space 70, 19 (2018). https://doi.org/10.1186/s40623-018-0793-9
A systematic review and meta-analysis of the effectiveness of food safety education interventions for consumers in developed countries Ian Young1, Lisa Waddell1,2, Shannon Harding1, Judy Greig1, Mariola Mascarenhas1, Bhairavi Sivaramalingam1,2, Mai T. Pham1,2 & Andrew Papadopoulos2 Foodborne illness has a large public health and economic burden worldwide, and many cases are associated with food handled and prepared at home. Educational interventions are necessary to improve consumer food safety practices and reduce the associated burden of foodborne illness. We conducted a systematic review and targeted meta-analyses to investigate the effectiveness of food safety education interventions for consumers. Relevant articles were identified through a preliminary scoping review that included: a comprehensive search in 10 bibliographic databases with verification; relevance screening of abstracts; and extraction of article characteristics. Experimental studies conducted in developed countries were prioritized for risk-of-bias assessment and data extraction. Meta-analysis was conducted on data subgroups stratified by key study design-intervention-population-outcome categories and subgroups were assessed for their quality of evidence. Meta-regression was conducted where appropriate to identify possible sources of between-trial heterogeneity. We identified 79 relevant studies: 17 randomized controlled trials (RCTs); 12 non-randomized controlled trials (NRTs); and 50 uncontrolled before-and-after studies. Several studies did not provide sufficient details on key design features (e.g. blinding), with some high risk-of-bias ratings due to incomplete outcome data and selective reporting. 
We identified moderate to high confidence in the results from two large RCTs investigating community- and school-based educational training interventions on behaviour outcomes in children and youth (median standardized mean difference [SMD] = 0.20, range: 0.05, 0.35); in two small RCTs evaluating video and written instructional messaging on behavioural intentions in adults (SMD = 0.36, 95 % confidence interval [CI]: 0.02, 0.69); and in two NRT studies for university-based education on attitudes of students and staff (SMD = 0.26, 95 % CI: 0.10, 0.43). Uncontrolled before-and-after study outcomes were very heterogeneous, and we have little confidence that the meta-analysis results reflect the true effect. Some variation in outcomes was explained in meta-regression models, including a dose effect for behaviour outcomes in RCTs. In controlled trials, food safety education interventions showed significant effects in some contexts; however, many outcomes were very heterogeneous and do not provide a strong quality of evidence to support decision-making. Future research in this area is needed using more robust experimental designs to build on interventions shown to be effective in uncontrolled before-and-after studies. Foodborne illness has a large public health and economic burden worldwide. For example, an estimated 48 million cases of foodborne illness occur each year in the United States (US), causing approximately 128,000 hospitalizations and 3000 deaths [1, 2]. In addition, 14 major foodborne pathogens are estimated to cause US$14.0 billion in costs and a loss of 61,000 quality-adjusted life years annually [3]. In Canada, approximately 4 million cases of foodborne illness occur each year [4], with acute gastroenteritis estimated to cost $3.7 billion annually [5]. Reliable data on the burden of foodborne illness due to consumer mishandling of food prepared and consumed in domestic households are not routinely and consistently collected and reported in many countries.
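For reference, the standardized mean difference reported throughout these results divides the between-group difference in means by a pooled standard deviation. A minimal sketch of the metric follows (Cohen's d; the review may have applied the small-sample Hedges' g correction, and the function name is ours):

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d): the intervention-minus-control
    difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd
```

For example, an intervention group scoring one point higher than control on a scale with a pooled SD of two points yields an SMD of 0.5, conventionally interpreted as a medium effect.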
However, previous research suggests that most sporadic cases of foodborne illness, which are often underreported and underdiagnosed, are more frequently associated with food consumed at home than other settings [6–8], and across Europe reported outbreaks of foodborne illness are largely associated with domestic household kitchens [9]. Many consumers tend to expect the foods they purchase to be safe and believe that there is a low risk of becoming ill from food prepared and consumed in their home [8, 10, 11]. In addition, previous surveys of food safety behaviours among consumers in the US, Canada, and the United Kingdom have found that many consumers do not follow key safe food handling recommendations [8, 11–13]. These studies, as well as government outbreak reports and food safety policy documents [14–17], have identified a need for enhanced food safety education for consumers in targeted areas. Educational interventions for consumers are necessary to increase their knowledge and awareness about food safety, to change their food handling and preparation behaviours, and ultimately, to decrease the incidence and burden of foodborne illness due to food prepared and handled at home [18–20]. There is a need to update and expand upon previous systematic reviews conducted in this area, which are significantly outdated [21, 22] or had restricted inclusion criteria for the interventions and study designs considered [19]. Therefore, we conducted a comprehensive scoping and systematic review to synthesize the effectiveness of all types of food safety educational interventions for consumers. We report here on the systematic review component of this project; the scoping review results are summarized and reported in a separate publication [23]. This review was reported in accordance with the PRISMA guidelines [24] (see checklist in Additional file 1). 
Review team, question, scope, and eligibility criteria The review followed a protocol that was developed a priori and is available from the corresponding author upon request; methods followed standard guidelines for scoping and systematic reviews [25, 26]. The core review team consisted of seven individuals with complementary topic (i.e. food safety education) and methodological (i.e. knowledge synthesis) expertise. In addition, we engaged six knowledge-users in the review through an expert advisory committee [27]. The committee was engaged using an e-mailed questionnaire once before the review proceeded to provide input on the review scope, inclusion criteria, and search strategy, and again after completion of the scoping review stage to provide input on the article characterization results and the prioritization of articles for systematic review (risk-of-bias assessment and data extraction) and meta-analysis. The key review question was "What is the effectiveness of targeted educational interventions to improve consumer food safety knowledge, attitudes, and behaviours?" Interventions of interest were categorized into two broad categories: 1) training workshops, courses, and curricula in school, academic, and community settings; and 2) social marketing campaigns and other types of educational messaging materials, such as print media (e.g. exposure to brochures, website information, food product label information) and audio-video media (e.g. radio or TV ads). The review scope included primary research published in English, French, or Spanish, with no publication date restrictions, in any of the following document formats: peer-reviewed journal articles, research reports, dissertations, and conference abstracts or papers. Interventions that did not have an explicit food safety component were excluded (e.g. generic hand-washing not in a food handling context). 
Consumers were defined as those who prepare or handle food for consumption at home, including volunteer food handlers for special events (e.g. potlucks). We also included studies targeted at educators of consumers (e.g. train-the-trainer studies). Studies targeted at food handlers employed in the food service industry were excluded [28]. Search strategy and scoping review methods A comprehensive and pre-tested search strategy was implemented on May 20, 2014, in 10 bibliographic databases: Scopus, PubMed, Agricola, CAB Abstracts, Food Safety and Technology Abstracts, PsycINFO, Educational Resources Information Center (ERIC), Cumulative Index to Nursing and Allied Health Literature (CINAHL), ProQuest Public Health, and ProQuest Dissertations and Theses. The search algorithm comprised a targeted combination of food safety-related terms (e.g. food safety, food hygiene), population-setting terms (e.g. consumer, adults, home), intervention terms (e.g. program, course, campaign), and outcome terms (e.g. behaviour, knowledge, attitudes). The search was verified by hand-searching two journals (Environmental Health Review and the Journal of Nutrition Education and Behavior "Great Educational Materials" Collection), reviewing the websites of 24 relevant organizations, and reviewing the reference lists of 15 review articles and 15 relevant primary research articles. The titles and abstracts of identified citations were screened for relevance to the review question using a pre-specified and pre-tested form. The form was also used to identify review articles to be used for search verification. Potentially relevant citations were then procured as full articles, confirmed for relevance, and characterized using a pre-specified and pre-tested form consisting of 29 questions about the article type, study design, data collection methods, and details of the interventions, populations, and outcomes investigated. 
Full details on the search strategy, including database-specific algorithms, and a copy of the screening and characterization forms are reported in Additional files 2 and 3.
Risk-of-bias assessment and data extraction
In consultation with the expert advisory committee, we decided to limit further analysis to experimental studies (randomized and non-randomized controlled trials and uncontrolled before-and-after studies) conducted in North America, Europe, Australia, and New Zealand. The rationale for this decision was that these studies were deemed to provide the most relevant evidence to our main stakeholders (Canadian food safety decision-makers and practitioners). All relevant studies meeting these criteria were assessed for their risk of bias at the outcome level, and relevant outcomes were extracted using two pre-specified forms applied in sequence (Additional file 3). The risk-of-bias form contained four initial screening questions to confirm eligibility, followed by up to 12 risk-of-bias criteria questions depending on study design, including an overall risk-of-bias rating for each main outcome. Each criterion was rated as low, unclear, or high risk. The risk-of-bias criteria were adapted from existing tools for randomized and non-randomized experimental studies [26, 29, 30]. Outcome data and quantitative results were then extracted from each study for each intervention-population-outcome combination reported. Citations identified in the search were uploaded to RefWorks (Thomson ResearchSoft, Philadelphia, PA) and duplicates were removed manually. Citations were imported into the web-based systematic review software DistillerSR (Evidence Partners, Ottawa, ON, Canada), which was used to conduct each stage of the scoping and systematic review (from relevance screening to data extraction). Results were exported as Microsoft Excel spreadsheets for formatting and analysis (Excel 2010, Microsoft Corporation, Redmond, WA).
The relevance screening and article characterization forms were pre-tested by nine reviewers on 50 and 10 purposively-selected abstracts and articles, respectively. Reviewing proceeded when kappa scores for inclusion/exclusion agreement between reviewers were >0.8. The risk-of-bias and data extraction forms were pre-tested by three reviewers (I.Y., L.W., and S.H.) on six articles. In all cases, the pre-test results were discussed among reviewers and the forms were revised and clarified as needed. Nine reviewers conducted the scoping review stages (relevance screening and article characterization) and two reviewers conducted risk-of-bias assessment and data extraction (I.Y. and S.H.). For all stages, two independent reviewers assessed each citation or article. Disagreements between reviewers were resolved by consensus, and when necessary, by judgement of a third reviewer.
Relevant studies were stratified into subgroups for meta-analysis [26, 31]. First, studies were stratified into three main groups of study designs: 1) randomized controlled trials (RCTs); 2) non-randomized controlled trials (NRTs); and 3) uncontrolled before-and-after studies. Second, data were stratified into the two intervention categories of interest (training workshops/courses and social marketing campaigns/other messaging). Data were then stratified by target population into three main categories: 1) children and youth (<18 years old); 2) adults (≥18 years old); and 3) educators of consumers. Within each of these subgroups, three main outcome types were considered: 1) knowledge; 2) attitudes; and 3) behaviours. Two additional theoretical construct outcomes investigated in a smaller number of studies were also assessed: 4) behavioural intentions; and 5) stages of change [32, 33]. Separate meta-analyses were then conducted in each data subgroup for dichotomous and continuous outcome measures when sufficiently reported data were available from ≥2 studies.
Dichotomous outcomes were analyzed using the relative risk (RR) metric and continuous data were analyzed using the standardized mean difference (SMD; Hedges' g), which accounts for the variable and non-standardized outcome scales reported across studies [26, 31]. All models used the DerSimonian and Laird method for random effects [34]. The unit of analysis was the individual trial (intervention-population-outcome combination) reported within studies. Many studies with continuous outcomes did not report the standard deviations required for meta-analysis; in these cases, other reported summary statistics (e.g. confidence intervals, standard errors, t values, P values, F values) were used to approximate the missing values using the formulas described in Higgins and Green (2011) [26] and implemented in CMA software (Comprehensive Meta-Analysis Version 2, Biostat, Inc., Englewood, NJ). For meta-analyses of RCTs and NRTs, some studies reported differences in changes from baseline (pre-to-post tests) between study groups; these were combined in the same analysis as studies reporting differences in final outcome measures [31, 35]. When these studies did not report the standard deviation of the mean change, or the other summary statistics described above from which this value could be approximated, only final outcome measures were used in the analysis, provided that baseline measurements were similar. When baseline measurements differed, best available estimates of the pre-post correlation value were imputed from previous studies in the literature that examined similar outcomes in similar populations [26, 31]. Specifically, a pre-post correlation of 0.81 was used for knowledge and attitude outcomes [36] and a value of 0.83 was used for behaviour outcomes [37] (Additional file 4).
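To make the effect measures just described concrete, the sketch below computes Hedges' g from group summary statistics and approximates the standard deviation of a pre-to-post change score from an imputed pre-post correlation, following the standard formulas in Higgins and Green (2011). The function names are ours; this is an illustrative sketch, not the CMA software used in the review.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    df = n_t + n_c - 2
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / df)
    d = (mean_t - mean_c) / sd_pooled      # Cohen's d
    j = 1 - 3 / (4 * df - 1)               # small-sample correction factor
    return j * d

def sd_of_change(sd_pre, sd_post, r):
    """SD of a pre-to-post change score given an imputed pre-post
    correlation r (e.g. r = 0.81 for knowledge/attitudes, 0.83 for
    behaviours, as used in this review)."""
    return math.sqrt(sd_pre ** 2 + sd_post ** 2 - 2 * r * sd_pre * sd_post)
```

For example, with group means of 10 vs. 8, a common SD of 2, and 20 participants per group, hedges_g gives roughly 0.98; note that higher imputed correlations shrink the change-score SD, which is why the imputed value matters for the pooled estimates.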
The same imputations were conducted for all meta-analyses of SMD measures in uncontrolled before-and-after studies, as none of these studies reported the pre-post correlation values necessary to conduct an appropriate paired analysis. Sensitivity analyses were conducted in each case by comparing to pre-post correlations of 0.2 and 0.9 [26, 31, 38]. Similarly, none of the uncontrolled before-and-after studies measuring dichotomous outcomes reported data in a matched format; therefore, these outcomes were analyzed as unmatched data, which has been shown to yield similar results to, and be easier to interpret than, matched analyses [39]. Finally, some studies reported the number of participants in >2 ordinal categories (e.g. always, usually, sometimes, never); for ease of analysis and interpretation, these outcomes were dichotomized into the most logical categories based on their comparability to other dichotomous data available in the same data subset. Some studies reported results for multiple outcomes measuring the same construct (e.g. knowledge scores) in the same group of participants. To avoid counting the same participants more than once in the same meta-analysis, we computed a combined measure of effect for each outcome in these studies [31]. The combined effect was taken as the mean of the individual measures, while its variance was calculated using the following formula [31]: $$ {V}_{\overline{Y}}={\left(\frac{1}{m}\right)}^2\mathrm{var}\left({\displaystyle {\sum}_{j=1}^m{Y}_j}\right)={\left(\frac{1}{m}\right)}^2\left({\displaystyle {\sum}_{j=1}^m{V}_j}+{\displaystyle {\sum}_{j\ne k}}\left({r}_{jk}\sqrt{V_j}\sqrt{V_k}\right)\right), $$ where m indicates the number of outcomes being combined, V_j and V_k indicate the variances of the jth and kth outcomes being combined, and r_jk refers to the correlation between the jth and kth constructs being combined. Unfortunately, a measure of the correlation (r) between each pair of constructs was only reported for one of the study outcomes combined in this manner [40].
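In code, the combined-effect calculation above looks like the following sketch, which simplifies the r_jk terms to a single common correlation r between all pairs of constructs (as was effectively done when a single imputed value was used); illustrative only, with function names of our own choosing.

```python
import math

def combined_effect(effects, variances, r):
    """Mean of m correlated effect estimates and its variance.

    Implements the formula above with a common pairwise correlation r:
    the covariance terms r * sqrt(Vj) * sqrt(Vk) are summed over all
    ordered pairs j != k, matching the sum over j != k in the formula.
    """
    m = len(effects)
    mean_effect = sum(effects) / m
    var_sum = sum(variances)
    for j in range(m):
        for k in range(m):
            if j != k:
                var_sum += r * math.sqrt(variances[j] * variances[k])
    return mean_effect, var_sum / m ** 2
```

As a sanity check on the formula: with r = 1 and equal variances the combined variance equals each individual variance (perfectly correlated outcomes add no information), while with r = 0 it reduces to V/m.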
For all other studies, we imputed plausible correlation values taken from averages reported in other relevant studies in the literature that tested or evaluated food safety knowledge, attitude, or behaviour questionnaires in similar populations and contexts [36, 40–42]. Specifically, we used average correlation values of 0.36, 0.47, and 0.62 for knowledge, attitude, and behaviour outcomes, respectively, and conducted a sensitivity analysis in each case by comparing to values of 0.2 and 0.8 to identify potential impacts on the outcomes using a range of possible values [31] (Additional file 4). In studies that compared more than one intervention and/or control group, one of the following decisions was made on a case-by-case basis depending on the nature of the groups being compared and their relevance to the review question: 1) groups were combined into a single pair-wise comparison using the formula described in Higgins and Green (2011) [26]; or 2) the control group was split into two or more groups with a smaller sample size. A table outlining the selected approach and decision in each of these cases is shown in the supplementary materials (Additional file 5). For studies that reported outcome measurements for multiple time points (e.g. pre, post, and follow-up), we used the pre-to-post measure in the meta-analysis calculation as this was most comparable to what other studies reported across all subgroups [43]. Sensitivity analyses were conducted in these cases by repeating the analysis with the pre-to-follow-up measures to explore the impact of a longer follow-up on the intervention effect. Heterogeneity in all meta-analyses was measured using I2, which indicates the proportion of variation in effect estimates across trials that is due to heterogeneity rather than sampling error [44]. Heterogeneity was considered high and average estimates of effect were not shown when I2 > 60 % [26, 44]. 
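A minimal sketch of the DerSimonian and Laird random-effects pooling and the I2 heterogeneity measure described above; the review's actual analyses were run in CMA software, so this is only to make the computation concrete.

```python
def dersimonian_laird(effects, variances):
    """Pooled random-effects estimate, tau^2, and I^2 (DerSimonian & Laird)."""
    w = [1.0 / v for v in variances]                     # inverse-variance weights
    pooled_fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (e - pooled_fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-trial variance
    # I^2: % of variation across trials beyond sampling error
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2, i2  # here, I^2 > 60 would trigger median/range reporting
```

With identical trial estimates, Q = 0 and both tau^2 and I^2 are zero; widely divergent estimates push I^2 toward 100 %.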
In these cases, a median and range of effect estimates from individual trials in the meta-analysis subgroup were shown instead, as presenting pooled meta-analysis estimates in the presence of such variation can be misleading [45]. Meta-analysis effect estimates were considered significant if the 95 % confidence intervals (CI) excluded the null. Begg's adjusted rank correlation and Egger's regression tests were used to test for possible publication bias on meta-analysis data subsets with ≥10 trials and when heterogeneity was not significant [46]. For these tests, P < 0.05 was considered significant. All meta-analyses were conducted using CMA software.
Meta-regression
Meta-regression was conducted on meta-analysis data subsets with I2 > 25 % and ≥10 trials to explore possible sources of heterogeneity in the effect estimates across trials [47]. To increase the power of these analyses, data were not stratified by intervention type or population subgroup; instead, these two variables were evaluated as predictors of heterogeneity in outcomes across trials. In addition, the following 15 pre-specified variables were evaluated as potential predictors in meta-regression models: publication year (continuous); document type (journal vs. other); study region (North America vs. other); food safety-specific intervention vs. inclusion of other content (e.g. nutrition) (yes vs. no); intervention development informed by a theory of behaviour change (yes vs. no) or formative research (yes vs. no); target population engaged in intervention development, implementation, and/or evaluation (yes vs. no); intervention included a digital/web-based (yes vs. no) or audio-visual (yes vs. no) component; intervention targeted high-risk (yes vs. no) or low socio-economic status (yes vs. no) populations; overall risk-of-bias rating (low vs. unclear/high); whether any outcomes were insufficiently reported to allow for meta-analysis (yes vs.
no); length of participant follow-up (within two weeks post intervention/not reported vs. longer); and intervention dose (>1 vs. only one exposure/not reported). A dose effect of >1 represented interventions with multiple training sessions or lessons, and messaging interventions with more than one medium or exposure type (i.e. multifaceted interventions). High-risk populations referred to infants, the elderly, the immuno-compromised, caregivers of these populations, and pregnant women. Two additional variables were also evaluated in RCT and NRT sub-groups: 1) whether the intervention was compared to a positive control group (e.g. standard training) vs. a negative control; and 2) whether the trial was analyzed using unpaired or paired (change from baseline) data. Given the limited number of trials in each meta-analysis subset, all predictors except publication year were modelled as dichotomous variables. In addition, only univariable meta-regression models were evaluated when the number of trials was 10–19. When the number of trials was ≥20, predictors were initially screened in univariable models and then added in multivariable models using a forward-selection process, up to a maximum of one predictor per 10 trials. Predictors were considered significant if 95 % CIs excluded the null. For each data subgroup, Spearman rank correlations were used to evaluate collinearity between variables prior to conducting meta-regression; if evidence of collinearity was identified (ρ ≥ 0.8), only one of the correlated variables was modelled based on its relevance. Meta-regression was conducted using Stata 13 (StataCorp, College Station, TX).
Quality-of-evidence assessment
Each meta-analysis data subgroup was assessed for its overall quality-of-evidence using a modified version of the Cochrane Collaboration's Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach [26, 48].
Datasets started with 2–4 points to reflect inherent differences in strength of evidence by study design: RCTs started with four points, NRTs with three, and uncontrolled before-and-after studies with two. Points were deducted or added based on the five downgrading and three upgrading criteria described in Table 1. The final GRADE rating corresponded to the remaining number of points: one = very low (the true effect is likely to be substantially different from the measured estimate); two = low (the true effect may be substantially different from the measured estimate); three = moderate (the true effect is likely to be close to the measured estimate, but there is a possibility that it is substantially different); four = high (we have strong confidence that the true effect lies close to that of the measured estimate).
Table 1 Modified GRADE approach for evaluating the quality of evidence of meta-analysis data subgroups
Review flow chart and risk-of-bias results
A flow chart of the scoping and systematic review process is shown in Fig. 1. From 246 articles considered relevant in the scoping review, 77 met the inclusion criteria for this systematic review (Fig. 1). A citation list of these 77 articles is reported in Additional file 6. The 77 articles reported on 79 unique study designs, including 17 RCTs, 12 NRTs, and 50 uncontrolled before-and-after studies. Most studies (82 %, n = 65) were conducted in the United States, compared to 14 % (n = 11) in Europe, 3 % (n = 2) in Australia, and 1 % (n = 1) in Canada. A summary table of the key population, intervention, comparison, and outcome characteristics of each study is shown in Additional file 7. Full descriptive results for the scoping review stages (relevance screening and article characterization) are reported in a separate publication [23].
Scoping and systematic review flow-chart.
Languages excluded during article characterization included: Chinese (n = 11), Korean (8), Portuguese (5), Japanese (5), Italian (2), German (2), Turkish (2), Polish (1), Lithuanian (1), and Hebrew (1). Note: Two of the 77 relevant articles reported more than one study design.
The risk-of-bias ratings are shown stratified by study design in Table 2, with detailed results of the within-study assessments shown in Additional file 8. Many RCTs did not provide sufficient details on their methods of random sequence generation and allocation concealment. Blinding criteria were also unclear for many studies across all designs (Table 2). Some unclear and high risk ratings were noted due to incomplete outcome data and selective reporting (Table 2). Many uncontrolled before-and-after studies (17/50) also did not provide details on the validity and reliability of outcome measurement instruments, leading to an unclear rating for that criterion.
Table 2 Risk-of-bias rating summary for studies investigating the effectiveness of food safety education interventions for consumers
Meta-analysis results
The meta-analysis results for RCTs and NRTs are shown in Table 3. All RCT meta-analyses were significantly heterogeneous except for the effect of messaging materials (instructional video and written messages) on behavioural intentions in adults in two small studies, which showed a positive intervention effect (SMD = 0.36, 95 % CI: 0.02, 0.69; 'moderate' GRADE rating). All other outcomes showed positive median effects across trials (Table 3). The effect of community- and school-based educational training interventions on behaviour outcomes in children and youth received the only 'high' GRADE rating. Other behaviour, knowledge, and attitude outcomes received 'low' and 'very low' GRADE ratings.
For meta-analyses of NRTs, educational training and course interventions had a positive average estimate of effect on attitudes (SMD = 0.26, 95 % CI: 0.10, 0.43; 'moderate' GRADE rating) and behaviours (SMD = 0.37, 95 % CI: 0.08, 0.66; 'low' GRADE rating) in adults. Both categories of interventions showed heterogeneous but positive median effects across trials for other outcomes, with 'low' and 'very low' GRADE ratings (Table 3).
Table 3 Random-effects meta-analysis results of randomized and non-randomized controlled trials
The meta-analysis results for uncontrolled before-and-after studies are shown in Table 4. All analyses were significantly heterogeneous, except for the effect of educational training and course interventions on improving the behaviours of educators of consumers in two small studies (SMD = 0.44, 95 % CI: 0.33, 0.54). All other intervention, population, and outcome combinations showed positive median effects across trials (Table 4); however, due to risk of bias, heterogeneity, and inconsistencies, all meta-analyses of uncontrolled before-and-after studies received a 'very low' GRADE rating. It was not possible to assess publication bias statistically in any meta-analysis subgroup. Forest plots of each meta-analysis are shown in Additional file 9 and the detailed GRADE assessments for each subgroup are shown in Additional file 10.
Table 4 Random-effects meta-analysis results of uncontrolled before-and-after studies
Meta-regression results
Meta-regression was possible for seven data subgroups: behaviour outcomes in RCTs with the SMD measure, and knowledge, behaviour, and attitude outcomes reported in uncontrolled before-and-after studies for both RR and SMD measures. Significant predictors of between-trial variation were identified for three of these models (Table 5). For the RCT-behaviour outcome, studies that delivered more than one training session or provided messaging materials through more than one medium or exposure type (i.e.
multifaceted interventions) found a higher average intervention effect (SMD = 0.68) compared to studies that included only one training session or provided messaging materials through only one medium or exposure (Table 5). For dichotomous knowledge outcomes, uncontrolled before-and-after studies published in sources other than journal articles (i.e. theses and reports) reported an average intervention effect that was 2.01 times higher than that of studies published in journal articles (Table 5). For dichotomous behaviour outcomes, uncontrolled before-and-after studies that reported engaging the target population in the intervention development, implementation, and/or evaluation found an average intervention effect that was 1.47 times higher than that of studies that did not engage their target population (Table 5).
Table 5 Meta-regression results of the impact of selected study-level variables on the meta-analysis estimates
The sensitivity analysis of imputing different correlation values for combining multiple outcomes in a study revealed that the analyses were robust to these values: changing the correlations had a negligible impact on the results (Additional file 11). However, for RCTs and NRTs of continuous behaviour outcomes, and for all continuous outcomes in uncontrolled before-and-after studies, sensitivity analyses revealed that the choice of imputed pre-post correlation in some cases changed the significance of estimates or changed estimates by >20 % (Additional file 12). In these cases, uncertainty in the meta-analysis estimates due to imputation of the pre-post correlation value was accounted for by appropriately downgrading the estimates in the GRADE assessment (Table 1). No consistent trend or impact on average meta-analysis estimates was noted when comparing pre-to-post vs. pre-to-follow-up measurements in studies where both sets of data were available (Additional file 13).
This review used a structured and transparent approach to identify and synthesize available evidence on the effectiveness of food safety education for consumers. We identified 17 RCTs (Additional file 6), which provide the strongest evidence for determining causality and intervention effectiveness, because the randomization process helps to control for unmeasured confounders that could otherwise influence the intervention effect [26, 49, 50]. However, we also decided a priori to include non-randomized designs in this review, including uncontrolled before-and-after studies, to allow a more comprehensive and complete assessment of the available evidence in this area, recognizing that RCTs may not be feasible for many large-scale food safety education interventions [26, 50, 51]. For example, two RCTs of the effectiveness of the Expanded Food and Nutrition Education Program (EFNEP) to improve nutrition and food safety outcomes in low-income youth and adults used a 'delayed intervention' group instead of a traditional control group for this reason, reporting that key program staff and implementers were more likely to participate knowing that both groups would receive the intervention at the conclusion of the study [42, 52]. Even in this case, Townsend et al. (2006) noted that some control groups chose not to comply with their group assignment and still offered the intervention during the study [42], which highlights some of the practical challenges in implementing traditional RCTs in this area. Eleven of the 17 RCTs in this review did not specify their method of randomization, and many RCTs and NRTs did not specify their method of sequence allocation or the measures taken to blind participants, study personnel, and outcome assessors to group allocation status, resulting in several unclear ratings for these risk-of-bias criteria (Table 2).
The first criterion is important to ensure a proper randomization process is used that will balance unmeasured confounding variables across groups [26]. The blinding criteria noted above are important to protect against differential treatment and assessment of outcomes in participants based on possible knowledge of their group assignment, particularly for subjective outcomes such as attitudes and self-reported behaviours [26]. However, we recognize that blinding is challenging and often not feasible to implement in the context of educational interventions [53], and we did not downgrade the overall risk-of-bias rating for study outcomes based solely on unclear ratings for these criteria. For some criteria, high risk-of-bias ratings were noted for RCTs and NRTs, mostly due to incomplete outcome data and selective reporting resulting from a large and imbalanced proportion of drop-outs in one of the intervention groups [54, 55], exclusion of some results from analysis [56, 57], omission of quantitative results for some non-significant findings [40, 54, 57], and in one case because the similarity of baseline characteristics between intervention groups could not be determined [58]. Future experimental research investigating the effectiveness of food safety education interventions should aim to conduct and report methods and findings in accordance with appropriate guidelines for RCTs (CONSORT) and NRTs (TREND) [59, 60]. An extension to the CONSORT guidelines is also planned for social and psychological interventions [53]. Two large, well-conducted RCTs (high GRADE rating) found that food safety education training and course interventions are effective at improving behaviour outcomes in children and youth (Table 3). Specifically, both Townsend et al. (2006) and Quick et al.
(2013) reported that community-based EFNEP workshops and a web-based video game implemented in a classroom setting increased food safety behaviours in low-income youth and middle school children, respectively [42, 61]. Although comparatively less research was identified specifically targeting children and youth compared to adults, the evidence suggests that school and after-school programs could be an important intervention point to enhance the food safety behaviours of consumers at a young age. Two small RCTs (moderate GRADE rating) found that a dialogical (i.e. engaging) video message and an instructional written and graphical message about Salmonella improved food safety behavioural intentions in adults [62, 63], indicating that food safety messaging interventions may be effective for these outcomes. Behaviour outcomes provide a more direct measure of intervention effectiveness compared to knowledge and attitudes; however, most of the studies analyzed in this review measured self-reported behaviours, which can be subject to social desirability bias and can be overestimated compared to observed practices [64, 65]. Nevertheless, several researchers have reported consistent agreement between self-reported and observed behaviours, and between behavioural intentions and observed behaviours, in consumers [37, 66, 67]. The agreement between these measures likely depends at least partially on the validity and reliability of the measurement instrument used. Given that self-reported behaviour outcomes are more feasible to measure in practice, future primary research collecting these outcomes should use measurement tools that have been appropriately assessed for their psychometric properties and have good agreement with observed behaviours to ensure validity and reliability of the findings. A moderate GRADE rating was determined for the meta-analysis of two NRT studies on the impact of educational training and course interventions on attitude outcomes in adults. 
Both studies were university-based and investigated the impacts of social media training, distance education, and a traditional classroom lecture on food safety attitude scores in university students and staff [68, 69]. Changes in attitudes are important precursors to behaviour change, as they help to shape an individual's views of the importance of and need for change and influence their behavioural intentions [32, 33]. Although RCTs and NRTs captured in this review reported beneficial median intervention effects for other intervention-population-outcome combinations, the confidence in these results was lower, and future studies are likely to change the magnitude and possibly the direction of the conclusions. Fifty of the 79 total relevant studies in this review (63 %) used an uncontrolled before-and-after study design (i.e. pre-post testing in the same population with no separate control group). Although these studies on average found consistent positive effects for all intervention-population-outcome combinations, results were very heterogeneous. In addition, all outcomes reported in these studies received a very low GRADE rating, and many received an unclear overall risk-of-bias rating due to limited reporting of methodological details for one or more criteria. A major limitation of these studies is that the absence of a separate control group restricts our ability to draw causal inferences about intervention effectiveness, given the potential for secular changes and other external variables to influence the results between pre- and post-tests [49, 50]. Therefore, the results of these studies should not be used directly to inform decision-making on food safety education program development or implementation; instead, the primary utility of these studies lies in their ability to show 'proof of concept' for an intervention effect to inform more robust experimental designs [26, 49, 50].
As noted above, proof of concept was demonstrated for a wide variety of education interventions in multiple consumer populations, including educators, for all investigated outcomes, indicating that future research should build on these interventions, ideally through well-conducted RCTs. A significant intervention dose effect was identified in meta-regression for behaviour outcomes in RCTs. This result supports the conclusion that food safety training interventions with more than one session or lesson, and media campaigns and messaging interventions that provide materials through more than one medium or exposure type (i.e. multifaceted interventions), can enhance consumer safe-food handling behaviour change. This finding corresponds with those of some individual studies captured within this review. For example, in an evaluation of a social media-based intervention in college students, Mayer et al. (2012) reported that exposure to the social media component (Facebook website) for at least 15 min/week, particularly when combined with a traditional course lecture, resulted in improved food safety knowledge, attitude, and behaviour outcomes [69]. In addition, several other studies reported that food safety outcomes improved in consumers with a greater number of training sessions administered [70, 71] or with exposure to multiple intervention messaging materials [72–74], although in some cases a threshold was reached beyond which additional exposures (e.g. lessons) did not result in further improvements to the measured outcomes. Future RCTs on the effectiveness of food safety interventions for consumers should further investigate the potential impact of dose on intervention effectiveness. Significant predictors of between-study heterogeneity were identified in two of the meta-regression models of outcomes in uncontrolled before-and-after studies. Studies published in a source other than a peer-reviewed journal (i.e.
theses and reports) were more likely to report a beneficial intervention effect for dichotomous knowledge outcomes. This finding may reflect publication bias: typically, authors are more likely to publish positive and significant results in peer-reviewed journal articles, though in this case it could also reflect that findings were not subsequently published in a peer-reviewed journal because of a perceived lack of importance of the results, or a lack of ability or desire to publish [75]. This result highlights the importance of including gray literature sources such as theses and reports in systematic reviews and meta-analyses to ensure a more complete assessment of the available evidence. The other significant meta-regression finding indicated that studies that engaged their target population in the development, implementation, and/or evaluation of the intervention were more likely to report a beneficial intervention effect for dichotomous behaviour outcomes. This result corresponds with a recent systematic review that found that interventions using community engagement approaches positively impacted health behaviours and outcomes in a variety of different public health contexts [76]. Moreover, previous research has shown that consumers prefer food safety education interventions that are interactive and engaging [77, 69]. Food safety behaviours are often subdivided into specific behavioural constructs such as personal hygiene, adequate cooking of foods, avoiding cross-contamination, keeping foods at safe temperatures, and avoiding food from unsafe sources [78]. However, our ability to investigate these concepts in detail was limited by the availability and reporting of primary research in the various data subsets, as many studies only reported overall scores or scales.
In addition, for similar reasons, attitudes were not further subdivided into key constructs from relevant behaviour-change theories such as the Theory of Planned Behaviour, the Stages of Change Theory (Transtheoretical Model), and the Health Belief Model [32, 33, 79]. For example, constructs such as self-efficacy, perceived behavioural control, risk perceptions (e.g. perceived susceptibility to and severity of illness), and subjective norms have all been associated more specifically with intended and reported food safety behaviours [67, 80, 81]. Future experimental research should investigate and report further on these theoretical constructs and their relationship with specific food safety behaviours. Most of the meta-analysis data subgroups contained significant heterogeneity that could not be explained by the variables examined in meta-regression models. Because of the limited number of studies within each subgroup, our power to identify potential predictors of between-trial heterogeneity in meta-regression was limited. Several additional population, intervention, outcome, and study-design characteristics could have influenced this heterogeneity but could not be investigated in this analysis. For example, the wide variety of outcome measurement instruments and scales used across studies could have contributed to this variation. For this reason, we used the standardized mean difference (SMD) outcome measure in meta-analyses of continuous data, although this measure does not allow us to determine whether heterogeneity between trials is a true reflection of different participant outcomes or due to differences in how the outcomes were measured [26, 38]. Another limitation of this review is that correlation values were not reported for most studies, and we had to impute plausible values from other comparable studies to allow for meta-analysis. Sensitivity analyses indicated that this was a potential concern for some outcomes of studies that used an imputed value of the pre-post correlation. 
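The pre-post correlation enters the analysis through the standard deviation of change scores. A minimal sketch of the standard imputation formula (as given, for example, in the Cochrane Handbook; the numbers below are invented for illustration and are not from this review):

```python
import math

def sd_change(sd_pre, sd_post, r):
    # SD of pre-post change scores, given the pre and post SDs and an
    # assumed (imputed) pre-post correlation r.
    return math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)

def smd_change(mean_pre, mean_post, sd_pre, sd_post, r):
    # Standardized mean difference of the change, using the imputed SD.
    return (mean_post - mean_pre) / sd_change(sd_pre, sd_post, r)

# Sensitivity of the effect estimate to the imputed correlation:
for r in (0.3, 0.5, 0.7):
    print(f"r = {r}: SMD = {smd_change(3.0, 3.6, 1.0, 1.1, r):.3f}")
```

Because a larger assumed correlation shrinks the change SD and therefore inflates the SMD, sensitivity analyses over a range of imputed r values, as performed in this review, are the appropriate safeguard.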
Based on our findings, correlation values are often not reported in primary research articles in this area, but with increasing opportunities to publish supplementary materials online, we encourage primary research authors to make these data available in future publications. Finally, it is possible that we missed some relevant studies if they were not captured by our search algorithm; however, we implemented a comprehensive verification strategy to minimize this potential bias. The effectiveness of food safety education interventions to improve consumer knowledge, attitude, and behaviour outcomes was evaluated in multiple experimental study designs conducted in developed countries. We identified moderate to high confidence in intervention effectiveness for some outcomes in RCTs and NRTs, including: community- and school-based educational training on behaviours of children and youth; video and written instructional messaging on behavioural intentions in adults; and university-based education on attitudes of students and staff. While most RCTs and NRTs indicated a positive intervention effect for other outcomes, risk-of-bias and reporting limitations and the presence of significant heterogeneity between studies resulted in low and very low confidence in these findings. Meta-regression results showed a positive dose-response effect on behaviour outcomes in RCTs and a positive impact of engaging the target population in the intervention on behaviour outcomes in uncontrolled before-and-after studies, warranting further investigation. Many different education interventions were found to be effective in uncontrolled before-and-after studies at improving consumer food safety outcomes in a variety of contexts; future research should build upon this knowledge with well-conducted and reported RCTs. 
Future research is also needed to investigate further the factors contributing to the heterogeneity in intervention effectiveness across studies. Scallan E, Griffin PM, Angulo FJ, Tauxe RV, Hoekstra RM. Foodborne illness acquired in the United States—unspecified agents. Emerg Infect Dis. 2011;17:16–22. Scallan E, Hoekstra RM, Angulo FJ, Tauxe RV, Widdowson MA, Roy SL, et al. Foodborne illness acquired in the United States—major pathogens. Emerg Infect Dis. 2011;17:7–15. Hoffmann S, Batz MB, Morris Jr JG. Annual cost of illness and quality-adjusted life year losses in the United States due to 14 foodborne pathogens. J Food Prot. 2012;75:1292–302. Thomas MK, Murray R, Flockhart L, Pintar K, Pollari F, Fazil A, et al. Estimates of the burden of foodborne illness in Canada for 30 specified pathogens and unspecified agents, circa 2006. Foodborne Pathog Dis. 2013;10:639–48. Thomas MK, Majowicz SE, Pollari F, Sockett PN. Burden of acute gastrointestinal illness in Canada, 1999–2007: interim summary of NSAGI activities. Can Commun Dis Rep. 2008;34:8–15. Vrbova L, Johnson K, Whitfield Y, Middleton D. A descriptive study of reportable gastrointestinal illnesses in Ontario, Canada, from 2007 to 2009. BMC Public Health. 2012;12:970. Keegan VA, Majowicz SE, Pearl DL, Marshall BJ, Sittler N, Knowles L, et al. Epidemiology of enteric disease in C-EnterNet's pilot site - Waterloo region, Ontario, 1990 to 2004. Can J Infect Dis Med Microbiol. 2009;20:79–87. Redmond EC, Griffith CJ. Consumer food handling in the home: a review of food safety studies. J Food Prot. 2003;66:130–61. European Food Safety Authority, European Centre for Disease Prevention and Control. The European Union summary report on trends and sources of zoonoses, zoonotic agents and food-borne outbreaks in 2013. EFSA J. 2015;13:3991. Redmond EC, Griffith CJ. Consumer perceptions of food safety risk, control and responsibility. Appetite. 2004;43:309–13. 
Nesbitt A, Thomas MK, Marshall B, Snedeker K, Meleta K, Watson B, et al. Baseline for consumer food safety knowledge and behaviour in Canada. Food Control. 2014;38:157–73. Patil SR, Cates S, Morales R. Consumer food safety knowledge, practices, and demographic differences: findings from a meta-analysis. J Food Prot. 2005;68:1884–94. Fein SB, Lando AM, Levy AS, Teisl MF, Noblet C. Trends in U.S. consumers' safe handling and consumption of food and their risk perceptions, 1988 through 2010. J Food Prot. 2011;74:1513–23. Haines RJ. Report of the Meat Regulatory and Inspection Review. Farm to Fork: A Strategy for Meat Safety in Ontario. Toronto: Queen's Printer for Ontario; 2004. http://www.attorneygeneral.jus.gov.on.ca/english/about/pubs/meatinspectionreport. Government of Canada: Report of the independent investigator into the 2008 listeriosis outbreak. http://epe.lac-bac.gc.ca/100/206/301/aafc-aac/listeriosis_review/2012-06-28/www.listeriosis-listeriose.investigation-enquete.gc.ca/lirs_rpt_e.pdf. Munro D, Le Vallée J-C, Stuckey J. Improving Food Safety in Canada: Toward a More Risk-Responsive System. Ottawa: The Conference Board of Canada; 2012. http://www.conferenceboard.ca/e-library/abstract.aspx?did=4671. United States Department of Agriculture: Strategic Performance Working Group: Salmonella action plan. http://www.fsis.usda.gov/wps/wcm/connect/aae911af-f918-4fe1-bc42-7b957b2e942a/SAP-120413.pdf?MOD=AJPERES. Byrd-Bredbenner C, Berning J, Martin-Biggers J, Quick V. Food safety in home kitchens: a synthesis of the literature. Int J Environ Res Public Health. 2013;10:4060–85. Milton A, Mullan B. Consumer food safety education for the domestic environment: A systematic review. Br Food J. 2010;112:1003–22. Jacob C, Mathiasen L, Powell D. Designing effective messages for microbial food safety hazards. Food Control. 2010;21:1–6. Campbell ME, Gardner CE, Dwyer JJ, Isaacs SM, Krueger PD, Ying JY. 
Effectiveness of public health interventions in food safety: A systematic review. Can J Public Health. 1998;89:197–202. Mann V, DeWolfe J, Hart R, Hollands H, LaFrance R, Lee M, et al. The effectiveness of food safety interventions. Hamilton: Effective Public Health Practice Project; 2001. http://old.hamilton.ca/phcs/ephpp/Research/Full-Reviews/FoodSafetyReview.pdf Sivaramalingam B, Young I, Pham MT, Waddell L, Greig J, Mascarenhas M, et al. Scoping review of research on the effectiveness of food-safety education interventions directed at consumers. Foodborne Pathog Dis. 2015;12:561–70. Moher D, Liberati A, Tetzlaff J, Altman DG, Altman D, Antes G, et al. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009;6, e1000097. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration. 2011. www.cochrane-handbook.org. Keown K, Van Eerd D, Irvin E. Stakeholder engagement opportunities in systematic reviews: Knowledge transfer for policy and practice. J Contin Educ Health Prof. 2008;28:67–72. Soon JM, Baines R, Seaman P. Meta-analysis of food safety training on hand hygiene knowledge and attitudes among food handlers. J Food Prot. 2012;75:793–804. Cochrane Effective Practice and Organisation of Care Group: Suggested risk of bias criteria for EPOC reviews. http://epoc.cochrane.org/sites/epoc.cochrane.org/files/uploads/14%20Suggested%20risk%20of%20bias%20criteria%20for%20EPOC%20reviews%202013%2008%2012_0.pdf Effective Public Health Practice Project: Quality assessment tool for quantitative studies. http://www.ephpp.ca/tools.html Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. Chichester UK: John Wiley & Sons, Ltd.; 2009. Prochaska JO, Velicer WF. The transtheoretical model of health behavior change. 
Am J Health Promot. 1997;12:38–48. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50:179–211. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7:177–88. Lathyris DN, Trikalinos TA, Ioannidis JP. Evidence from crossover trials: empirical evaluation and comparison against parallel arm trials. Int J Epidemiol. 2007;36:422–30. Medeiros LC, Hillers VN, Chen G, Bergmann V, Kendall P, Schroeder M. Design and development of food safety knowledge and attitude scales for consumer food safety education. J Am Diet Assoc. 2004;104:1671–7. Kendall PA, Elsbernd A, Sinclair K, Schroeder M, Chen G, Bergmann V, et al. Observation versus self-report: validation of a consumer food behavior questionnaire. J Food Prot. 2004;67:2578–86. Abrams KR, Gillies CL, Lambert PC. Meta-analysis of heterogeneously reported trials assessing change from baseline. Stat Med. 2005;24:3823–44. Zou GY. One relative risk versus two odds ratios: implications for meta-analyses involving paired and unpaired binary data. Clin Trials. 2007;4:25–31. Fraser AM. An evaluation of safe food handling knowledge, practices and perceptions of Michigan child care providers. PhD thesis. Michigan State University, Department of Food Science and Human Nutrition. 1995. Byrd-Bredbenner C, Wheatley V, Schaffner D, Bruhn C, Blalock L, Maurer J. Development of food safety psychosocial questionnaires for young adults. J Food Sci Educ. 2007;6:30–7. Townsend MS, Johns M, Shilts MK, Farfan-Ramirez L. Evaluation of a USDA nutrition education program for low-income youth. J Nutr Educ Behav. 2006;38:30–41. Peters JL, Mengersen KL. Meta-analysis of repeated measures study designs. J Eval Clin Pract. 2008;14:941–50. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21:1539–58. 
Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343:d4002. Thompson SG, Higgins JPT. How should meta-regression analyses be undertaken and interpreted? Stat Med. 2002;21:1559–73. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction - GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64:383–94. Bhattacharyya OK, Estey EA, Zwarenstein M. Methodologies to evaluate the effectiveness of knowledge translation interventions: a primer for researchers and health care managers. J Clin Epidemiol. 2011;64:32–40. Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56:119–27. Flay B, Biglan A, Boruch R, Castro F, Gottfredson D, Kellam S, et al. Standards of evidence: criteria for efficacy, effectiveness and dissemination. Prev Sci. 2005;6:151–75. Dollahite JS, Pijai EI, Scott-Pierce M, Parker C, Trochim W. A randomized controlled trial of a community-based nutrition education program for low-income parents. J Nutr Educ Behav. 2014;46:102–9. Montgomery P, Mayo-Wilson E, Hopewell S, Macdonald G, Moher D, Grant S. Developing a reporting guideline for social and psychological intervention trials. Am J Public Health. 2013;103:1741–6. Hovis A, Harris KK. O27 A WIC internet class versus a traditional WIC class: lessons in food safety education and evaluation [abstract]. J Nutr Educ Behav. 2007;39:S101. Fajardo-Lira C, Heiss C. Comparing the effectiveness of a supplemental computer-based food safety tutorial to traditional education in an introductory food science course. J Food Sci Educ. 2006;5:31–3. Kosa KM, Cates SC, Godwin SL, Ball M, Harrison RE. Effectiveness of educational interventions to improve food safety practices among older adults. J Nutr Gerontol Geriatr. 
2011;30:369–83. Ehiri JE, Morris GP, McEwen J. Evaluation of a food hygiene training course in Scotland. Food Control. 1997;8:137–47. Nauta MJ, Fischer ARH, Van Asselt ED, De Jong AEI, Frewer LJ, De Jonge R. Food safety in the domestic environment: the effect of consumer risk information on human disease risks. Risk Anal. 2008;28:179–92. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152:726–32. Des Jarlais DC, Lyles C, Crepaz N, TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94:361–6. Quick V, Corda KW, Chamberlin B, Schaffner DW, Byrd-Bredbenner C. Ninja kitchen to the rescue. Br Food J. 2013;115:686–99. Engel DA. Applying dialogical design methods to video: enhancing expert food safety communication. PhD thesis. Cornell University. 2003. Trifiletti E, Crovato S, Capozza D, Visintin EP, Ravarotto L. Evaluating the effects of a message on attitude and intention to eat raw meat: salmonellosis prevention. J Food Prot. 2012;75:394–9. Dharod JM, Perez-Escamilla R, Paciello S, Bermudez-Millan A, Venkitanarayanan K, Damio G. Comparison between self-reported and observed food handling behaviors among Latinas. J Food Prot. 2007;70:1927–32. DeDonder S, Jacob CJ, Surgeoner BV, Chapman B, Phebus R, Powell DA. Self-reported and observed behavior of primary meal preparers and adolescents during preparation of frozen, uncooked, breaded chicken products. Br Food J. 2009;111:915–29. Abbot JM, Byrd-Bredbenner C, Schaffner D, Bruhn CM, Blalock L. Comparison of food safety cognitions and self-reported food-handling behaviors with observed food safety behaviors of young adults. Eur J Clin Nutr. 2009;63:572–9. Milton AC, Mullan BA. An application of the theory of planned behavior – a randomized controlled food safety pilot intervention for young adults. 
Health Psychol. 2012;31:250–9. Unusan N. E-mail delivery of hygiene education to university personnel. Nutr Food Sci. 2007;37:37–41. Mayer AB, Harrison JA. Safe eats: an evaluation of the use of social media for food safety education. J Food Prot. 2012;75:1453–63. Nierman LG. A longitudinal study of the retention of foods and nutrition knowledge and practice of participants from the Michigan Expanded Food and Nutrition Education Program. PhD thesis. Michigan State University: Department of Adult and Continuing Education; 1986. Cragun EC. The number of lessons needed to maximize behavior change among Community Nutrition Education Program (CNEP) participants. MSc thesis. Oklahoma State University, Graduate College; 2006. Dharod JM, Perez-Escamilla R, Bermudez-Millan A, Segura-Perez S, Damio G. Influence of the Fight BAC! food safety campaign on an urban Latino population in Connecticut. J Nutr Educ Behav. 2004;36:128–32. Lynch RA, Dale Steen M, Pritchard TJ, Buzzell PR, Pintauro SJ. Delivering food safety education to middle school students using a web-based, interactive, multimedia, computer program. J Food Sci Educ. 2008;7:35–42. Redmond EC, Griffith CJ. A pilot study to evaluate the effectiveness of a social marketing-based consumer food safety initiative using observation. Br Food J. 2006;108:753–70. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One. 2013;8:e66844. O'Mara-Eves A, Brunton G, McDaid D, Oliver S, Kavanagh J, Jamal F, et al. Community engagement to reduce inequalities in health: a systematic review, meta-analysis and economic analysis. Public Health Res. 2013;1:1–525. Byrd-Bredbenner C, Abbot JM, Quick V. Food safety knowledge and beliefs of middle school children: implications for food safety educators. J Food Sci Educ. 2010;9:19–30. Medeiros L, Hillers V, Kendall P, Mason A. 
Evaluation of food safety education for consumers. J Nutr Educ. 2001;33 Suppl 1:S27–34. Glanz K, Rimer BK, Viswanath K. Health Behavior and Health Education: Theory, Research, and Practice. 4th ed. San Francisco: Jossey-Bass; 2008. Shapiro MA, Porticella N, Jiang LC, Gravani RB. Predicting intentions to adopt safe home food handling practices, applying the theory of planned behavior. Appetite. 2011;56:96–103. Takeuchi MT, Edlefsen M, McCurdy SM, Hillers VN. Educational intervention enhances consumers' readiness to adopt food thermometer use when cooking small cuts of meat: an application of the transtheoretical model. J Food Prot. 2005;68:1874–83. We thank Judy Inglis and Janet Harris for input on the search strategy; Carl Uhland, Lei Nogueira Borden, and Malcolm Weir for assistance with the scoping review stages (relevance screening and article characterization); and the Public Health Agency of Canada library staff for assistance obtaining articles. We also thank the members of the expert advisory committee for their valued input on this review: Ken Diplock, Daniel Fong, Jessica Morris, Dr. Mike Cassidy, Barbara Marshall, and Andrea Nesbitt. This study was funded by the Laboratory for Foodborne Zoonoses, Public Health Agency of Canada. Laboratory for Foodborne Zoonoses, Public Health Agency of Canada, 160 Research Lane, Suite 206, Guelph, ON, N1G 5B2, Canada Ian Young, Lisa Waddell, Shannon Harding, Judy Greig, Mariola Mascarenhas, Bhairavi Sivaramalingam & Mai T. Pham Department of Population Medicine, University of Guelph, 50 Stone Road, Guelph, ON, N1G 2W1, Canada Lisa Waddell, Bhairavi Sivaramalingam, Mai T. Pham & Andrew Papadopoulos Lisa Waddell Shannon Harding Judy Greig Mariola Mascarenhas Bhairavi Sivaramalingam Mai T. Pham Andrew Papadopoulos Correspondence to Andrew Papadopoulos. All authors contributed to the conception and design of the study and read and approved the final manuscript. IY and BS implemented the search strategy. 
IY, LW, SH, JG, MM, and BS contributed to reviewing for the scoping review stages. IY, LW, and SH designed and pre-tested the risk-of-bias and data extraction forms. IY and SH conducted risk-of-bias assessment and data extraction. IY led the project management, implementation, analysis, and write-up. PRISMA checklist. (DOCX 30 kb) Full details of the search strategy. (DOCX 33 kb) A copy of all review forms. (DOCX 63 kb) Correlation values from previous studies. (XLSX 11 kb) List of studies with more than two intervention and control groups. (DOCX 25 kb) Citation list of all 77 relevant articles. (XLS 44 kb) Summary table of PICO characteristics for each relevant article. (XLS 65 kb) Detailed within-study risk-of-bias assessment results. (XLS 92 kb) Forest plots for each meta-analysis subgroup. (DOCX 527 kb) Detailed GRADE assessment results. (XLS 31 kb) Sensitivity analysis of imputing different correlations for combining multiple outcome measures within a study. (XLS 29 kb) Sensitivity analysis of imputing different pre-post correlations for paired meta-analyses. (XLS 27 kb) Sensitivity analysis of comparing meta-analysis estimates for pre-post vs. pre-follow-up measurements. (XLS 26 kb) Young, I., Waddell, L., Harding, S. et al. A systematic review and meta-analysis of the effectiveness of food safety education interventions for consumers in developed countries. BMC Public Health 15, 822 (2015). https://doi.org/10.1186/s12889-015-2171-x
Complex systems with impulsive effects and logical dynamics: A brief overview. Bangxin Jiang 1, Bowen Li 2 and Jianquan Lu 1. 1 School of Mathematics, Southeast University, Nanjing 210096, China. 2 School of Information Science and Engineering, Southeast University, Nanjing 210096, China. * Corresponding author: Jianquan Lu. Received September 2019. Revised January 2020. Published May 2020. In the past decades, complex systems with impulsive effects and logical dynamics have received much attention in both the natural and social sciences. This historical survey briefly introduces relevant studies on impulsive differential systems (IDSs) and logical networks (LNs). To begin with, we investigate five aspects of IDSs: fundamental theory, Lyapunov stability, input-to-state stability, hybrid impulses, and delay-dependent impulses. Next, we compactly summarize the research status of several problems for LNs, including controllability, stability and stabilization, and observability. Moreover, some significant applications of the proposed results are illustrated. Finally, based on this overview, we discuss some directions for future work on complex systems with impulsive effects and logical dynamics. Keywords: Complex systems, impulsive effect, logical network, stability analysis, controllability, observability. Mathematics Subject Classification: Primary: 34A37, 06E30; Secondary: 93D20, 93B05, 93B07. Citation: Bangxin Jiang, Bowen Li, Jianquan Lu. Complex systems with impulsive effects and logical dynamics: A brief overview. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020369
Li, Stable degree analysis for strategy profiles of evolutionary networked games, Science China Information Sciences, 59 (2016), 052204. doi: 10.1007/s11432-015-5376-9. Google Scholar H. Li, Y. Wang and P. Guo, Output reachability analysis and output regulation control design of Boolean control networks, Science China Information Sciences, 60 (2017), 022202. doi: 10.1007/s11432-015-0611-4. Google Scholar R. Li, M. Yang and T. Chu, State feedback stabilization for Boolean control networks, IEEE Transactions on Automatic Control, 58 (2013), 1853-1857. doi: 10.1109/TAC.2013.2238092. Google Scholar R. Li, M. Yang and T. Chu, State feedback stabilization for probabilistic Boolean networks, Automatica, 50 (2014), 1272-1278. doi: 10.1016/j.automatica.2014.02.034. Google Scholar S. Li, X. Song and A. Li, Strict practical stability of nonlinear impulsive systems by employing two Lyapunov-like functions, Nonlinear Analysis Series B: Real World Applications, 9 (2008), 2262-2269. doi: 10.1016/j.nonrwa.2007.08.003. Google Scholar X. Li, J. Lu, J. Qiu, X. Chen, X. Li and F. Alsaadi, Set stability for switched Boolean networks with open-loop and closed-loop switching signals, Science China Information Sciences, 61 (2018), 092207. Google Scholar X. Li, Further analysis on uniform stability of impulsive infinite delay differential equations, Applied Mathematics Letters, 25 (2012), 133-137. doi: 10.1016/j.aml.2011.08.001. Google Scholar X. Li and M. Bohner, An impulsive delay differential inequality and applications, Computers & Mathematics with Applications, 64 (2012), 1875-1881. doi: 10.1016/j.camwa.2012.03.013. Google Scholar X. Li, M. Bohner and C. K. Wang, Impulsive differential equations: Periodic solutions and applications, Automatica, 52 (2015), 173-178. doi: 10.1016/j.automatica.2014.11.009. Google Scholar X. Li, J. Cao and D. W. C. Ho, Impulsive control of nonlinear systems with time-varying delay and applications, IEEE Transactions on Cybernetics, 50 (2020), 2661-2673. 
doi: 10.1109/TCYB.2019.2896340. Google Scholar X. Li and F. Deng, Razumikhin method for impulsive functional differential equations of neutral type, Chaos Solitons and Fractals, 101 (2017), 41-49. doi: 10.1016/j.chaos.2017.05.018. Google Scholar X. Li, D. W. C. Ho and J. Cao, Finite-time stability and settling-time estimation of nonlinear impulsive systems, Automatica, 99 (2019), 361-368. doi: 10.1016/j.automatica.2018.10.024. Google Scholar X. Li, P. Li and Q. G. Wang, Input/output-to-state stability of impulsive switched systems, Systems & Control Letters, 116 (2018), 1-7. doi: 10.1016/j.sysconle.2018.04.001. Google Scholar X. Li, D. O'Regan and H. Akca, Global exponential stabilization of impulsive neural networks with unbounded continuously distributed delays, IMA Journal of Applied Mathematics, 80 (2015), 85-99. doi: 10.1093/imamat/hxt027. Google Scholar X. Li, R. Rakkiyappan and C. Pradeep, Robust $\mu-$stability analysis of Markovian switching uncertain stochastic genetic regulatory networks with unbounded time-varying delays, Communications in Nonlinear Science and Numerical Simulation, 17 (2012), 3894-3905. doi: 10.1016/j.cnsns.2012.02.008. Google Scholar X. Li, J. Shen and R. Rakkiyappan, Persistent impulsive effects on stability of functional differential equations with finite or infinite delay, Applied Mathematics and Computation, 329 (2018), 14-22. doi: 10.1016/j.amc.2018.01.036. Google Scholar X. Li and S. Song, Stabilization of delay systems: Delay-dependent impulsive control, IEEE Transactions on Automatic Control, 62 (2017), 406-411. doi: 10.1109/TAC.2016.2530041. Google Scholar X. Li, S. Song and J. Wu, Exponential stability of nonlinear systems with delayed impulses and applications, IEEE Transactions on Automatic Control, 64 (2019), 4024-4034. doi: 10.1109/TAC.2019.2905271. Google Scholar X. Li, X. Yang and T. Huang, Persistence of delayed cooperative models: Impulsive control method, Applied Mathematics and Computation, 342 (2019), 130-146. 
doi: 10.1016/j.amc.2018.09.003. Google Scholar X. Li, X. Zhang and S. Song, Effect of delayed impulses on input-to-state stability of nonlinear systems, Automatica, 76 (2017), 378-382. doi: 10.1016/j.automatica.2016.08.009. Google Scholar Y. Li, B. Li, Y. Liu and J. Lu, Set stability and stabilization of switched Boolean networks with state-based switching, IEEE Access, 6 (2018), 35624-35630. doi: 10.1109/ACCESS.2018.2851391. Google Scholar Y. Li, R. Liu, J. Lou, J. Lu, Z. Wang and Y. Liu, Output tracking of Boolean control networks driven by constant reference signal, IEEE Access, 7 (2019), 112572-112577. doi: 10.1109/ACCESS.2019.2934740. Google Scholar Y. Li, H. Li, X. Xu and Y. Li, Semi-tensor product approach to minimal-agent consensus control of networked evolutionary games, IET Control Theory & Applications, 12 (2018), 2269-2275. doi: 10.1049/iet-cta.2018.5230. Google Scholar Y. Li, Impulsive synchronization of stochastic neural networks via controlling partial states, Neural Processing Letters, 46 (2017), 59-69. doi: 10.1007/s11063-016-9568-0. Google Scholar Y. Li, J. Lou, Z. Wang and F. E. Alsaadi, Synchronization of dynamical networks with nonlinearly coupling function under hybrid pinning impulsive controllers, Journal of the Franklin Institute, 355 (2018), 6520-6530. doi: 10.1016/j.jfranklin.2018.06.021. Google Scholar Z. Li, W. Zhang, G. He and J.-A. Fang, Synchronisation of discrete-time complex networks with delayed heterogeneous impulses, Applications, 9 (2015), 2648-2656. doi: 10.1049/iet-cta.2014.1281. Google Scholar J. Liang, H. Chen and J. Lam, An improved criterion for controllability of boolean control networks, IEEE Transactions on Automatic Control, 62 (2017), 6012-6018. doi: 10.1109/TAC.2017.2702008. Google Scholar L. Lin, J. Cao and L. Rutkowski, Robust event-triggered control invariance of probabilistic Boolean control networks, IEEE Transactions on Neural Networks and Learning Systems, 31 (2020), 1060-1065. 
doi: 10.1109/TNNLS.2019.2917753. Google Scholar H. Liu, Y. Liu, Y. Li, Z. Wang and F. Alsaadi, Observability of Boolean networks via STP and graph methods, IET Control Theory & Applications, 13 (2019), 1031-1037. doi: 10.1049/iet-cta.2018.5279. Google Scholar J. Liu and Z. Zhao, Multiple solutions for impulsive problems with non-autonomous perturbations $\star$, Applied Mathematics Letters, 64 (2017), 143-149. doi: 10.1016/j.aml.2016.08.020. Google Scholar J. Liu and X. Li, Impulsive stabilization of high-order nonlinear retarded differential equations, Applications of Mathematics, 58 (2013), 347-367. doi: 10.1007/s10492-013-0017-3. Google Scholar R. Liu, J. Lu, Y. Liu, J. Cao and Z. Wu, Delayed feedback control for stabilization of Boolean control networks with state delay, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 3283-3288. Google Scholar R. Liu, J. Lu, W. Zheng and J. Kurths, Output feedback control for set stabilization of Boolean control networks, IEEE Transactions on Neural Networks and Learning Systems Google Scholar X. Liu and J. Zhu, On potential equations of finite games, Automatica, 68 (2016), 245-253. doi: 10.1016/j.automatica.2016.01.074. Google Scholar X. Liu, Stability results for impulsive differential systems with applications to population growth models, Dynamics and Stability of Systems, 9 (1994), 163-174. doi: 10.1080/02681119408806175. Google Scholar X. Liu and G. Ballinger, Existence and continuability of solutions for differential equations with delays and state-dependent impulses, Nonlinear Analysis, 51 (2002), 633-647. doi: 10.1016/S0362-546X(01)00847-1. Google Scholar X. Liu and K. Zhang, Input-to-state stability of time-delay systems with delay-dependent impulses, IEEE Transactions on Automatic Control, 65 (2012), 1676-1682. doi: 10.1109/TAC.2019.2930239. Google Scholar Y. Liu, J. Cao, B. Li and J. 
Lu, Normalization and solvability of dynamic-algebraic Boolean networks, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 3301-3306. Google Scholar Y. Liu, J. Cao, L. Sun and J. Lu, Sampled-data state feedback stabilization of Boolean control networks, Neural Computation, 28 (2016), 778-799. doi: 10.1162/NECO_a_00819. Google Scholar Y. Liu, H. Chen, J. Lu and B. Wu, Controllability of probabilistic Boolean control networks based on transition probability matrices, Automatica, 52 (2015), 340-345. doi: 10.1016/j.automatica.2014.12.018. Google Scholar Y. Liu, B. Li, H. Chen and J. Cao, Function perturbations on singular Boolean networks, Automatica, 84 (2017), 36-42. doi: 10.1016/j.automatica.2017.06.035. Google Scholar Y. Liu, B. Li and J. Lou, Disturbance decoupling of singular Boolean control networks, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13 (2016), 1194-1200. doi: 10.1109/TCBB.2015.2509969. Google Scholar Y. Liu, B. Li, J. Lu and J. Cao, Pinning control for the disturbance decoupling problem of Boolean networks, IEEE Transactions on Automatic Control, 62 (2017), 6595-6601. doi: 10.1109/TAC.2017.2715181. Google Scholar Y. Liu, L. Sun, J. Lu and J. Liang, Feedback controller design for the synchronization of Boolean control networks, IEEE Transactions on Neural Networks and Learning Systems, 27 (2016), 1991-1996. doi: 10.1109/TNNLS.2015.2461012. Google Scholar Z. Liu, Y. Wang and H. Li, New approach to derivative calculation of multi-valued logical functions with application to fault detection of digital circuits, IET ControlTheory & Applications, 8 (2014), 554-560. doi: 10.1049/iet-cta.2013.0104. Google Scholar J. Lu, H. Li, Y. Liu and F. Li, Survey on semi-tensor product method with its applications in logical networks and other finite-valued systems, IET Control Theory & Applications, 11 (2017), 2040-2047. doi: 10.1049/iet-cta.2016.1659. Google Scholar J. Lu, M. Li, T. Huang, Y. Liu and J. 
Cao, The transformation between the Galois NLFSRs and the Fibonacci NLFSRs via semi-tensor product of matrices, Automatica, 96 (2018), 393-397. doi: 10.1016/j.automatica.2018.07.011. Google Scholar J. Lu, M. Li, Y. Liu, D. Ho and J. Kurths, Nonsingularity of grain-like cascade FSRs via semi-tensor product, Science China Information Sciences, 61 (2018), 010204, 12 pp. doi: 10.1007/s11432-017-9269-6. Google Scholar B. Li and J. Lu, Boolean-network-based approach for the construction of filter generators, Science China Information Sciences, in press. Google Scholar J. Lu, L. Sun, Y. Liu, D. Ho and J. Cao, Stabilization of Boolean control networks under aperiodic sampled-data control, SIAM Journal on Control and Optimization, 56 (2018), 4385-4404. doi: 10.1137/18M1169308. Google Scholar J. Lu, J. Zhong, D. W. C. Ho, Y. Tang and J. Cao, On controllability of delayed Boolean control networks, SIAM Journal on Control and Optimization, 54 (2016), 475-494. doi: 10.1137/140991820. Google Scholar J. Lu, J. Zhong, C. Huang and J. Cao, On pinning controllability of Boolean control networks, IEEE Transactions on Automatic Control, 61 (2016), 1658-1663. doi: 10.1109/TAC.2015.2478123. Google Scholar J. Lu, D. W. C. Ho and J. Cao, A unified synchronization criterion for impulsive dynamical networks, Automatica, 46 (2010), 1215-1221. doi: 10.1016/j.automatica.2010.04.005. Google Scholar J. Lu, J. Kurths, J. Cao, N. Mahdavi and C. Huang, Synchronization control for nonlinear stochastic dynamical networks: Pinning impulsive strategy, IEEE Transactions on Neural Networks and Learning Systems, 23 (2012), 285-292. Google Scholar J. Lu, R. Liu, J. Lou and Y. Liu, Pinning stabilization of Boolean control networks via a minimum number of controllers, IEEE Transactions on Cybernetics, (2019), 1–9. doi: 10.1109/TCYB.2019.2944659. Google Scholar J. Lu, J. Yang, J. Lou and J. 
Qiu, Event-triggered sampled feedback synchronization in an array of output-coupled Boolean control networks, IEEE Transactions on Cybernetics, (2019), 1–6. doi: 10.1109/TCYB.2019.2939761. Google Scholar Y. Mao, L. Wang, Y. Liu, J. Lu and Z. Wang, Stabilization of evolutionary networked games with length-r information, Applied Mathematics and Computation, 337 (2018), 442-451. doi: 10.1016/j.amc.2018.05.027. Google Scholar M. Meng, J. Lam, J. Feng and K. Cheung, Stability and guaranteed cost analysis of time-triggered Boolean networks, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), 3893-3899. doi: 10.1109/TNNLS.2017.2737649. Google Scholar M. Meng, J. Lam, J. Feng and K. Cheung, Stability and stabilization of Boolean networks with stochastic delays, IEEE Transactions on Automatic Control, 64 (2019), 790-796. Google Scholar M. Meng, L. Liu and G. Feng, Stability and ${L}_1$ gain analysis of Boolean networks with Markovian jump parameters, IEEE Transactions on Automatic Control, 62 (2017), 4222-4228. doi: 10.1109/TAC.2017.2679903. Google Scholar V. Milman and A. Myshkis, On the stability of motion in the presence of impulses, Sib. Math. J, 1 (1960), 233-237. Google Scholar P. Naghshtabrizi, J. P. Hespanha and A. R. Teel, Exponential stability of impulsive systems with application to uncertain sampled-data systems, Systems & Control Letters, 57 (2008), 378-385. doi: 10.1016/j.sysconle.2007.10.009. Google Scholar P. Naghshtabrizi, J. P. Hespanha and A. R. Teel, Stability of delay impulsive systems with application to networked control systems, Transactions of the Institute of Measurement and Control, 32 (2010), 511-528. doi: 10.1109/ACC.2007.4282847. Google Scholar F. G. Pfeiffer and M. O. Foerg, On the structure of multiple impact systems, Nonlinear Dynamics, 42 (2005), 101-112. doi: 10.1007/s11071-005-1910-4. Google Scholar I. Shmulevich, E. R. Dougherty, S. Kim and W. 
Zhang, Probabilistic Boolean networks: A rule-based uncertainty model for gene regulatory networks, Bioinformatics, 18 (2002), 261-274. doi: 10.1093/bioinformatics/18.2.261. Google Scholar P. S. Simeonov and D. D. Bainov, Exponential stability of the solutions of singularly perturbed systems with impulse effect, J. Math. Anal. Appl., 151 (1990), 462-487. doi: 10.1016/0022-247X(90)90161-8. Google Scholar X. Song and A. Li, Stability and boundedness criteria of nonlinear impulsive systems employing perturbing Lyapunov functions $\star$, Applied Mathematics and Computation, 217 (2011), 10166-10174. doi: 10.1016/j.amc.2011.05.011. Google Scholar E. D. Sontag, Smooth stabilization implies coprime factorization, IEEE Transactions on Automatic Control, 34 (1989), 435-443. doi: 10.1109/9.28018. Google Scholar E. D. Sontag, Comments on integral variants of ISS, Systems & Control Letters, 34 (1998), 93-100. doi: 10.1016/S0167-6911(98)00003-6. Google Scholar I. Stamova and G. Stamov, Stability analysis of impulsive functional systems of fractional order, Communications in Nonlinear Science and Numerical Simulation, 19 (2014), 702-709. doi: 10.1016/j.cnsns.2013.07.005. Google Scholar I. M. Stamova and G. T. Stamov, Lyapunov-Razumikhin method for impulsive functional differential equations and applications to the population dynamics, Journal of Computational and Applied Mathematics, 130 (2001), 163-171. doi: 10.1016/S0377-0427(99)00385-4. Google Scholar J. Sun, Q. L. Han and X. Jiang, Impulsive control of time-delay systems using delayed impulse and its application to impulsive master-slave synchronization, Physics Letters A, 372 (2008), 6375-6380. doi: 10.1016/j.physleta.2008.08.067. Google Scholar L. Sun, J. Lu and W. Ching, Switching-based stabilization of aperiodic sampled-data Boolean control networks with all modes unstable, Frontiers of Information Technology & Electronic Engineering, 21 (2020), 260-267. doi: 10.1631/FITEE.1900312. Google Scholar L. Tong, Y. Liu, Y. 
Li, J. Lu, Z. Wang and F. Alsaadi, Robust control invariance of probabilistic Boolean control networks via event-triggered control, IEEE Access, 6 (2018), 37767-37774. doi: 10.1109/ACCESS.2018.2828128. Google Scholar G. Wang and Y. Shen, Second-order cluster consensus of multi-agent dynamical systems with impulsive effects, Communications in Nonlinear Science and Numerical Simulation, 19 (2014), 3220-3228. doi: 10.1016/j.cnsns.2014.02.021. Google Scholar N. Wang, X. Li, J. Lu and F. E. Alsaadi, Unified synchronization criteria in an array of coupled neural networks with hybrid impulses, Neural Networks, 101 (2018), 25-32. doi: 10.1016/j.neunet.2018.01.017. Google Scholar Y. Wang and H. Li, On definition and construction of lyapunov functions for Boolean networks, in Proceedings of the 10th World Congress on Intelligent Control and Automation, IEEE, (2012), 1247–1252. doi: 10.1109/WCICA.2012.6358072. Google Scholar Y. Wang, J. Lu, J. Liang, J. Cao and M. Perc, Pinning synchronization of nonlinear coupled Lur'e networks under hybrid impulses, IEEE Transactions on Circuits and Systems II: Express Briefs, 66 (2019), 432-436. Google Scholar Y. Wang, J. Lu, J. Lou, C. Ding, F. E. Alsaadi and T. Hayat, Synchronization of heterogeneous partially coupled networks with heterogeneous impulses, Neural Processing Letters, 48 (2018), 557-575. doi: 10.1007/s11063-017-9735-y. Google Scholar Y. Wang, J. Lu and Y. Lou, Halanay-type inequality with delayed impulses and its applications, Science China Information Sciences, 62 (2019), 192206, 10 pp. doi: 10.1007/s11432-018-9809-y. Google Scholar E. Weiss and M. Margaliot, A polynomial-time algorithm for solving the minimal observability problem in conjunctive Boolean networks, IEEE Transactions on Automatic Control, 64 (2019), 2727-2736. doi: 10.1109/TAC.2018.2882154. Google Scholar Q. Wu, J. Zhou and L. Xiang, Impulses-induced exponential stability in recurrent delayed neural networks, Neurocomputing, 74 (2011), 3204-3211. 
doi: 10.1016/j.neucom.2011.05.001. Google Scholar Y. Wu and T. Shen, Policy iteration approach to control residual gas fraction in ic engines under the framework of stochastic logical dynamics, IEEE Transactions on Control Systems Technology, 25 (2017), 1100-1107. doi: 10.1109/TCST.2016.2587247. Google Scholar Y. Wu and T. Shen, A finite convergence criterion for the discounted optimal control of stochastic logical networks, IEEE Transactions on Automatic Control, 63 (2018), 262-268. doi: 10.1109/TAC.2017.2720730. Google Scholar D. Xie, H. Peng, L. Li and Y. Yang, Semi-tensor compressed sensing, Digital Signal Processing, 58 (2016), 85-92. doi: 10.1016/j.dsp.2016.07.003. Google Scholar F. Xu, L. Dong, D. Wang, X. Li and R. Rakkiyappan, Globally exponential stability of nonlinear impulsive switched systems, Mathematical Notes, 97 (2015), 803-810. doi: 10.1134/S0001434615050156. Google Scholar M. Xu, Y. Liu, J. Lou, Z.-G. Wu and J. Zhong, Set stabilization of probabilistic boolean control networks: A sampled-data control approach, IEEE Transactions on Cybernetics, (2019), 1–8. doi: 10.1109/TCYB.2019.2940654. Google Scholar J. Yang, J. Lu, L. Li, Y. Liu, Z. Wang and F. Alsaadi, Event-triggered control for the synchronization of Boolean control networks, Nonlinear Dynamics, 96 (2019), 1335-1344. doi: 10.1007/s11071-019-04857-2. Google Scholar M. Yang, R. Li and T. Chu, Controller design for disturbance decoupling of Boolean control networks, Automatica, 49 (2013), 273-277. doi: 10.1016/j.automatica.2012.10.010. Google Scholar T. Yang, Impulsive Control Theory Lecture Notes in Control and Information Sciences, Springer Science and Business Media, 2001. Google Scholar X. Yang, B. Chen, Y. Li, Y. Liu and F. E. Alsaadi, Stabilization of dynamic-algebraic Boolean control networks via state feedback control, Journal of the Franklin Institute, 355 (2018), 5520-5533. doi: 10.1016/j.jfranklin.2018.05.049. Google Scholar X. Yang and J. 
Lu, Finite-time synchronization of coupled networks with markovian topology and impulsive effects, IEEE Transactions on Automatic Control, 61 (2016), 2256-2261. doi: 10.1109/TAC.2015.2484328. Google Scholar Z. Yang and D. Xu, Stability analysis and design of impulsive control systems with time delay, IEEE Transactions on Automatic Control, 52 (2007), 1448-1454. doi: 10.1109/TAC.2007.902748. Google Scholar Y. Yu, J. Feng, J. Pan and D. Cheng, Block decoupling of Boolean control networks, IEEE Transactions on Automatic Control, 64 (2019), 3129-3140. doi: 10.1109/TAC.2018.2880411. Google Scholar K. Zhang and L. Zhang, Observability of Boolean control networks: A unified approach based on finite automata, IEEE Transactions on Automatic Control, 61 (2016), 2733-2738. doi: 10.1109/TAC.2015.2501365. Google Scholar K. Zhang, L. Zhang and L. Xie, Finite automata approach to observability of switched Boolean control networks, Nonlinear Analysis: Hybrid Systems, 19 (2016), 186-197. doi: 10.1016/j.nahs.2015.10.002. Google Scholar L. Zhang and K. Zhang, Controllability and observability of Boolean control networks with time-variant delays in states, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013), 1478-1484. doi: 10.1109/TNNLS.2013.2246187. Google Scholar W. Zhang, C. Li, T. Huang and X. He, Synchronization of memristor-based coupling recurrent neural networks with time-varying delays and impulses, IEEE Transactions on Neural Networks and Learning Systems, 26 (2015), 3308-3313. doi: 10.1109/TNNLS.2015.2435794. Google Scholar X. Zhang and X. Li, Input-to-state stability of nonlinear systems with distributed-delayed impulses, IET Control Theory and Applications, 11 (2017), 81–89. doi: 10.1049/iet-cta.2016.0469. Google Scholar Y. Zhang and Y. Liu, Nonlinear second-order multi-agent systems subject to antagonistic interactions without velocity constraints, Applied Mathematics and Computation, 364 (2020), 124667, 14 pp. doi: 10.1016/j.amc.2019.124667. 
Google Scholar J. Zhong, D. Ho, J. Lu and W. Xu, Global robust stability and stabilization of Boolean network with disturbances, Automatica, 84 (2017), 142-148. doi: 10.1016/j.automatica.2017.07.013. Google Scholar J. Zhong, B. Li, Y. Liu and W. Gui, Output feedback stabilizer design of Boolean networks based on network structure, Frontiers of Information Technology & Electronic Engineering, 21 (2020), 247-259. doi: 10.1631/FITEE.1900229. Google Scholar J. Zhong and D. Lin, Stability of nonlinear feedback shift registers, Science China Information Sciences, 59 (2016), 1-12. doi: 10.1109/ICInfA.2014.6932738. Google Scholar J. Zhong and D. Lin, On minimum period of nonlinear feedback shift registers in grain-like structure, IEEE Transactions on Information Theory, 64 (2018), 6429-6442. doi: 10.1109/TIT.2018.2849392. Google Scholar J. Zhong, Y. Liu, K. Kou, L. Sun and J. Cao, On the ensemble controllability of Boolean control networks using STP method, Applied Mathematics and Computation, 358 (2019), 51-62. doi: 10.1016/j.amc.2019.03.059. Google Scholar J. Zhong, J. Lu, T. Huang and D. Ho, Controllability and synchronization analysis of identical-hierarchy mixed-valued logical control networks, IEEE Transactions on Cybernetics, 47 (2017), 3482-3493. doi: 10.1109/TCYB.2016.2560240. Google Scholar J. Zhong, J. Lu, Y. Liu and J. Cao, Synchronization in an array of output-coupled Boolean networks with time delay, IEEE Transactions on Neural Networks and Learning Systems, 25 (2014), 2288-2294. doi: 10.1109/TNNLS.2014.2305722. Google Scholar B. Zhu, X. Xia and Z. Wu, Evolutionary game theoretic demand-side management and control for a class of networked smart grid, Automatica, 70 (2016), 94-100. doi: 10.1016/j.automatica.2016.03.027. Google Scholar Q. Zhu, Y. Liu, J. Lu and J. Cao, Observability of Boolean control networks, Science China Information Sciences, 61 (2018), 092201, 12 pp. doi: 10.1007/s11432-017-9135-4. Google Scholar Q. Zhu, Y. Liu, J. Lu and J. 
Figure 1. An example of GAII $ T_{ga} = +\infty $
Figure 2. Behaviors of IDS (13) with initial value $ x_0 = 0.1 $
Figure 3. Behaviors of system (20) with different impulsive gain
Figure 4. Behaviors of System (25) with different values of $ \nu $ and $ h $
Figure 5. Cryptosystem based on impulsive synchronization
Figure 6. Two kinds of configurations for FSRs based on BNs
Table 2 from Acharya, Shreyasi et al., "$\Lambda_{\rm c}^{+}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV", Phys. Lett. (INSPIRE: http://inspirehep.net/record/1696315)

A measurement of the production of prompt $\Lambda_{\rm c}^{+}$ baryons in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC is reported. The $\Lambda_{\rm c}^{+}$ and $\Lambda_{\rm c}^{-}$ were reconstructed at midrapidity ($|y| < 0.5$) via the hadronic decay channel $\Lambda_{\rm c}^{+} \to {\rm p} {\rm K}^{0}_{\rm S}$ (and charge conjugate) in the transverse momentum and centrality intervals $6 < p_{\rm T} < 12$ GeV/$c$ and 0-80%. The $\Lambda_{\rm c}^{+}/{\rm D}^{0}$ ratio, which is sensitive to the charm quark hadronisation mechanisms in the medium, is measured and found to be larger than the ratio measured in minimum-bias pp collisions at $\sqrt{s} = 7$ TeV and in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV. In particular, the values in p-Pb and Pb-Pb collisions differ by about two standard deviations of the combined statistical and systematic uncertainties. The $\Lambda_{\rm c}^{+}/{\rm D}^{0}$ ratio is also compared with model calculations including different implementations of charm quark hadronisation. The measured ratio is reproduced by models implementing a pure coalescence scenario, while adding a fragmentation contribution leads to an underestimation. The $\Lambda_{\rm c}^{+}$ nuclear modification factor, $R_{\rm AA}$, is also presented. The measured values of the $R_{\rm AA}$ of $\Lambda_{\rm c}^{+}$, ${\rm D}_{\rm s}$ and non-strange D mesons are compatible within the combined statistical and systematic uncertainties. They show, however, a hint of a hierarchy $R_{\rm AA}^{{\rm D}^{0}} < R_{\rm AA}^{{\rm D}_{\rm s}} < R_{\rm AA}^{\Lambda_{\rm c}^{+}}$, conceivable with a contribution of recombination mechanisms to charm hadron formation in the medium.
A nonlocal dispersal logistic equation with spatial degeneracy

Jian-Wen Sun, Wan-Tong Li and Zhi-Cheng Wang

School of Mathematics and Statistics, Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou, Gansu 730000, China

July 2015, 35(7): 3217-3238. doi: 10.3934/dcds.2015.35.3217

Received April 2014 Revised November 2014 Published January 2015

In this paper, we study the nonlocal dispersal logistic equation \begin{equation*} \begin{cases} u_t=Du+\lambda m(x)u-c(x)u^p &\text{ in }{\Omega}\times(0,+\infty),\\ u(x,0)=u_0(x)\geq0&\text{ in }{\Omega}, \end{cases} \end{equation*} where $\Omega\subset\mathbb{R}^N$ is a bounded domain, and $\lambda>0$ and $p>1$ are constants. Here $Du(x,t)=\int_{\Omega}J(x-y)(u(y,t)-u(x,t))dy$ represents the nonlocal dispersal operator with continuous and nonnegative dispersal kernel $J$, and $m\in C(\bar{\Omega})$ may change sign in $\Omega$. The function $c$ is nonnegative and has a degeneracy in some subdomain of $\Omega$. We establish the existence and uniqueness of a positive stationary solution and also consider the effect of the degeneracy of $c$ on the long-time behavior of positive solutions. Our results reveal that the necessary condition to guarantee a positive stationary solution and the asymptotic behaviour of solutions are quite different from those of the corresponding reaction-diffusion equation.

Keywords: nonlocal dispersal, principal eigenvalue, existence and uniqueness, stationary solution, asymptotic behavior.

Mathematics Subject Classification: Primary: 35B40, 35K57; Secondary: 92D2.

Citation: Jian-Wen Sun, Wan-Tong Li, Zhi-Cheng Wang. A nonlocal dispersal logistic equation with spatial degeneracy. Discrete & Continuous Dynamical Systems - A, 2015, 35 (7) : 3217-3238.
doi: 10.3934/dcds.2015.35.3217
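The nonlocal dispersal operator defined in the abstract, $Du(x,t)=\int_{\Omega}J(x-y)(u(y,t)-u(x,t))dy$, is straightforward to discretize. The sketch below is illustrative only: the Gaussian kernel and the sample state $u$ are hypothetical choices for demonstration, not taken from the paper.

```python
import numpy as np

def nonlocal_dispersal(u, x, J):
    """Approximate (Du)(x_i) = ∫ J(x_i - y)(u(y) - u(x_i)) dy by a Riemann sum."""
    dx = x[1] - x[0]
    Du = np.empty_like(u)
    for i in range(len(x)):
        Du[i] = np.sum(J(x[i] - x) * (u - u[i])) * dx
    return Du

# A smooth, nonnegative kernel with unit mass on the line (illustrative choice).
J = lambda z: np.exp(-z**2 / 0.02) / np.sqrt(0.02 * np.pi)

x = np.linspace(0.0, 1.0, 201)   # grid on Omega = (0, 1)
u = np.sin(np.pi * x)            # sample density with a hump at x = 1/2
Du = nonlocal_dispersal(u, x, J)

# Dispersal moves mass away from the hump, so Du < 0 at the maximum of u.
print(Du[100] < 0)
```

Because the Riemann sum is exactly antisymmetric under swapping the two grid indices (the kernel is even), the discrete operator conserves total mass, mirroring the identity $\int_\Omega Du\,dx = 0$ for symmetric kernels.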
$\arcsin$ written as $\sin^{-1}(x)$

trigonometry, notation, faq · edited by J. M. is a poor mathematician · asked by bobobobo

I know that different people follow different conventions, but whenever I see $\arcsin(x)$ written as $\sin^{-1}(x)$, I find myself thinking it wrong, since $\sin^{-1}(x)$ should be $\csc(x)$, and not possibly confused with another function. Does anyone say it's bad practice to write $\sin^{-1}(x)$ for $\arcsin(x)$?

– Why do you object to this but not to $f^{-1}(x)$? Or do you object to that too? – joriki, Apr 1 '11

– All calculators I had, even a soviet one, used $\sin^{-1}$ notation on the keyboard. That's how I easily got used to the notation. – Yrogirg, May 23 '12

– Whenever I see $\sin^{-1}x$ I find myself thinking it wrong. Almost no one in my country will write like that to say arcsin. – phuclv, Sep 16 '14

– Also see "meaning of powers on trig functions", in particular the answer math.stackexchange.com/a/920967/139123 – David K, May 28 '15

The notation for trigonometric functions is "traditional", which is to say that it is not the way we would invent notation today. $\sin^{-1}(x)$ means the inverse sine, as you mentioned, rather than a reciprocal. So $\sin^{-1}(x)$ is not an abbreviation for $(\sin(x))^{-1}$. Instead it's notation for $(\sin^{-1})(x)$, in the same way that $f^{-1}(x)$ means the inverse function of $f$, applied to $x$. But $\sin^2(x)$ means $(\sin(x))^2$, rather than $\sin(\sin(x))$.

In other contexts, like dynamical systems, if I have a function $f$, the notation $f^2$ means $f \circ f$. This is compatible with the $f^{-1}$ notation, if we take juxtaposition of functions to mean composition: $f^{-1}f^{3}$ will be $f^{2}$ as desired.
So the traditional notation for sine is actually a mixture of two different systems: $-1$ denotes an inverse, not a power, while positive integer exponents denote powers, not iterated compositions. This is simply a fact of life, like an irregular conjugation of a verb. As with other languages, the things that we use most often are the ones that are likely to remain irregular. That doesn't mean that they are incorrect, however, as long as other speakers of the language know what they mean.

Moreover, if you wanted to reform the system, there would be an equally strong argument for changing $\sin^2$ to mean $\sin \circ \sin$. This is already slowly happening with $\log$; I think that the usage of $\log^2(x)$ to mean $(\log(x))^2$ is slowly decreasing, because people tend to confuse it with $\log(\log(x))$. That confusion is less likely with $\sin$ because $\sin(\sin(x))$ arises so rarely in practice, unlike $\log(\log(x))$.

– Carl Mummert

– Sorry, I misunderstood. I don't think it does come up, which is why we can get by with the traditional notation. But $\log(\log(x))$ comes up in computer science often enough to get people confused about $\log^2(x)$. – Carl Mummert, Apr 1 '11

– Just a note that symbolic mathematics packages such as MuPad consider $\cos^{-1}(x)$ as $\sec(x)$ and require you write $\arccos(x)$ – bobobobo, Apr 2 '11

– Interesting. Wolfram Alpha treats it as inverse sine. If a student in my class wrote $\cos^{-1}$ for secant I'd deduct some points, so I guess MuPad doesn't get perfect marks. – Carl Mummert, Apr 2 '11

– One notation I like is the parenthesized-exponent for composition, so that $\log^{(2)}(x)$ is $\log(\log(x))$ and $\sin^{(-1)}(x)$ is $\arcsin(x)$; this shows up in CS now and again just to alleviate the log-log confusion. – Steven Stadnicki, Oct 19 '11

– @StevenStadnicki That overloads the derivative notation. Though it is certainly not common to put that sort of notation on things like $\log$ or $\sin$, it clearly wouldn't work for a generic function $f$, where we already have $f^{(2)}(x)=\frac{d^2f}{dx^2}$. I think a reasonable notation would be to circle the exponent if it means composition, like this: $f^\textcircled{2}$ (which makes intuitive sense with our current notation for composition). – process91, Nov 28 '11
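As a numerical footnote to the answer: most programming languages sidestep the ambiguity entirely by naming the inverse function explicitly (`asin`, `arcsin`), which makes the inverse-versus-reciprocal distinction easy to check.

```python
import math

x = 0.5

# The "inverse function" reading of sin^{-1}: arcsin, spelled math.asin.
inverse = math.asin(math.sin(x))   # recovers x for x in [-pi/2, pi/2]

# The "reciprocal" reading of sin^{-1}: the cosecant, 1/sin(x).
reciprocal = 1 / math.sin(x)

print(inverse)      # 0.5 (up to rounding)
print(reciprocal)   # about 2.0858, i.e. csc(0.5), nothing like arcsin(0.5)
```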
Black hole thermodynamics

August 31, 2015. An introduction to black hole thermodynamics, based on a talk given to undergraduates at the University of Melbourne.

2. Black holes and the area theorem
3. Curvature and the zeroth law
4. The laws of black hole thermodynamics
5. Hawking radiation
6. The generalised second law

One of the most remarkable discoveries of late 20th century physics is that a black hole behaves like a box of hot gas; it has thermodynamic properties like temperature, entropy and energy. But unlike a hot gas, where we have a good microscopic understanding of how these properties arise, we are still trying to figure out the corresponding microscopic physics of the black hole! This is one of the holy grails of theoretical physics. We will focus on the beautiful thermal properties of black holes, and leave the mysteries of quantum gravity for another day.

To start off with, what is a black hole? No doubt you've heard about them. Loosely speaking, black holes are regions of spacetime where gravity is so strong that light, and therefore massive particles, can't escape.

There is a better analogy in terms of traffic. One of Einstein's insights from special relativity was the existence of a universal speed limit: the speed of light $c$, which is the same in all reference frames. Einstein's general theory of relativity adds gravity into the mix, and allows the direction of light to change locally in interesting ways. But the speed limit remains in place: you can never locally travel faster than light. It is the one traffic rule of the universe!

We can visualise this speed limit using light cones, which I hope you remember from your special relativity class. At each point, we imagine shining a flashlight in all directions, and graphing where the light travels. The result is a cone (in two spatial dimensions, and a sort of "hypercone" in higher dimensions).
Obeying the speed limit means that you can only move within the cone traced out by the rays from the flashlight. Remaining still corresponds to travelling along the axis of the cone, while speeding means your velocity vector is tilted with respect to the axis; there is a maximum tilt as you hit the edge of the cone, corresponding to travelling at the speed of light. Speeding through spacetime. In special relativity, there is a light cone at every point of spacetime, and they all point in the same direction. In order for this to make sense, we need to have coordinates that cover all of spacetime. In your special relativity class, you will have seen that time $t$ and spatial directions $\vec{x}$ do the job. The difference between special and general relativity. In general relativity, we are no longer guaranteed to have well-defined global coordinates, which I've suggested above by drawing spacetime as a blob rather than a neat Cartesian plane. But there is always a light cone, though its direction can change, and the universal speed limit always applies. In terms of our traffic analogy, a black hole is a one-way street. The simplest black hole is a spherical one-way street, sitting at the origin. It was discovered by German physicist Karl Schwarzschild, in 1915, while he was serving on the Russian front. We can picture what's happening by drawing a vertical axis for time $t$ measured outside the black hole, and a horizontal axis for radial distance from the black hole. Light cones tip over as they approach the black sphere. At each point, we can draw a light cone. Far away from the black hole, light cones are upright, and space looks neatly divisible into time $t$ and the radial direction $r$, just like flat space. We therefore think of $t$ and $r$ as the time and radial coordinates of someone very far from the black hole, often called an asymptotic observer. 
But as we get closer to $r= 0$, the light cones tip over more and more, until eventually, at the dotted line, no part of the light cone points in the negative $r$ direction. Any direction you point your flashlight, the light ends up closer to the black hole! The dotted line is called the Schwarzschild radius, or horizon radius. A consequence of the universal speed limit is that once you cross the Schwarzschild radius, you can never return. However, note that you can still receive signals. The light cone outlined by your flashlight is called the future light cone, and it tells you where you can send signals (including yourself). The past light cone tells you where you can receive signals from; think of it as a radar, rather than a flashlight. As we can see, from the picture below, even after you fall into the black hole, you can still watch your favourite TV shows beamed from Earth. Radar and flashlight at the event horizon. The horizon extends in time as well as space; in two spatial dimensions, for instance, it sweeps out as a tube. To find the area of a black hole at a given time $t$, we intersect the tube with a slice of spacetime at constant $t$, and calculate the area $A$ of that intersection. Area $A$ of event horizon tube. Let's find the area for a spherical black hole in four dimensions (since that's the universe we live in). By a pedagogically happy fluke, we can use classical reasoning and get the right answer! So, imagine a star of mass $M$ collapses spherically in on itself to form a black hole. At any point in time, from symmetry the horizon will be a sphere. Our dodgy classical assumption is that the horizon radius $r_\text{H}$ is where the escape velocity equals the speed of light $c$; this is the Newtonian analogy to the light cone completely tipping over. You may remember that the escape velocity is just the velocity needed to escape the Newtonian gravitational field of an object, but with no kinetic energy left over. 
That's just enough gas to get to infinity. Mathematically, if $m$ is the mass of a test particle whose velocity approaches $c$, and $r_\text{H}$ the radius of the event horizon, then kinetic plus potential energy at the horizon is \[K + U = \frac{1}{2}mc^2 - \frac{GMm}{r_\text{H}} = 0 \quad \Longrightarrow \quad r_\text{H} = \frac{2GM}{c^2}.\] This is also called the Schwarzschild radius, and agrees with Schwarzschild's calculation in general relativity. The area is then \[A = 4\pi r_\text{H}^2 = \frac{16\pi G^2 M^2}{c^4} \propto M^2.\]

Since the area is proportional to mass squared, and things can only go into the black hole, it is plausible that the mass and hence area must increase. Of course, we have been dealing with a spherically symmetric, static black hole, and throwing matter into it can easily violate both assumptions. Stephen Hawking generalised this result to something called the area theorem, which shows that the area of any type of black hole — not just a static, spherically symmetric one — can only increase with time, whatever physical process occurs.

You might be wondering why the reasoning for Schwarzschild black holes doesn't apply to other black holes. If stuff can only go in, the mass of the black hole has to increase, and the area should increase, right? There are two reasons this doesn't cut it. First, the total mass of an arbitrary black hole just isn't well-defined. But more interestingly, even when it is, the relationship between mass and area can be non-trivial. For instance, consider a spinning or Kerr black hole. It turns out you can extract energy from a spinning black hole using a slingshot maneuver called the Penrose process: a satellite which gets close to the black hole can get a kick to its angular momentum. Since the energy of the satellite increases, the energy and hence mass of the black hole decrease. But the area still increases!
This implies an interesting relationship between the mass, area and angular momentum of the black hole, which we'll say more about below.

Sputnik executing a Penrose process.

The fact that area increases with time hints at a connection to entropy $S$. Entropy is the disorder of a system: the number of ways it could be while looking more or less the same. If the system evolves randomly (and big systems more or less do), it will tend to move towards states where there are more ways for it to be, a larger set of possibilities. This is captured in the second law of thermodynamics: \[\delta S \geq 0.\] There are very few other quantities in physics that just get bigger, so Hawking suggested that there might be an analogy between entropy and horizon area.

Another suggestive analogy comes from curvature. Curvature is a measure of gravitational tidal forces. It tells us how quickly two nearby, freely falling particles will move towards or apart from each other due to gravity.

Initially parallel, freely falling baseballs in different geometries.

Going back to the spherical black hole, symmetry tells us that the curvature of space is the same at all points on the horizon at a given time. This is another property which generalises, but only to stationary black holes, that is, black holes which don't change with time. Heuristically, we can think of the patches of the event horizon as interacting, pulling and pushing each other, and exchanging curvature. A stationary black hole is an equilibrium configuration for that exchange process, where the curvature is uniformly distributed.

A cartoon of patches exchanging curvature.

Less heuristically, the curvature at the surface $\kappa$ can be defined as the force per unit mass the asymptotic observer (at infinity) must exert to suspend an object right at the black hole horizon. For this reason, the curvature at the horizon is called surface gravity.

Suspending a box of space biscuits from infinity, with tension $\kappa$.
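To get a feel for the quantities in play, here is a quick numerical sketch for a black hole of one solar mass. The closed form $\kappa = c^4/4GM$ for the Schwarzschild surface gravity is a standard result assumed here, not derived in the post.

```python
import math

# Standard values of the constants (approximate).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

r_H = 2 * G * M_sun / c**2      # Schwarzschild radius
A = 4 * math.pi * r_H**2        # horizon area

# Schwarzschild surface gravity; kappa = c^4 / (4 G M) is a standard
# closed form assumed here rather than derived in the text.
kappa = c**4 / (4 * G * M_sun)

print(f"r_H   = {r_H / 1e3:.2f} km")    # about 2.95 km
print(f"A     = {A:.3e} m^2")
print(f"kappa = {kappa:.3e} m/s^2")
```

The surface gravity comes out to roughly $10^{13}\ \mathrm{m/s^2}$, about $10^{12}$ times the gravitational acceleration at the Earth's surface.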
If the surface gravity changes across the surface, then not only will space biscuits tend to accumulate at local maxima, but the horizon itself will respond. It can't be stationary after all! We see the curvature exchange argument isn't too far from the truth. Curvature exchange is a loaded metaphor, designed to make you think of temperature. The interacting patches look like a thermal system equilibrating, and indeed, when the black hole is stationary this equilibration has finished. That's two analogies so far: area/entropy, and surface gravity/temperature. In 1973, Bardeen, Carter and Hawking collected a full set and presented them as the laws of black hole thermodynamics. The fact that surface gravity is constant for a stationary black hole is called the zeroth law, since the usual zeroth law states that temperature is constant for systems in thermal equilibrium. This suggests that the surface gravity $\kappa \propto T$, where $T$ is the temperature of the black hole. The second law of black hole thermodynamics is the area theorem: the horizon area increases in any physical process. This suggests that the entropy is proportional to the area, $A \propto S$. From Boltzmann's law, \[S = k_\text{B} \log N,\] where $N$ is the number of microscopic arrangements of a system with the same gross properties, we get a nice heuristic picture of the microstates of a black hole: divide the surface into cells, and assign each a black or a white token. The number of ways of doing this gives the entropy! Microstates of a black hole obtained by putting a black or white token in each cell. To round things off, we still need black hole versions of the first and third law.
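Before that, the token picture can be made quantitative with Boltzmann's law: $n$ independent two-state cells give $N = 2^n$ arrangements, so $S = k_\text{B} \log 2^n = n k_\text{B} \log 2$. A sketch (the cell counts are made-up illustrative numbers):

```python
import math

k_B = 1.381e-23   # Boltzmann's constant, J/K

def token_entropy(n_cells):
    """Boltzmann entropy S = k_B log N with N = 2^n microstates:
    each of the n cells holds either a black or a white token."""
    return n_cells * k_B * math.log(2)

# Entropy is proportional to the number of cells, i.e. to the area:
# doubling the number of cells doubles the entropy.
s_small = token_entropy(100)
s_large = token_entropy(200)
```

This is exactly the behaviour the analogy needs: an entropy proportional to horizon area.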
The usual first law says that the change in the internal energy of a system equals heat gained plus work done on the system, or \[dU = \delta Q + \text{work} = T\, dS + \text{work},\] where we used the thermodynamic definition of entropy: a small change in heat divided by the temperature at which heat exchange occurs, $dS = \delta Q/T$. Thinking back to the example of the spinning black hole, a long calculation (with no classical analogue, sadly!) gives a relationship between the change in mass $M$, area $A$ and angular momentum $J$: \[dM = \left(\frac{\kappa}{8\pi G}\right) dA + \Omega \, dJ,\] where $\kappa$ is the surface gravity and $\Omega$ the angular velocity. This looks very similar to the first law! We have a change in energy on the left (since relativistically $E = Mc^2$), while the term $-\Omega \, dJ$ is precisely the work done by the system during the Penrose process, where angular momentum is converted into work. The remaining term has area and surface gravity appearing together, which are proportional to entropy and temperature. In fact, we note for future reference that, in units where $c = 1$, \[ST = \frac{\kappa A}{8\pi G}.\] We have a pretty compelling parallel! Finally, we need a version of the third law for black holes. The third law of thermodynamics is definitely the most obscure. In its strongest form, it states that the entropy of a system vanishes as the temperature goes to absolute zero. Based on what we already know about black holes, you might naively guess that the area of a black hole should vanish (the black hole shrinks to nothing!) as the curvature at the surface goes to zero. This naive guess doesn't quite work! There are objects called extremal black holes with vanishing surface curvature but nonzero area. The simplest way to get these black holes is to add electric charge or spin to an ordinary black hole, until you can't add any more. But beware: the more charge or spin you add, the harder it gets.
It becomes so hard, in fact, that you cannot build an extremal black hole from a non-extremal one in a finite number of steps! Charge a black hole up to $|Q| = M$, or spin it up to $|J| = M^2$ (in geometric units), to make it extremal. This is reminiscent of a weaker form of the thermodynamic third law called the unattainability principle, which states that you can't reduce a system to absolute zero in a finite number of steps. The surface curvature $\kappa$ of a black hole behaves in exactly the same way! We can now write down the full set of laws, pairing each black hole law with its thermodynamic counterpart:

0. $\kappa$ constant for stationary black holes. (Thermodynamics: $T$ constant in equilibrium.)
1. $dM = (\kappa/8\pi G)\,dA + \Omega\, dJ$. (Thermodynamics: $dU = T \, dS + \text{work}$.)
2. Area always increases: $\delta A \geq 0$. (Thermodynamics: entropy always increases: $\delta S \geq 0$.)
3. $\kappa\to 0$ takes infinite steps. (Thermodynamics: $T\to 0$ takes infinite steps.)

In 1973, most people considered these laws to be a miraculous formal correspondence, and did not view black holes as genuine thermodynamic systems. The prominent exception was Jacob Bekenstein, whom we will return to below. But in 1974, Stephen Hawking had a brilliant insight… In 1974, Hawking realised that black holes really do have a temperature, and a blackbody spectrum to go with it. He famously calculated the Hawking temperature \[k_\text{B} T = \frac{\hbar c^3}{8\pi GM},\] where $k_\text{B}$ is Boltzmann's constant and $\hbar$ is the reduced Planck constant. Where does this come from? Hawking didn't pull it out of thin air, but rather, spontaneously produced it from the vacuum. We will "derive" the result for a spherical black hole with extreme heuristic prejudice. The usual analogy, which you may have heard before, is that pairs of virtual particles can pop into existence near the event horizon. One falls into the black hole, while the other zooms off with some energy $E$.
The uncertainty in the energy of these spontaneously emitted photons tells us about the effective temperature of the black hole, since something cold has a narrow distribution of energies, while something hot has a broad distribution. Virtual photon pair appearing out of the vacuum. If you know a bit about statistical physics, you can make this analogy more precise. The blackbody distribution for a gas of photons is proportional to \[f(E) = \frac{E^3}{e^{E/k_\text{B}T} - 1}.\] This drops off exponentially for $E \gtrsim k_\text{B}T$, so we can approximate the uncertainty in the energy by \[\Delta E = k_\text{B}T.\] But we can figure out $\Delta E$ a second way, using the uncertainty principle. These virtual photons can pop up anywhere on the horizon, so the uncertainty in position is of the order of the circumference of the black hole, \[\Delta r = 2\pi r_\text{H} = \frac{4\pi GM}{c^2}.\] By Heisenberg's uncertainty principle, the uncertainty in momentum is of order \[\Delta p \approx \frac{\hbar}{2 \Delta r} = \frac{\hbar c^2}{8\pi GM}.\] From special relativity, we know that the energy of the photon is $E = pc$, so we find the energy uncertainty \[\Delta E = c \Delta p = \frac{\hbar c^3}{8\pi GM}.\] So, we guess the black hole has a temperature given by \[k_\text{B} T = \Delta E = \frac{\hbar c^3}{8\pi GM},\] in exact agreement with Hawking's rigorous result! The upshot is that black holes are genuine thermal objects. But this leads to a puzzle: if black holes radiate, they can clearly lose energy and get smaller, violating the area theorem and hence the second law of black hole thermodynamics. What do we replace it with? To see how to generalise the second law, we first need to relate black hole and thermodynamic quantities, with constants of proportionality and all.
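Plugging numbers into this result makes the puzzle vivid. A sketch (the constants and the solar-mass input are my own choices, not from the post):

```python
import math

G, c = 6.674e-11, 2.998e8            # SI gravitational constant, speed of light
hbar, k_B = 1.0546e-34, 1.381e-23    # reduced Planck and Boltzmann constants

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi G M k_B), the temperature just derived."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

# Illustrative input: a solar-mass black hole.
T_sun = hawking_temperature(1.989e30)   # about 6e-8 K
# Bigger black holes are colder: T falls off as 1/M.
```

A solar-mass black hole sits at around $6 \times 10^{-8}$ K, far colder than the cosmic microwave background, which is one reason Hawking radiation has never been observed directly.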
It turns out that the surface gravity of a spherical black hole is simply related to the mass: in units where $c = 1$, \[\kappa = \frac{1}{4GM}.\] We can plug this into Hawking's result to get the long-awaited constant of proportionality between temperature and surface gravity: \[k_\text{B} T = \frac{\hbar c^3\kappa}{2\pi}.\] With this in hand, we can work out the explicit expression for black hole entropy. From the first law, we know that $ST = \frac{\kappa A}{8\pi G}$, and hence \[S_\text{BH} = \frac{\kappa A}{8\pi G T} = \frac{k_\text{B} c^3 A}{4 \hbar G}.\] This is the famous Bekenstein-Hawking entropy. Since it involves the four fundamental physical constants $\hbar, k_\text{B}, c, G$, something pretty profound is going on! Prior to 1974, Bekenstein was the first person to really argue that the area could be interpreted as entropy; Hawking regarded $S \propto A$ as a formal analogy only. Bekenstein carefully thought about systems involving both black holes and ordinary thermodynamic systems, defining the generalised entropy as the sum of black hole entropy and the ordinary entropy of matter systems outside the black hole. Bekenstein proposed the generalised second law, which states that the generalised entropy should increase for any physical process: \[\delta S_\text{gen} \geq 0, \quad S_\text{gen} = S_\text{outside} + S_\text{BH}.\] The generalised second law has many remarkable implications. In particular, it solves the puzzle raised earlier. Although the area of a black hole, and hence its entropy, decrease due to Hawking radiation, the total entropy of the universe increases. So although the area theorem is violated, the generalised second law is not. The generalised second law also provides a sort of royal road for connecting the entropy of matter systems to their geometry. We will look at the simplest such relation: the Bekenstein bound, which was in fact discovered by Bekenstein, thereby providing a counterexample to Stigler's law of eponymy.
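To get a feel for these formulas, here is a numerical sketch (my own; the mass input is illustrative). It also checks the first-law relation quoted above: restoring factors of $c$, with $\kappa = c^4/4GM$ in SI units, $ST = \kappa A c^2/8\pi G = Mc^2/2$ for a spherical hole.

```python
import math

G, c = 6.674e-11, 2.998e8
hbar, k_B = 1.0546e-34, 1.381e-23

def bh_entropy(M):
    """Bekenstein-Hawking entropy S = k_B c^3 A / (4 hbar G)."""
    A = 16 * math.pi * G**2 * M**2 / c**4   # horizon area in m^2
    return k_B * c**3 * A / (4 * hbar * G)

def hawking_T(M):
    """k_B T = hbar c^3 / (8 pi G M)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

M = 1.989e30                       # one solar mass, kg
S = bh_entropy(M)                  # about 1e77 in units of k_B
# S * T should equal M c^2 / 2; the ratio below should be ~1.
ST_check = bh_entropy(M) * hawking_T(M) / (M * c**2 / 2)
```

For a solar-mass hole, $S \approx 10^{77} k_\text{B}$, an enormous number by everyday standards, and the product $ST$ indeed comes out to $Mc^2/2$. Returning now to the Bekenstein bound.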
This gives a relationship between energy, size and entropy, and appears to be broadly true. Box of space biscuits in the frame of the asymptotic observer, and in its rest frame. Imagine a small box of space biscuits, suspended above a spherical black hole of mass $M$ by the observer at infinity. The box has proper length $L$, and is held a proper distance $L$ above the horizon, with the observer measuring the top of the box at $L' = \alpha r_\text{H}$ for $\alpha \ll 1$. Since the light cones tip over near the horizon, local measurements of time and length differ from those at infinity, effectively described by a Lorentz factor $\gamma = 1/\sqrt{\alpha}$, with \[L = \gamma L' = \sqrt{\alpha} r_\text{H}.\] If the box has energy $E$, time dilation will lead to a reduction in energy, as viewed by the observer at infinity, with \[E' = \frac{E}{\gamma} = \frac{LE}{r_\text{H}}.\] If the observer now drops the box into the black hole, the mass increases by the apparent energy of the box, $\Delta M c^2 = E'$, so the new mass is $M' = M + \Delta M$. But this changes the black hole entropy: \[\begin{align*} S_\text{BH}' & \propto A' \propto {r'}^2_\text{H} \\ & \propto (M + \Delta M)^2 \\ & \approx M^2 + 2M\Delta M \\ \Longrightarrow \quad \Delta S_\text{BH} & \propto M\Delta M \propto LE. \end{align*}\] In fact, if you restore constants you can show that $\Delta S_\text{BH} = (4\pi k_\text{B}/\hbar c)LE$. If $S$ is the entropy of the box, we find that by dropping the space biscuits into the black hole, the change in generalised entropy is \[\Delta S_\text{gen} = \frac{4\pi k_\text{B} LE}{\hbar c} - S.\] In order for this to be non-negative, we require \[S \leq \frac{4\pi k_\text{B} LE}{\hbar c}.\] This argument involved a spherical black hole; more general arguments reduce the bound by a factor of $2$. But in any event, it appears that the size and calorie content of a box of biscuits control how many microscopic configurations it could be in!
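As a final numerical illustration (a sketch; the box parameters are invented, and I use the dimensionally consistent form $S \leq 4\pi k_\text{B} LE/\hbar c$ of the bound):

```python
import math

hbar, k_B, c = 1.0546e-34, 1.381e-23, 2.998e8

def bekenstein_bound(L, E):
    """Upper bound on the entropy of a system of size L and energy E,
    obtained from the generalised second law by dropping it into a hole."""
    return 4 * math.pi * k_B * L * E / (hbar * c)

# Illustrative: a 10 cm box of biscuits, counting its rest energy
# E = m c^2 for m = 1 kg.
E_box = 1.0 * c**2
S_max = bekenstein_bound(0.1, E_box)     # J/K
bits = S_max / (k_B * math.log(2))       # maximum information content
```

That works out to a few times $10^{42}$ bits: the calorie content and size of the box really do cap how many microstates it can have.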
We've just scratched the surface of black hole physics, and the many puzzles black holes present. But to my mind, the laws of black hole thermodynamics are the crowning achievement of the "Golden Age" of general relativity in the 60s and 70s. Hawking radiation and the generalised second law continue to provide surprises, and inspire developments in fields as diverse as quantum field theory, statistical mechanics and computer science, not to mention the search for a theory of quantum gravity. I finish with a quote from the great astrophysicist Subrahmanyan Chandrasekhar: The black holes of nature are the most perfect macroscopic objects there are in the universe: the only elements in their construction are our concepts of space and time. The laws of black hole mechanics hint at the possibility that black holes are microscopically perfect as well.
Investigating Properties of Active Galactic Nuclei Through Reverberation Mapping Pei, Liuyi UC Irvine Electronic Theses and Dissertations (2016) Reverberation mapping is a time-domain technique used to resolve the supermassive black hole's sphere of influence in active galactic nuclei. We carried out a nine-month reverberation mapping campaign to measure the broad-line region size and estimate the mass of the black hole in KA1858+4850, a narrow-line Seyfert 1 galaxy at redshift 0.078 and among the brightest active galaxies monitored by the Kepler mission. We obtained spectroscopic data using the Kast Spectrograph at the Lick 3-m telescope and complementary V-band images from five other ground-based telescopes. We measured the Hβ light curve lag with respect to the V-band continuum light curve, and combined this lag with the Hβ root-mean-square line profile width to obtain a virial estimate of MBH = 8.06 (+1.59, −1.72) × 10^6 M⊙ for the mass of the central black hole and an Eddington ratio of L/LEdd ≈ 0.2. I also used reverberation mapping to study in detail the broad line region in NGC 5548, a Seyfert 1 galaxy at redshift 0.017. Optical spectroscopic data targeting NGC 5548 were taken in 2014 as part of a larger multi-wavelength reverberation mapping campaign. The ground-based spectra spanned six months and achieved almost daily cadence with observations from five telescopes. We computed the Hβ and He II 4686 lags relative to both the optical continuum and the UV continuum measured by the Hubble Space Telescope, and found the Hβ–UV lag to be ∼50% longer than the Hβ–optical lag. This suggests that the true broad-line region size is 50% larger than the size that would be inferred from optical data alone. We also measured velocity-resolved lags for Hβ and found a complex velocity-lag structure with shorter lags in the line wings.
The responsivity of both the Hβ and He II lines decreased halfway through the campaign, an anomalous phenomenon also observed for the UV emission lines during the same monitoring period. Finally, we showed that, given the optical luminosity of NGC 5548 during our campaign, the measured Hβ lag is a factor of five shorter than the expected value based on the past behavior of NGC 5548. To efficiently process large amounts of reverberation mapping photometry data, I developed an IDL pipeline that is able to automatically extract the aperture photometry magnitude of the AGN, calibrate the individual exposures for nightly variations using reference stars, and construct the relative optical continuum light curve combining data from multiple telescopes. This pipeline has been used in several collaborations, both to monitor AGN variability in real time and to construct photometry light curves from archival data, and its applications can be extended to time-domain studies of any variable object. No Evidence of Periodic Variability in the Light Curve of Active Galaxy J0045+41 Barth, Aaron J Stern, Daniel UC Irvine Previously Published Works (2018) Dorn-Wallenstein, Levesque, & Ruan recently presented the identification of a z=0.215 active galaxy located behind M31 and claimed the detection of multiple periodic variations in the object's light curve with as many as nine different periods. They interpreted these results as evidence for the presence of a binary supermassive black hole with an orbital separation of just a few hundred AU, and estimated the gravitational-wave signal implied by such a system. We demonstrate that the claimed periodicities are based on a misinterpretation of the null hypothesis test simulations and an error in the method used to calculate the false alarm probabilities. There is no evidence for periodicity in the data.
The Luminous X-Ray Halos of Two Compact Elliptical Galaxies Buote, David A There is mounting evidence that compact elliptical galaxies (CEGs) are local analogs of the high-redshift "red nuggets" thought to represent progenitors of today's early-type galaxies (ETGs). We report the discovery of extended X-ray emission from a hot interstellar / intragroup medium in two CEGs, Mrk 1216 and PGC 032873, using shallow archival Chandra observations. We find that PGC 032873 has an average gas temperature $k_BT=0.67\pm 0.06$ keV within a radius of 15 kpc, and a luminosity $L_{\rm x} = (1.8\pm 0.2)\times 10^{41}$ erg s$^{-1}$ within a radius of 100kpc. For Mrk 1216, which is closer and more luminous $[L_{\rm x}(\rm <100~kpc) = (12.1\pm 1.9)\times 10^{41}$ erg s$^{-1}]$, we performed a spatially resolved spectral analysis in 7 annuli out to a radius of 73 kpc. Using an entropy-based hydrostatic equilibrium (HE) procedure, we obtain a good constraint on the $H$-band stellar mass-to-light ratio, $M_{\rm stars}/L_H=1.33\pm 0.21$ solar, in good agreement with stellar dynamical (SD) studies, which supports the HE approximation. We obtain a density slope $2.22\pm 0.08$ within $R_e$ consistent with other CEGs and normal local ETGs, while the dark matter (DM) fraction within $R_e$, $f_{\rm DM}=0.20\pm 0.07$, is similar to local ETGs. We place a constraint on the SMBH mass, $M_{\rm BH} = (5\pm 4)\times 10^{9}\, M_{\odot}$, with a 90% upper limit of $M_{\rm BH} = 1.4\times 10^{10}\, M_{\odot}$, consistent with a recent SD measurement. We obtain a halo concentration $(c_{200}=17.5\pm 6.7)$ and mass [$M_{200} = (9.6\pm 3.7)\times 10^{12}\, M_{\odot}$], where $c_{200}$ exceeds the mean $\Lambda$CDM value ($\approx 7$), consistent with a system that formed earlier than the general halo population. We suggest that these galaxies, which reside in group-scale halos, should be classified as fossil groups. 
(Abridged) The Extremely High Dark Matter Halo Concentration of the Relic Compact Elliptical Galaxy Mrk 1216 Compact elliptical galaxies (CEGs) are candidates for local analogs of the high-redshift "red nuggets" thought to represent the progenitors of today's early-type galaxies (ETGs). To address whether the structure of the dark matter (DM) halo in a CEG also reflects the extremely quiescent and isolated evolution of its stars, we use a new $\approx 122$ ks Chandra observation together with a shallow $\approx 13$ ks archival observation of the CEG Mrk 1216 to perform a hydrostatic equilibrium analysis of the luminous and relaxed X-ray plasma emission extending out to a radius $0.85r_{2500}$. We examine several DM model profiles and in every case obtain a halo concentration $(c_{200})$ that is a large positive outlier in the theoretical $\Lambda$CDM $c_{200}-M_{200}$ relation; i.e., ranging from $3.4\,\sigma - 6.3\, \sigma$ above the median $\Lambda$CDM relation in terms of the intrinsic scatter. The high value of $c_{200}$ we measure implies an unusually early formation time that firmly establishes the relic nature of the DM halo in Mrk 1216. The highly concentrated DM halo leads to a higher DM fraction and smaller total mass slope at $1R_e$ compared to nearby normal ETGs. In addition, the highly concentrated total mass profile of Mrk 1216 cannot be described by MOND without adding DM, and it deviates substantially from the Radial Acceleration Relation. Mrk 1216 contains $\approx 80\%$ of the cosmic baryon fraction within $r_{200}$. The radial profile of the ratio of cooling time to free-fall time varies within a narrow range $(t_c/t_{\rm ff}\approx 14-19)$ over a large central region ($r\le 10$ kpc) suggesting "precipitation-regulated AGN feedback" for a multiphase plasma, though presently there is little evidence for cool gas in Mrk 1216. The properties of Mrk 1216 are remarkably similar to those of the nearby fossil group NGC 6482. 
JAVELIN analysis of reverberation mapping data from the Lick AGN Monitoring Project 2011 Brandel, Andrew Peter In 2011 Lick Observatory carried out a 2.5-month reverberation mapping campaign using the 3-meter Shane telescope, monitoring 15 low-redshift galaxies. The goal was to determine the black hole mass for each of these galaxies. My job was to use the JAVELIN software package to determine the size of the Broad Line Region around each of these objects and compare the results to those calculated using other methods. Here I present my findings for the 6 targets for which JAVELIN found lag times for at least one of the emission lines. I also include the results using CCF, which are in the process of being published. Precision Black Hole Mass Measurement in Luminous Early-Type Galaxies Boizelle, Benjamin David The supermassive black hole (BH) census remains very incomplete at the highest masses ($\gtrsim10^9$ $M_\sun$), limiting our understanding of BH/host galaxy co-evolution for the most massive galaxies. I present analysis of circumnuclear ionized atomic and both warm and cold molecular gas kinematics, as well as results of detailed gas-dynamical modeling, for a sample of nearby, luminous early-type galaxies (ETGs). Keck OSIRIS integral-field unit and \textit{Hubble Space Telescope} (\textit{HST}) STIS spectroscopy reveal H$_2$ 1$-$0 S(1) and H$\alpha$ in approximately Keplerian rotation within \mbox{NGC 1275}. We build and optimize models of dynamically-warm disk rotation to better constrain its BH mass $M_{\rm BH}=(1.20^{+0.32}_{-0.44})\times10^9$ $M_\sun$, although significant turbulent velocity dispersion complicates model fitting. Dynamically cold molecular gas emission is free from nearly all of the systematics that plague $M_{\rm BH}$ determination when using dynamically warm tracers.
I present Atacama Large Millimeter/submillimeter Array (ALMA) $\sim0\farcs3-$resolution CO(2$-$1) and continuum imaging of a sample of early-type galaxies (ETGs) that host circumnuclear dusty disks. Given their incredible data sensitivities, these dynamically cold disks yield the most sensitive possible tracers of the central gravitational potential. In several targets we detect central velocity upturns that suggest unresolved Keplerian rotation. We find these disks are formally stabilized against fragmentation and gravitational collapse. The continuum emission is in all cases dominated by a central, unresolved source and appears to be consistent with hot accretion. I present follow-up, $0\farcs1-$resolution CO(2$-$1) imaging of one of these promising ETGs -- NGC 3258 -- to highly resolve its BH sphere of influence and map out the molecular kinematics in exquisite detail. Its very regular, nearly symmetric rotation enables the most detailed thin disk modeling to date. Moderate kinematic twists are incorporated using tilted-ring formalism, and the extended mass profile is amended to allow for a variable stellar mass density profile. Preliminary model optimization returns $M_{\rm BH}=2.23\times10^9$ $M_\sun$ with an anticipated total uncertainty of around 5\%. No evidence for [O iii] variability in Mrk 142 Bentz, Misty C Using archival data from the 2008 Lick AGN Monitoring Project, Zhang & Feng (2016) claimed to find evidence for flux variations in the narrow [O III] emission of the Seyfert 1 galaxy Mrk 142 over a two-month time span. If correct, this would imply a surprisingly compact size for the narrow-line region. We show that the claimed [O III] variations are merely the result of random errors in the overall flux calibration of the spectra. The data do not provide any support for the hypothesis that the [O III] flux was variable during the 2008 monitoring period. 
A SEARCH FOR OPTICAL VARIABILITY OF TYPE 2 QUASARS IN SDSS STRIPE 82 Voevodkin, Alexey Carson, Daniel J Woźniak, Przemysław Hundreds of Type 2 quasars have been identified in Sloan Digital Sky Survey (SDSS) data, and there is substantial evidence that they are generally galaxies with highly obscured central engines, in accord with unified models for active galactic nuclei (AGNs). A straightforward expectation of unified models is that highly obscured Type 2 AGNs should show little or no optical variability on timescales of days to years. As a test of this prediction, we have carried out a search for variability in Type 2 quasars in SDSS Stripe 82 using difference-imaging photometry. Starting with the Type 2 AGN catalogs of Zakamska et al. and Reyes et al., we find evidence of significant g-band variability in 17 out of 173 objects for which light curves could be measured from the Stripe 82 data. To determine the nature of this variability, we obtained new Keck spectropolarimetry observations for seven of these variable AGNs. The Keck data show that these objects have low continuum polarizations (p ≲ 1% in most cases) and all seven have broad Hα and/or Mg II emission lines in their total (unpolarized) spectra, indicating that they should actually be classified as Type 1 AGNs. We conclude that the primary reason variability is found in the SDSS-selected Type 2 AGN samples is that these samples contain a small fraction of Type 1 AGNs as contaminants, and it is not necessary to invoke more exotic possible explanations such as a population of "naked" or unobscured Type 2 quasars. Aside from misclassified Type 1 objects, the Type 2 quasars do not generally show detectable optical variability over the duration of the Stripe 82 survey. © 2014. The American Astronomical Society. All rights reserved.
Constraints on the broad line region from regularized linear inversion: velocity–delay maps for five nearby active galactic nuclei Skielboe, Andreas Pancoast, Anna Treu, Tommaso Park, Daeseong Reverberation mapping probes the structure of the broad emission-line region (BLR) in active galactic nuclei (AGN). The kinematics of the BLR gas can be used to measure the mass of the central supermassive black hole. The main uncertainty affecting black hole mass determinations is the structure of the BLR. We present a new method for reverberation mapping based on regularized linear inversion (RLI) that includes modelling of the AGN continuum light curves. This enables fast calculation of velocity-resolved response maps to constrain BLR structure. RLI allows for negative response, such as when some areas of the BLR respond in inverse proportion to a change in ionizing continuum luminosity. We present time delays, integrated response functions, and velocity-delay maps for the $\rm{H}\,\beta$ broad emission line in five nearby AGN, as well as for $\rm{H}\,\alpha$ and $\rm{H}\,\gamma$ in Arp 151, using data from the Lick AGN Monitoring Project 2008. We find indications of prompt response in three of the objects (Arp 151, NGC 5548 and SBS 1116+583A) with additional prompt response in the red wing of $\rm{H}\,\beta$. In SBS 1116+583A we find evidence for a multimodal broad prompt response followed by a second narrow response at 10 days. We find no clear indications of negative response. The results are complementary to, and consistent with, other methods such as cross correlation, maximum entropy and dynamical modelling. Regularized linear inversion with continuum light curve modelling provides a fast, complementary method for velocity-resolved reverberation mapping and is suitable for use on large datasets. 
On the Virialization of Disk Winds: Implications for the Black Hole Mass Estimates in AGN Kashi, Amit Proga, Daniel Nagamine, Kentaro Greene, Jenny Estimating the mass of a supermassive black hole (SMBH) in an active galactic nucleus (AGN) usually relies on the assumption that the broad line region (BLR) is virialized. However, this assumption seems invalid in BLR models that consist of an accretion disk and its wind. The disk is likely Keplerian and therefore virialized. However, the wind material must, beyond a certain point, be dominated by an outward force that is stronger than gravity. Here, we analyze hydrodynamic simulations of four different disk winds: an isothermal wind, a thermal wind from an X-ray heated disk, and two line-driven winds, one with and the other without X-ray heating and cooling. For each model, we check whether gravity governs the flow properties by computing and analyzing the volume-integrated quantities that appear in the virial theorem: internal, kinetic, and gravitational energies. We find that in the first two models, the winds are non-virialized, whereas the two line-driven disk winds are virialized up to a relatively large distance. The line-driven winds are virialized because they accelerate slowly, so that the rotational velocity is dominant, and the wind base is very dense. For the two virialized winds, the so-called projected virial factor scales with inclination angle as $1/ \sin^2{i}$. Finally, we demonstrate that an outflow from a Keplerian disk becomes unvirialized more slowly when it conserves the gas specific angular momentum (as in the models considered here) than when it conserves the angular velocity (as in so-called magneto-centrifugal winds).
Sampling distributions/Self-check assessment Use the following quiz questions to check your understanding of sampling distributions. Note that as soon as you have indicated your response, the question is scored and feedback is provided. As feedback is provided for each option, you may find it useful to try all of the responses (both correct and incorrect) to read the feedback, as a way to better understand the concept. Sampling distributions of a sample mean The mean and sd of [math]\bar{x}[/math] In an article in the Journal of American Pediatric Health researchers claim that the weights of healthy babies born in the United States form a distribution that is nearly Normal with an average weight of 7.25 pounds and standard deviation of 1.75 pounds.[1] Suppose a researcher selects 50 random samples with 30 newborns in each sample. What is the best estimate for the mean of the sample means? That's correct. The mean of the sample means is an unbiased estimate of the population mean; therefore, the best estimate of the mean of the sample means (i.e., the mean of the sampling distribution of the mean) is 7.25 pounds. That's not quite right. This is the population standard deviation. Consider that the mean of the sample means is an unbiased estimate of the population mean. Try again. That's not quite right. This is the number of samples. Consider that the mean of the sample means is an unbiased estimate of the population mean. Try again. That's not quite right. This is the size of each of the samples. Consider that the mean of the sample means is an unbiased estimate of the population mean. Try again. unable to determine from the information provided That's not quite right. There is enough information to estimate the mean of the sampling distribution of means. Try again. What is the best estimate of the standard deviation of the sample means? That's not quite right. This is the population standard deviation.
What is the standard deviation of the sampling distribution of the means? Recall that it varies depending on the sample size. Try again. That's correct. The standard deviation of the sample means is calculated by dividing the population standard deviation by the square root of the sample size: σ/sqrt(n) = 1.75/sqrt(30) = 1.75/5.48 = 0.32. That's not quite right. In the denominator of your calculation, you may have used the number of samples rather than the sample size. Try again. That's not quite right. There is enough information to determine the standard deviation of the sampling distribution of means. Try again. If we randomly selected 30 newborns from the full population of US newborns, would you be surprised if their mean weight was 8.30 pounds? Yes, a mean of 8.30 pounds would be surprising as this sample result is more than 3 standard deviations above the overall mean weight of 7.25. That's correct. As the population is Normally distributed we know that the sampling distribution will be Normally distributed, regardless of sample size, so we can use the Standard Deviation Rule to evaluate the particular sample result. Three standard deviations above 7.25 is 7.25 + 3(1.75/sqrt(30)) = 8.21. So 8.30 pounds is more than 3 standard deviations above the mean. An observation in this region occurs only 0.15% of the time...a rare event indeed. No, a mean of 8.30 pounds would not be surprising as this sample result is within 2 standard deviations of the overall mean weight of 7.25. That's not quite right. Although 8.30 is only 1.05 pounds greater than 7.25, and random samples do result in some variability in sample means, to determine if 8.30 is surprising, you will need to calculate the standard deviation of the sampling distribution of means, and if appropriate, use the Standard Deviation rule to assess how likely it would be to observe a mean of 8.30 in this sampling distribution. Try again.
Yes, a mean of 8.30 pounds would be surprising because this sample result is 1.05 pounds greater than the overall mean weight of 7.25. That's not quite right. It is true that 8.30 pounds is 1.05 pounds greater than 7.25 pounds, but we have to assess this difference relative to the standard deviation of the sampling distribution of sample means to determine if this sample mean is surprising. You will need to calculate the standard deviation of the sampling distribution of means, and if appropriate, use the Standard Deviation rule to assess how likely it would be to observe a mean of 8.30 in this sampling distribution. Try again. No, a mean of 8.30 pounds would not be surprising because this sample result is only 1.05 pounds greater than the overall mean weight of 7.25, and random samples do result in some variability in sample means. That's not quite right. It is true that 8.30 pounds is 1.05 pounds greater than 7.25 pounds, and we do expect some variability, but we have to assess this difference relative to the standard deviation of the sampling distribution of sample means to determine if this sample mean is surprising. You will need to calculate the standard deviation of the sampling distribution of means, and if appropriate, use the Standard Deviation rule to assess how likely it would be to observe a mean of 8.30 in this sampling distribution. Try again. Regardless of the shape of the parent population, the sampling distribution of the mean approaches a normal distribution as sample size increases.[2] That's correct. Although counter-intuitive, the sampling distribution of the mean approaches a normal distribution as sample size increases. This is an important part of the "Central Limit Theorem." That's not quite right. Although counter-intuitive, the sampling distribution of the mean approaches a normal distribution as sample size increases. This is an important part of the "Central Limit Theorem." 
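The newborn-weight questions above can be checked numerically. Below is a minimal Python sketch using only the standard library, with the stated population values μ = 7.25 and σ = 1.75 and sample size n = 30 (variable names are ours, not from the quiz):

```python
import math
from statistics import NormalDist

mu, sigma, n = 7.25, 1.75, 30      # population mean/sd and sample size

# Mean and standard deviation (standard error) of the sampling distribution of x-bar
se = sigma / math.sqrt(n)          # ≈ 0.32
upper_3sd = mu + 3 * se            # 3-standard-deviation bound ≈ 8.21

# Exact tail probability of a sample mean of 8.30 or more,
# using the Normal sampling distribution of x-bar
sampling_dist = NormalDist(mu=mu, sigma=se)
p_tail = 1 - sampling_dist.cdf(8.30)
```

Since 8.30 lies beyond the 3-standard-deviation bound of 8.21, the exact tail probability comes out even smaller than the 0.15% upper bound quoted from the Standard Deviation Rule.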
In 2009 the mean annual salary for teachers in the US was $49,720 with a standard deviation of $7200. The distribution is strongly skewed to the right. Consider the question: what is the probability that the mean annual salary of a random sample of 5 US teachers is more than $60,000? What should be considered before calculating this probability? Are there any serious concerns with calculating the probability? If not, calculate the probability. Given that the population distribution is known to be skewed, before calculating the probability we should consider whether the central limit theorem will guarantee that the distribution of sample means will be approximately Normal. However, to provide such a guarantee the central limit theorem requires a sample size larger than 5 (at least of size 30 for a strongly skewed distribution). We cannot calculate the probability using the methods based on a Normal distribution of the sample mean. In 2009 the mean annual salary for teachers in the US was $49,720 with a standard deviation of $7200.
The distribution is strongly skewed to the right. What is the probability that the mean annual salary of a random sample of 65 US teachers is less than $48,000? Let's walk through the computations required to calculate this probability. Is it safe to use the Normal distribution to determine this probability? That's correct. According to the central limit theorem, the mean has approximately a Normal distribution when the sample size is large enough, and a sample size of 65 is large enough. It is safe to use the Normal distribution to calculate this probability. That's not quite right. According to the central limit theorem, the mean has approximately a Normal distribution when the sample size is large enough, and a sample size of 65 is large enough. It is safe to use the Normal distribution to calculate this probability. The mean of the sampling distribution of sample means is That's correct. According to the central limit theorem, the mean has approximately a Normal distribution with the same mean as the population; therefore, $49,720 is the mean of the distribution of the sample means. That's not quite right. This is the value which delineates the probability to be calculated, rather than the mean of the distribution of sample means. Recall that the central limit theorem says that the mean has approximately a Normal distribution with the same mean as the population; therefore, $49,720 is the mean of the distribution of the sample means. The standard deviation of the sampling distribution of sample means is That's correct. According to the central limit theorem, the mean has approximately a Normal distribution with the same mean as the population and a standard deviation of σ/sqrt(n) = 7200/sqrt(65) = 7200/8.06 = 893. $7200. That's not quite right. It looks like you selected the population standard deviation. The standard deviation of the sampling distribution of the mean is calculated using the population standard deviation and the sample size. Try again.
Use the information provided to calculate the z-score. z = That's not quite right. Recall that the z-score is (value - mean)/standard deviation. When calculating the z-score, be sure to use the calculated standard deviation of the sampling distribution of the sample mean in the denominator. Try again. -.24 That's not quite right. Recall that the z-score is (value - mean)/standard deviation. When calculating the z-score, be sure 1) to subtract the population mean from the sample value of interest, and 2) to use the calculated standard deviation of the sampling distribution of the sample mean in the denominator. Try again. That's not quite right. Recall that the z-score is (value - mean)/standard deviation. When calculating the z-score, be sure to subtract the population mean from the sample value of interest. Try again. That's correct. The z-score for 48000 is z = (48000 - 49720)/(7200/sqrt(65)) = -1.93. Use the z-score and a Normal distribution calculator to determine the probability that the mean annual salary of a random sample of 65 US teachers is less than $48,000. That's correct. P(X-bar < 48,000) = P(Z < -1.93) = .0268. While many teachers likely have an annual salary less than $48,000, it would be unlikely for the mean salary of a sample of 65 teachers to be less than $48,000. That's not quite right. You may have found the probability for the z-score of -.24. Try again. That's not quite right. You may have found the probability for the z-score of .24. Try again. That's not quite right. You may have found the probability for the z-score of 1.93, instead of -1.93. Try again. In 2011, scores on the critical reading portion of the SAT (SAT-CR) were approximately Normally distributed with mean μ = 496 and standard deviation σ = 114. Consider the question: What is the probability that the mean SAT-CR score of a random sample of 3 test-takers from 2011 is more than 600? What should be considered before calculating this probability? Are there any serious concerns with calculating the probability?
If not, calculate the probability. Given that the population distribution is approximately Normal, the distribution of sample means will also be approximately Normal, for any sample size. There are no concerns with calculating this probability. The mean of the sampling distribution of sample means is the same as the population mean, μ = 496. The standard deviation is σ/sqrt(n) = 114/sqrt(3) = 65.82. The z-score for 600 is z = (600 - 496)/65.82 = 1.58. P(X-bar > 600) = P(Z > 1.58) = P(Z < -1.58) = .0571. While it is common for individual students to score above 600 on the SAT-CR, it is rather unlikely (less than a 6% chance) for the mean score of a sample of 3 students to be above 600. Sampling distributions for counts and proportions The mean and sd of [math]\hat{p}[/math] Out of 300 students in the school, 225 passed an exam. What would be the mean of the sampling distribution of the proportion of students who passed the exam in the school?[3] .75 (to two decimals) That's correct. The mean of the sampling distribution of [math]\hat{p}[/math] is equal to the population proportion: 225/300 = .75. That's not quite right. Note that the population in this scenario is the full student body of 300 students. The samples which make up the sampling distribution would be drawn from this population. Consider how the mean of the sampling distribution relates to the mean of the population. Try again. Out of 300 students in the school, 225 passed an exam. You take a sample of 10 of these students. What is the standard deviation of the distribution of sample proportions?[4] .137 (to 3 decimal places) That's correct. The standard deviation of the distribution of sample proportions = [math]\sqrt{ \frac{p(1-p)}{n}} = \sqrt{ \frac{(.75)(.25)}{10}} = .137[/math] That's not quite right. The standard deviation of the distribution of sample proportions can be calculated from the population proportion and the sample size. Try again.
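The sample-mean walkthroughs above (teacher salaries and SAT-CR scores) and the [math]\hat{p}[/math] formulas all follow the same recipe: compute the standard error, then use a Normal calculator. Below is a minimal sketch in Python, using the standard library's NormalDist in place of an online Normal calculator (the helper function names are ours, not from the quiz):

```python
import math
from statistics import NormalDist

def mean_sampling_dist(mu, sigma, n):
    """Mean and standard deviation of the sampling distribution of x-bar."""
    return mu, sigma / math.sqrt(n)

def prop_sampling_dist(p, n):
    """Mean and standard deviation of the sampling distribution of p-hat."""
    return p, math.sqrt(p * (1 - p) / n)

# Teacher salaries: P(x-bar < 48,000) for mu = 49,720, sigma = 7,200, n = 65
m, se = mean_sampling_dist(49720, 7200, 65)
p_salary = NormalDist(m, se).cdf(48000)       # ≈ .027 (z ≈ -1.93)

# SAT-CR: P(x-bar > 600) for mu = 496, sigma = 114, n = 3
m, se = mean_sampling_dist(496, 114, 3)
p_sat = 1 - NormalDist(m, se).cdf(600)        # ≈ .057 (z ≈ 1.58)

# Exam passing: sd of p-hat for p = .75, n = 10
_, sd_phat = prop_sampling_dist(0.75, 10)     # ≈ .137
```

The exact Normal probabilities differ slightly from the table-based answers (.0268 and .0571) only because the z-scores in the quiz are rounded to two decimal places.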
According to the National Student Clearinghouse Research Center, 45 percent of all students who finished a four-year degree in 2010-11 had previously enrolled at a two-year college.[5] We wish to randomly sample students who finished a four-year degree in 2010-2011 to determine the proportion who had previously enrolled at a two-year college. For which of the following sample sizes is the Normal model a good fit for the sampling distributions of the sample proportions? That's not quite right. Neither np nor n(1-p) is greater than or equal to 10: np = (10)(0.45) = 4.5 and n(1 - p) = (10)(0.55) = 5.5. A Normal distribution is not a good fit for the sampling distribution of sample proportions. Try again. That's not quite right. One of the two conditions is not met. Both np and n(1-p) must be greater than or equal to 10: n(1 - p) = (20)(0.55) = 11 is greater than 10, but np = (20)(0.45) = 9 is less than 10. A Normal distribution is not a good fit for the sampling distribution of sample proportions. Try again. That's correct. Both np and n(1-p) are greater than 10: np = (30)(0.45) = 13.5 and n(1 - p) = (30)(0.55) = 16.5. A Normal distribution is a good fit for the sampling distribution of sample proportions. That's not quite right. If the smaller sample size met the conditions for p=.45, then this would be correct. Be sure to check that both of the following are true: np ≥ 10 and n(1 - p) ≥ 10. Try again. We decide to randomly sample 50 students who finished a four-year degree in 2010-2011 to determine the proportion who had previously enrolled at a two-year college. What is the mean and standard deviation of the sampling distribution of sample proportions? The mean of the sampling distribution is 0.45. The standard deviation is [math]\sqrt{ \frac{p(1-p)}{n}} = \sqrt{ \frac{(.45)(.55)}{50}} = .07[/math] Understanding the sampling distribution of [math]\hat{p}[/math] The test specifications for a math test require that 20% of the test questions relate to geometry. 
Two tests are assembled by randomly choosing test questions from a pool of over 1000 questions in which geometry questions make up 20%. The first test has 50 questions (long test) and the second test has 10 questions (short test). Which test is more likely to have more than 40% geometry questions? The long test because there are more test questions on this test, so there will be more geometry questions. That's not quite right. While it is true that a longer test will tend to have more geometry questions, it also has more test questions overall. We are interested in the proportion of questions which relate to geometry, not the overall count. Try again. The long test because there is more variability in the proportion of geometry questions among larger samples. That's not quite right. Recall that the larger the sample size, the less variable the sampling distribution. Try again. The short test because there is more variability in the proportion of geometry questions among smaller samples. That's correct. When samples are small, there is more variability among the samples, so it is more likely to get sample results further from p=.20 in the short test. As both tests are based on random samples, they have the same chance of having more than 40% geometry questions. That's not quite right. In random sampling, the variability of the sample results is directly related to the size of the sample. Try again. From a pool of 1000 questions, in which geometry questions make up 20%, the test developers randomly sample 50 questions 5 times to make 5 long tests (50 questions each). (The same question may be included on more than 1 test.) The following sequences show the percent of geometry questions on each of the 5 tests. Which sequence is the most plausible? 22%, 20%, 32%, 18%, 25% That's correct. We can assume the sampling distribution is Normally distributed as both np and n(1-p) are greater than or equal to 10.
Using the standard deviation rule, we would expect that about 2/3rds of the p-hats would be within 1 standard deviation of the mean p=.20. The standard deviation is about .06. Of the 5 samples (tests), 4 are within .06 of p=.20. 5%, 73%, 22%, 88%, 56% That's not quite right. If it is safe to assume that the sampling distribution is Normally distributed, then we can use the standard deviation rule to establish the bounds for the most likely sample results: we would expect about 2/3rds of the sample proportions to be within 1 standard deviation of the mean. In these results, 3 of the samples are over 3 standard deviations from the mean. Try again. That's not quite right. When randomly sampling from a population with mean proportion p=.20, we expect that many of the samples drawn will be the same or close to the population proportion; however, it is rather unlikely that all 5 of the tests would have exactly 20% geometry questions. It is more likely that the percent of geometry questions in the 5 tests is mostly close to 20%. Try again. It is not safe to use the standard deviation rule to evaluate these results. That's not quite right. In fact, we can assume the sampling distribution is Normally distributed as both np and n(1-p) are greater than or equal to 10: np = (50)(.20) = 10 and n(1-p) = (50)(.8) = 40. Using the standard deviation rule, we would expect that about 2/3rds of the p-hats would be within 1 standard deviation of the mean p=.20. Try again. Using Normal distribution calculations with the sampling distribution of [math]\hat{p}[/math] The National Institute of Mental Health reports that approximately 10 percent of American adults suffer from depression or a depressive illness.[7] A random sample of 210 American adults is obtained. What can we assume about the sampling distribution of the sample proportion, [math]\hat{p}[/math]?
Given that both np and n(1-p) are greater than or equal to 10, np = (210)(.1) = 21 and n(1-p) = (210)(.9) = 189, we can assume that the sampling distribution is a Normal distribution with mean μ[math]\hat{p}[/math]=.10 and σ[math]\hat{p}[/math] = sqrt(p(1-p)/n) = sqrt(.1(1-.1)/210) = .02. If the sampling distribution has a Normal distribution, we can use the standard deviation rule to better understand the distribution. What interval is almost certain (probability .997) to contain the sample proportion of adults who suffer from depression or depressive illness? As we can assume that the sampling distribution is Normally distributed, the standard deviation rule says that 99.7% of observations fall within 3 standard deviations above and below the mean. For a sampling distribution with mean μ[math]\hat{p}[/math]=.1 and σ[math]\hat{p}[/math] = .02: .1 + 3*.02 = .16 and .1 - 3*.02 = .04. There is roughly a 99.7% chance that the sample proportion falls in the interval (.04, .16). (This question needs reworking) For what percent of samples of 210 adults from the population would we expect to find 35 or more adults with depression or a depressive illness? Answer: .15% (to two decimal places) That's correct. In the last question we established that in the sampling distribution 99.7% of samples will have a sample proportion between .04 and .16. As 35 out of the 210 adults corresponds to a sample proportion of .167, at the boundary of .16 (3 standard deviations above the mean), we conclude that the area above this sample proportion is the upper tail beyond 3 standard deviations above the mean. According to the standard deviation rule, sample proportions greater than 0.16 will occur 0.15% of the time: (100% - 99.7%) / 2 = 0.15%. That's correct. In the last question we established that in the sampling distribution 99.7% of samples will have a sample proportion between .04 and .16.
As 35 out of the 210 adults in the sample is .17, we conclude that the area above the proportion in this sample is in the upper tail beyond 3 standard deviations above the mean. In particular it is the area above z=(0.167-0.10)/ 0.02 = 3.35. The area under the Normal curve above 3.35 is .04% That's not quite right. In the last question we established that in the sampling distribution 99.7% of samples will have a sample proportion between .04 and .16. How does the sample proportion in this sample compare with this range? Use the standard deviation rule to determine the area under the curve which corresponds to the area in the problem. Try again. That's not quite right. In the last question we established that in the sampling distribution 99.7% of samples will have a sample proportion between .04 and .16. How does the sample proportion in this sample compare with this range? Calculate the z-score and determine the area under the curve (in percents). Try again. What is the probability that at least 25 in 210 (proportion .12) adults suffer from depression or a depressive illness: P(p-hat > .12)? (Note that p-hat = .12 is 1 standard deviation (.02) above the mean (.1), which means you can use the standard deviation rule to estimate this probability.) Answer: .16 (to two decimal places) That's correct. The standard deviation rule tells us that there is a 68% chance that the sample proportion will be within 1 standard deviation of the mean: between .08 and .12. P(p-hat > .12) = (1 -.68)/2 = .16. That's not quite right. The standard deviation rule tells us that there is a 68% chance that the sample proportion will be within 1 standard deviation of the mean: between .08 and .12. The probability that we are interested in is represented by the area under the curve above .12. Try again. ↑ Question adapted from Ebook Problem Set - Normal Std, Problem 20 in Probability and Statistics EBook, from UCLA Statistics Online Computational Resource (SOCR), Retrieved 12 November 2012. 
↑ Adapted from Central Limit Theorem Demonstration at Online Statistics Education: An Interactive Multimedia Course of Study. Project Leader: David M. Lane, Rice University. Retrieved 25 November 2012. ↑ Adapted from Sampling distribution of p at Online Statistics Education: An Interactive Multimedia Course of Study. Project Leader: David M. Lane, Rice University. Retrieved 1 December 2012. ↑ http://www.studentclearinghouse.info/snapshot/docs/SnapshotReport6-TwoYearContributions.pdf ↑ Obtained from Susan Dean and Barbara Illowsky, Hypothesis Testing of Single Mean and Single Proportion: Practice 3 in Collaborative Statistics at Connexions. Retrieved 3 December 2012. Retrieved from "https://wikieducator.org/index.php?title=Sampling_distributions/Self-check_assessment&oldid=832756"
On the mechanism of open-loop control of thermoacoustic instability in a laminar premixed combustor Amitesh Roy, Sirshendu Mondal, Samadhan A. Pawar, R. I. Sujith Journal: Journal of Fluid Mechanics / Volume 884 / 10 February 2020 Published online by Cambridge University Press: 03 December 2019, A2 Print publication: 10 February 2020 We identify mechanisms through which open-loop control of thermoacoustic instability is achieved in a laminar combustor and characterize them using synchronization theory. The thermoacoustic system comprises two nonlinearly coupled damped harmonic oscillators – acoustic and unsteady heat release rate (HRR) field – each possessing different eigenfrequencies. The frequency of the preferred mode of HRR oscillations is less than the third acoustic eigenfrequency where thermoacoustic instability develops. We systematically subject the limit-cycle oscillations to an external harmonic forcing at different frequencies and amplitudes. We observe that forcing at a frequency near the preferred mode of the HRR oscillator leads to a greater than 90 % decrease in the amplitude of the limit-cycle oscillations through the phenomenon of asynchronous quenching. Concurrently, there is a resonant amplification in the amplitude of HRR oscillations. Further, we show that the flame dynamics plays a key role in controlling the frequency at which quenching is observed. Most importantly, we show that forcing can cause asynchronous quenching either by imposing out-of-phase relation between pressure and HRR oscillations or by inducing period-2 dynamics in pressure oscillations while period-1 in HRR oscillations, thereby causing phase drifting between the two subsystems. In each of the two cases, acoustic driving is very low and hence thermoacoustic instability is suppressed.
We show that the characteristics of forced synchronization of the pressure and HRR oscillations are significantly different. Thus, we find that the simultaneous characterization of the two subsystems is necessary to quantify completely the nonlinear response of the forced thermoacoustic system. Experimental analysis of the effect of local base blowing on three-dimensional wake modes M. Lorite-Díez, J. I. Jiménez-González, L. Pastur, C. Martínez-Bazán, O. Cadot Journal: Journal of Fluid Mechanics / Volume 883 / 25 January 2020 Published online by Cambridge University Press: 28 November 2019, A53 Print publication: 25 January 2020 Wake modes of a three-dimensional blunt-based body near a wall are investigated at a Reynolds number $Re=10^{5}$. The targeted modes are the static symmetry-breaking mode and two antisymmetric periodic modes. The static mode orientation is aligned with the horizontal major $y$-axis of the base and randomly switches between a positive $P$ and a negative $N$ state leading to long-time bistable dynamics of the turbulent wake. The modifications of these modes are studied when continuous blowing is applied at different locations through four slits along the base edges (denoted L for left, R for right, T for top and B for bottom) in either four single asymmetric configurations or two double symmetric configurations (denoted LR and TB). Two regimes, referred to as mass and momentum, are clearly identifiable for all configurations. The mass regime, which is fairly insensitive to blowing momentum and location, is characterized by the growth of the recirculating bubble as the total injected flow rate is increased, and is associated with a base drag reduction and interpreted as resulting from the equilibrium between mass fluxes feeding and emptying the recirculating region. A simple budget model is shown to be in agreement with entrainment velocities measured for isolated turbulent mixing layers.
The strength of the static mode is reduced up to 20 % when the bubble length is maximum, whereas no change in the periodic mode frequencies is found. On the other hand, the momentum regime is characterized by the deflating of the recirculating bubble, leading to base drag increase, and it is interpreted by the free shear layer forcing, which increases the entrainment velocity, thus emptying the recirculating bubble. In this regime the static mode orientation is imposed by the blowing symmetry. Lateral L and R (respectively top/bottom T and B) blowing configurations select $P$ or $N$ states in the horizontal (respectively vertical) direction, while bistable dynamics persists for the symmetric LR and TB configurations. The shape of periodic modes follows the changes in wake static orientation. The transition between the two regimes is governed by both the total injected flow rate and the location of the injection. Forced synchronization of quasiperiodic oscillations in a thermoacoustic system Yu Guan, Vikrant Gupta, Minping Wan, Larry K. B. Li Journal: Journal of Fluid Mechanics / Volume 879 / 25 November 2019 Published online by Cambridge University Press: 27 September 2019, pp. 390-421 Print publication: 25 November 2019 In self-excited combustion systems, the application of open-loop forcing is known to be an effective strategy for controlling periodic thermoacoustic oscillations, but it is not known whether and under what conditions such a strategy would work on thermoacoustic oscillations that are not simply periodic. In this study, we experimentally examine the effect of periodic acoustic forcing on a prototypical thermoacoustic system consisting of a ducted laminar premixed flame oscillating quasiperiodically on an ergodic $\mathbb{T}^{2}$ torus at two incommensurate natural frequencies, $f_{1}$ and $f_{2}$.
Compared with that of a classical period-1 system, complete synchronization of this $\mathbb{T}_{1,2}^{2}$ system is found to occur via a more intricate route involving three sequential steps: as the forcing amplitude, $\epsilon_{f}$, increases at a fixed forcing frequency, $f_{f}$, the system transitions first (i) to ergodic $\mathbb{T}_{1,2,f}^{3}$ quasiperiodicity; then (ii) to resonant $\mathbb{T}_{1,f}^{2}$ quasiperiodicity as the weaker of the two natural modes, $f_{2}$, synchronizes first, leading to partial synchronization; and finally (iii) to a $P1_{f}$ limit cycle as the remaining natural mode, $f_{1}$, also synchronizes, leading to complete synchronization. The minimum $\epsilon_{f}$ required for partial and complete synchronization decreases as $f_{f}$ approaches either $f_{1}$ or $f_{2}$, resulting in two primary Arnold tongues. However, when forced at an amplitude above that required for complete synchronization, the system can transition out of $P1_{f}$ and into $\mathbb{T}_{1,2,f}^{3}$ or $\mathbb{T}_{2,f}^{2}$. The optimal control strategy is to apply off-resonance forcing at a frequency around the weaker natural mode ($f_{2}$) and at an amplitude just sufficient to cause $P1_{f}$, because this produces the largest reduction in thermoacoustic amplitude via asynchronous quenching. Analysis of the Rayleigh index shows that this reduction is physically caused by a disruption of the positive coupling between the unsteady heat release rate of the flame and the $f_{1}$ and $f_{2}$ acoustic modes. If the forcing is applied near the stronger natural mode ($f_{1}$), however, resonant amplification can occur.
We then phenomenologically model this $\mathbb{T}_{1,2}^{2}$ thermoacoustic system as two reactively coupled van der Pol oscillators subjected to external sinusoidal forcing, and find that many of its synchronization features – such as the three-step route to $P1_{f}$ , the double Arnold tongues, asynchronous quenching and resonant amplification – can be qualitatively reproduced. This shows that these features are not limited to our particular system, but are universal features of forced self-excited oscillators. This study extends the applicability of open-loop control from classical period-1 systems with just a single time scale to ergodic $\mathbb{T}^{2}$ quasiperiodic systems with two incommensurate time scales. Extension of classical stability theory to viscous planar wall-bounded shear flows Harry Lee, Shixiao Wang Journal: Journal of Fluid Mechanics / Volume 877 / 25 October 2019 Published online by Cambridge University Press: 02 September 2019, pp. 1134-1162 Print publication: 25 October 2019 A viscous extension of Arnold's inviscid theory for planar parallel non-inflectional shear flows is developed and a viscous Arnold's identity is obtained. Special forms of the viscous Arnold's identity have been revealed that are closely related to the perturbation's enstrophy identity derived by Synge (Proceedings of the Fifth International Congress for Applied Mechanics, 1938, pp. 326–332, John Wiley) (see also Fraternale et al., Phys. Rev. E, vol. 97, 2018, 063102). Firstly, an alternative derivation of the perturbation's enstrophy identity for strictly parallel shear flows is acquired based on the viscous Arnold's identity. The alternative derivation induces a weight function. Thereby, a novel weighted perturbation's enstrophy identity is established, which extends the previously known enstrophy identity to include general streamwise translation-invariant shear flows. 
Finally, the validity of the enstrophy identity for parallel shear flows is rigorously examined and established under global nonlinear dynamics imposed with two classes of wall boundary conditions. As an application of the enstrophy identity, we quantitatively investigate the mechanism of linear instability/stability within the normal modal framework. The investigation reveals a subtle interaction between a critical layer and its adjacent boundary layer, which determines the stability nature of the disturbance. As an implementation of the relaxed wall boundary conditions imposed for the enstrophy identity, a control scheme is proposed that transitions the wall settings from the no-slip condition to the free-slip condition, through which a flow is stabilized quickly in an early stage of the transition. Spatial stability analysis of subsonic corrugated jets F. C. Lajús, A. Sinha, A. V. G. Cavalieri, C. J. Deschamps, T. Colonius Published online by Cambridge University Press: 08 August 2019, pp. 766-791 The linear stability of high-Reynolds-number corrugated jets is investigated by solving the compressible Rayleigh equation linearized about the time-averaged flow field. A Floquet ansatz is used to account for periodicity of this base flow in the azimuthal direction. The origin of multiple unstable solutions, which are known to appear in these non-circular configurations, is traced through gradual perturbations of a parametrized base-flow profile. It is shown that all unstable modes are corrugated jet continuations of the classical Kelvin–Helmholtz modes of circular jets, highlighting that the same instability mechanism, modified by corrugations, leads to the growth of disturbances in such flows. It is found that under certain conditions the eigenvalues may form saddles in the complex plane and display axis switching in their eigenfunctions. A parametric study is also conducted to understand how penetration and number of corrugations impact stability. 
The effect of these geometric properties on growth rates and phase speeds of the multiple unstable modes is explored, and the results provide guidelines for the development of nozzle configurations that more effectively modify the Kelvin–Helmholtz instability. Feedback control of Marangoni convection in a thin film heated from below Anna E. Samoilova, Alexander Nepomnyashchy We use linear proportional control for the suppression of the Marangoni instability in a thin film heated from below. Our keen interest is focused on the recently revealed oscillatory mode caused by a coupling of two long-wave monotonic instabilities, the Pearson and deformational ones. Shklyaev et al. (Phys. Rev. E, vol. 85, 2012, 016328) showed that the oscillatory mode is critical in the case of a substrate of very low conductivity. To stabilize the no-motion state of the film, we apply two linear feedback control strategies based on the heat flux variation at the substrate. Strategy (I) uses the interfacial deflection from the mean position as the criterion of instability onset. Within strategy (II) the variable that describes the instability is the deviation of the measured temperatures from the desired, conductive values. We perform two types of calculations. The first one is the linear stability analysis of the nonlinear amplitude equations that are derived within the lubrication approximation. The second one is the linear stability analysis that is carried out within the Bénard–Marangoni problem for arbitrary wavelengths. Comparison of different control strategies reveals feedback control by the deviation of the free surface temperature as the most effective way to suppress the Marangoni instability. Optimal perturbations for controlling the growth of a Rayleigh–Taylor instability Ali Kord, Jesse Capecelatro A discrete adjoint-based method is employed to control multi-mode Rayleigh–Taylor (RT) instabilities via strategic manipulation of the initial interfacial perturbations. 
We seek to determine to what extent mixing and growth can be enhanced at late stages of the instability and which modes are targeted to achieve this. Three objective functions are defined to quantify RT mixing and growth: (i) variance of mole fraction, (ii) a kinetic energy norm based on the vertical velocity and (iii) variations of mole fraction with respect to the unperturbed initial state. The sensitivity of these objective functions to individual amplitudes of the initial perturbations is studied at various stages of the RT instability. The most sensitive wavenumber during the early stages of the instability closely matches the most unstable wavenumber predicted by linear stability theory. It is also shown that randomly changing the initial perturbations has little effect at early stages, but results in large variations in both RT growth and its sensitivity at later times. The sensitivity obtained from the adjoint solution is employed in gradient-based optimization to both suppress and enhance the objective functions. The adjoint-based optimization framework was capable of completely suppressing mixing by shifting all of the interfacial perturbation energy to the highest modes such that diffusion dominates. The optimal initial perturbations for enhancing the objective functions were found to have a broadband spectrum, resulting in non-trivial coupling between modes, and depend on the particular objective function being optimized. The objective functions were increased by as much as a factor of nine in the self-similar late-stage growth regime compared to an interface with a uniform distribution of modes, corresponding to a 32% increase in the bubble growth parameter and 54% increase in the mixing width. It was also found that the interfacial perturbations optimized at early stages of the instability are unable to predict enhanced mixing at later times, and thus optimizing late-time multi-mode RT instabilities requires late-time sensitivity information.
Finally, it was found that the optimized distribution of interfacial perturbations obtained from two-dimensional simulations was capable of enhancing the objective functions in three-dimensional flows. As much as 51% and 99% enhancement in the bubble growth parameter and mixing width, respectively, was achieved, even greater than what was reached in two dimensions. Stability of Poiseuille flow of a Bingham fluid overlying an anisotropic and inhomogeneous porous layer Sourav Sengupta, Sirshendu De Journal: Journal of Fluid Mechanics / Volume 874 / 10 September 2019 Print publication: 10 September 2019 Modal and non-modal stability analyses are performed for Poiseuille flow of a Bingham fluid overlying an anisotropic and inhomogeneous porous layer saturated with the same fluid. In the case of modal analysis, the resultant Orr–Sommerfeld type eigenvalue problem is formulated and solved via the Chebyshev collocation method, using QZ decomposition. It is found that no unstable eigenvalues are present for the problem, indicating that the flow is linearly stable. Therefore, non-modal analysis is attempted in order to observe the short-time response. For non-modal analysis, the initial value problem is solved, and the response of the system to initial conditions is assessed. The aim is to evaluate the effects on the flow stability of porous layer parameters in terms of depth ratio (ratio of the fluid layer thickness $d$ to the porous layer thickness $d_{m}$ ), Bingham number, Darcy number and slip coefficient. The effects of anisotropy and inhomogeneity of the porous layer on flow transition are also investigated. In addition, the shapes of the optimal perturbations are constructed. The mechanism of transient growth is explored to comprehend the complex interplay of various factors that lead to intermediate amplifications. 
The present analysis is perhaps the first attempt at analysing flow stability of viscoplastic fluids over a porous medium, and may lead to better and more efficient design of flow environments involving such flows. Flow instabilities in the wake of a circular cylinder with parallel dual splitter plates attached Rui Wang, Yan Bao, Dai Zhou, Hongbo Zhu, Huan Ping, Zhaolong Han, Douglas Serson, Hui Xu In this paper, instabilities in the flow over a circular cylinder of diameter $D$ with dual splitter plates attached to its rear surface are numerically investigated using the spectral element method. The key parameters are the splitter plate length $L$ , the attachment angle $\alpha$ and the Reynolds number $Re$ . The presence of the plates was found to significantly modify the flow topology, leading to substantial changes in both the primary and secondary instabilities. The results showed that the three instability modes present in the bare circular cylinder wake still exist in the wake of the present configurations and that, in general, the occurrences of modes A and B are delayed, while the onset of mode QP is earlier in the presence of the splitter plates. Furthermore, two new synchronous modes, referred to as mode A $^{\prime }$ and mode B $^{\prime }$ , are found to develop in the wake. Mode A $^{\prime }$ is similar to mode A but with a quite long critical wavelength. Mode B $^{\prime }$ shares the same spatio-temporal symmetries as mode B but has a distinct spatial structure. With the exception of the case of $L/D=0.25$ , mode A $^{\prime }$ persists for all configurations investigated here and always precedes the transition through mode A. The onset of mode B $^{\prime }$ occurs for $\alpha>20^{\circ }$ with $L/D=1.0$ and for $L/D>0.5$ with $\alpha=60^{\circ }$ .
The characteristics of all the transition modes are analysed, and their similarities and differences are discussed in detail in comparison with the existing modes. In addition, the physical mechanism responsible for the instability mode B $^{\prime }$ is proposed. The weakly nonlinear feature of mode B $^{\prime }$ , as well as that of mode A $^{\prime }$ , is assessed by employing the Landau model. Finally, selected three-dimensional simulations are performed to confirm the existence of these two new modes and to investigate the nonlinear evolution of the three-dimensional modes. Convection regimes induced by local boundary heating in a liquid–gas system Victoria B. Bekezhanova, A. S. Ovcharova Journal: Journal of Fluid Mechanics / Volume 873 / 25 August 2019 Published online by Cambridge University Press: 24 June 2019, pp. 441-458 Print publication: 25 August 2019 In the framework of the complete formulation of the conjugate problem, the liquid–gas flow structure arising upon local heating using thermal sources is investigated numerically. The two-layer system is confined by solid impermeable walls. The Navier–Stokes equations in the Boussinesq approximation in the 'streamfunction–vorticity' variables are used to describe the media motion. The dynamic conditions at the interface are formulated in terms of the tangential and normal velocities, while the temperature conditions at the external boundaries of the system take into account the presence of local heaters. The influence of the number of heaters and heating modes on the dynamics and character of the appearing convective regimes is analysed. The steady and commutated heating modes for one and two heaters arranged at the lower boundary are investigated. The heating initiates convective and thermocapillary mechanisms causing the fluid motion. 
Transient regimes with the successive formation of two-vortex, quadruple-vortex and two-vortex flows are observed before the stabilization of the system in the uniform heating mode. A stable thermocapillary deflection appears at the interface above the heater. The commutated mode of heating entails oscillations of the interface with a change in the deflection form and the formation of travelling vortices in the fluids. The impact of particular mechanisms on the flow patterns is analysed. The paper presents typical distributions of the velocity and temperature fields in the system and the position of the interface for the considered cases. Dynamic interactions of multiple wall-mounted flexible flaps Joseph O'Connor, Alistair Revell Journal: Journal of Fluid Mechanics / Volume 870 / 10 July 2019 Published online by Cambridge University Press: 08 May 2019, pp. 189-216 Print publication: 10 July 2019 Coherent waving interactions between vegetation and fluid flows are known to emerge under conditions associated with the mixing layer instability. A similar waving motion has also been observed in flow control applications, where passive slender structures are used to augment bluff body wakes. While their existence is well reported, the mechanisms which govern this behaviour, and their dependence on structural properties, are not yet fully understood. This work investigates the coupled interactions of a large array of slender structures in an open-channel flow, via numerical simulation. A direct modelling approach, whereby the individual structures are fully resolved, is realised via a lattice Boltzmann-immersed boundary-finite element model. For steady flow conditions at low–moderate Reynolds number, the response of the array is measured over a range of mass ratio and bending rigidity, spanning two orders of magnitude, and the ensuing response is characterised. 
The results show a range of behaviours which are classified into distinct states: static, regular waving, irregular waving and flapping. The regular waving regime is found to occur when the natural frequency of the array approaches the estimated frequency of the mixing layer instability. Furthermore, after normalising with respect to the natural frequency of the array, the frequency response across the examined parameter space collapses onto a single curve. These findings indicate that the coherent waving mode is in fact a coupled instability, as opposed to a purely fluid-driven response, and that this specific regime is triggered by a lock-in between the fluid and structural natural frequencies. Linear iterative method for closed-loop control of quasiperiodic flows Colin Leclercq, Fabrice Demourant, Charles Poussot-Vassal, Denis Sipp Published online by Cambridge University Press: 08 April 2019, pp. 26-65 This work proposes a feedback-loop strategy to suppress intrinsic oscillations of resonating flows in the fully nonlinear regime. The frequency response of the flow is obtained from the resolvent operator about the mean flow, extending the framework initially introduced by McKeon & Sharma (J. Fluid Mech., vol. 658, 2010, pp. 336–382) to study receptivity mechanisms in turbulent flows. Using this linear time-invariant model of the nonlinear flow, modern control methods such as structured ${\mathcal{H}}_{\infty }$ -synthesis can be used to design a controller. The approach is successful in damping self-sustained oscillations associated with specific eigenmodes of the mean-flow spectrum. Despite excellent performance, the linear controller is however unable to completely suppress flow oscillations, and the controlled flow is effectively attracted towards a new dynamical equilibrium. This new attractor is characterized by a different mean flow, which can in turn be used to design a second controller. 
The method can then be iterated on subsequent mean flows, until the coupled system eventually converges to the base flow. An intuitive parallel can be drawn with Newton's iteration: at each step, a linearized model of the flow response to a perturbation of the input is sought, and a new linear controller is designed, aiming at further reducing the fluctuations. The method is illustrated on the well-known case of two-dimensional incompressible open-cavity flow at Reynolds number $Re=7500$ , where the fully developed flow is initially quasiperiodic (2-torus state). The base flow is reached after five iterations. The present work demonstrates that nonlinear control problems may be solved without resorting to nonlinear reduced-order models. It also shows that physically relevant linear models can be systematically derived for nonlinear flows, without resorting to black-box identification from input–output data; the key ingredient being frequency-domain models based on the linearized Navier–Stokes equations about the mean flow. Applicability to amplifier flows and turbulent dynamics has, however, yet to be investigated. Resolvent-analysis-based design of airfoil separation control Chi-An Yeh, Kunihiko Taira Journal: Journal of Fluid Mechanics / Volume 867 / 25 May 2019 We use resolvent analysis to design active control techniques for separated flows over a NACA 0012 airfoil. Spanwise-periodic flows over the airfoil at a chord-based Reynolds number of $23\,000$ and a free-stream Mach number of $0.3$ are considered at two post-stall angles of attack of $6^{\circ }$ and $9^{\circ }$ . Near the leading edge, localized unsteady thermal actuation is introduced in an open-loop manner with two tunable parameters of actuation frequency and spanwise wavelength. 
To provide physics-based guidance for the effective choice of these control input parameters, we conduct global resolvent analysis on the baseline turbulent mean flows to identify the actuation frequency and wavenumber that provide large perturbation energy amplification. The present analysis also considers the use of a temporal filter to limit the time horizon for assessing the energy amplification to extend resolvent analysis to unstable base flows. We incorporate the amplification and response mode from resolvent analysis to provide a metric that quantifies momentum mixing associated with the modal structure. This metric is compared to the results from a large number of three-dimensional large-eddy simulations of open-loop controlled flows. With the agreement between the resolvent-based metric and the enhancement of aerodynamic performance found through large-eddy simulations, we demonstrate that resolvent analysis can predict the effective range of actuation frequency as well as the global response to the actuation input. We believe that the present resolvent-based approach provides a promising path towards mean flow modification by capitalizing on the dominant modal mixing. Drag reduction and instabilities of flows in longitudinally grooved annuli H. V. Moradi, J. M. Floryan Journal: Journal of Fluid Mechanics / Volume 865 / 25 April 2019 Print publication: 25 April 2019 The primary and secondary laminar flows in annuli with longitudinal grooves and driven by pressure gradients have been analysed. There exist geometric configurations reducing pressure losses in primary flows in spite of an increase of the wall wetted area. The parameter ranges when such flows exist have been determined using linear stability theory. Two types of secondary flows have been identified. The first type has the form of the classical travelling waves driven by shear and modified by the grooves. 
The axisymmetric waves dominate for sufficiently large radii of the annuli while different spiral waves dominate for small radii. The secondary flow topology is unique in the former case and has the form of axisymmetric rings propagating in the axial direction. Topologies in the latter case are not unique, as spiral waves with left and right twists can emerge under the same conditions, resulting in flow structures varying from spatial rings to rhombic forms. The most intense motion of this type occurs near the walls. The second type of secondary flow has the form of travelling waves driven by inertial effects with characteristics very distinct from the shear waves. Its critical Reynolds number increases proportionally to $S^{-2}$ , where $S$ denotes the groove amplitude, while the amplification rates increase proportionally to $S^{2}$ . These waves exist only if $S$ is above a well-defined minimum and their axisymmetric forms dominate, with the most intense motion occurring near the annulus mid-section. Geometries that give preference to the latter waves have been identified. It is shown that the drag-reducing topographies stabilize the classical travelling waves; these waves are driven by viscous shear, so reduction of this shear decreases their amplification. The same topographies destabilize the new waves; these waves are driven by an inviscid mechanism associated with the formation of circumferential inflection points, and an increase of the groove amplitude increases their amplification. The flow conditions when the presence of grooves can be ignored, i.e. the annuli can be treated as being hydraulically smooth, have been determined. Forced synchronization and asynchronous quenching of periodic oscillations in a thermoacoustic system Sirshendu Mondal, Samadhan A. Pawar, R. I. Sujith Published online by Cambridge University Press: 01 February 2019, pp. 
73-96 We perform an experimental and theoretical study to investigate the interaction between an external harmonic excitation and a self-excited oscillatory mode ( $f_{n0}$ ) of a prototypical thermoacoustic system, a horizontal Rijke tube. Such an interaction can lead to forced synchronization through the routes of phase locking or suppression. We characterize the transition in the synchronization behaviour of the forcing and the response signals of the acoustic pressure while the forcing parameters, i.e. amplitude ( $A_{f}$ ) and frequency ( $f_{f}$ ) of forcing are independently varied. Further, suppression is categorized into synchronous quenching and asynchronous quenching depending upon the value of frequency detuning ( $|\,f_{n0}-f_{f}|$ ). When the applied forcing frequency is close to the natural frequency of the system, the suppression in the amplitude of the self-excited oscillation is known as synchronous quenching. However, this suppression is associated with resonant amplification of the forcing signal, leading to an overall increase in the response amplitude of oscillations. On the other hand, an almost 80 % reduction in the root mean square value of the response oscillation is observed when the system is forced for a sufficiently large value of the frequency detuning (only for $f_{f}<f_{n0}$ ). Such a reduction in amplitude occurs due to asynchronous quenching where resonant amplification of the forcing signal does not occur, as the frequency detuning is significantly high. Further, the results from a reduced-order model developed for a horizontal Rijke tube show a qualitative agreement with the dynamics observed in experiments. The relative phase between the acoustic pressure ( $p^{\prime }$ ) and the heat release rate ( $\dot{q}^{\prime }$ ) oscillations in the model explains the occurrence of maximum reduction in the pressure amplitude due to asynchronous quenching. 
Such a reduction occurs when the positive coupling between $p^{\prime }$ and $\dot{q}^{\prime }$ is disrupted and their interaction results in overall acoustic damping, although both of them oscillate at the forcing frequency. Our study on the phenomenon of asynchronous quenching thus presents new possibilities to suppress self-sustained oscillations in fluid systems in general. Sensitivity analysis and passive control of the secondary instability in the wake of a cylinder F. Giannetti, S. Camarri, V. Citro The stability properties of selected flow configurations, usually denoted as base flows, can be significantly altered by small modifications of the flow, which can be caused, for instance, by a non-intrusive passive control. This aspect is amply demonstrated in the literature by ad hoc sensitivity studies which, however, focus on configurations characterised by a steady base flow. Nevertheless, several flow configurations of interest are characterised by a time-periodic base flow. To this purpose, we propose here an original theoretical framework suitable to quantify the effects of base-flow variations in the stability properties of saturated time-periodic limit cycles. In particular, starting from a Floquet analysis of the linearised Navier–Stokes equations and using adjoint methods, it is possible to estimate the variation of a selected Floquet exponent caused by a generic structural perturbation of the base-flow equations. This link is expressed concisely using the adjoint operators coming from the analysis, and the final result, when applied to spatially localised disturbances, is used to build spatial sensitivity and control maps. These maps identify the regions of the flow where the placement of an infinitesimally small object produces the largest effect on the Floquet exponent and may also provide a quantification of this effect.
Such analysis brings useful insights both for passive control strategies and for further characterising the investigated instability. As an example of application, the proposed analysis is applied here to the three-dimensional flow instabilities in the wake past a circular cylinder. This is a classical problem which has been widely studied in the literature. Nevertheless, by applying the proposed analysis we derive original results comprising a further characterisation of the instability and related control maps. We finally show that the control maps obtained here are in very good agreement with control experiments documented in the literature. A freely yawing axisymmetric bluff body controlled by near-wake flow coupling Thomas J. Lambert, Bojan Vukasinovic, Ari Glezer Journal: Journal of Fluid Mechanics / Volume 863 / 25 March 2019 Print publication: 25 March 2019 Flow-induced oscillations of a wire-mounted, freely yawing axisymmetric round bluff body and the induced loads are regulated in wind tunnel experiments (Reynolds number $60\,000<Re_{D}<200\,000$ ) by altering the reciprocal coupling between the body and its near wake. This coupling is controlled by exploiting the receptivity of the azimuthal separating shear layer at the body's aft end to controlled pulsed perturbations effected by two diametrically opposed and independently controlled aft-facing rectangular synthetic jets. The model is supported by a thin vertical wire upstream of its centre of pressure, and prescribed modification of the time-dependent flow-induced loads enables active control of its yaw attitude. The dynamics of the interactions and coupling between the actuation and the cross-flow are investigated using simultaneous, time-resolved measurements of the body's position and phase-locked particle image velocimetry measurements in the yawing plane. 
It is shown that the interactions between trains of small-scale actuation vortices and the local segment of the aft-separating azimuthal shear layer lead to partial attachment, and the ensuing asymmetric modifications of the near-wake vorticity field occur within 15 actuation cycles (approximately three convective time scales), which is in agreement with measurements of the flow loads in an earlier study. Open- and closed-loop actuation can be coupled to the natural, unstable motion of the body and thereby effect desired attitude control within 100 convective time scales, as is demonstrated by suppression or enhancement of the lateral motion. Control of stationary cross-flow modes in a Mach 6 boundary layer using patterned roughness Thomas Corke, Alexander Arndt, Eric Matlis, Michael Semper Journal: Journal of Fluid Mechanics / Volume 856 / 10 December 2018 Published online by Cambridge University Press: 11 October 2018, pp. 822-849 Print publication: 10 December 2018 Experiments were performed to investigate passive discrete roughness for transition control on a sharp right-circular cone at an angle of attack at Mach 6.0. A cone angle of attack of $6^{\circ }$ was set to produce a mean cross-flow velocity component in the boundary layer over the cone by which the cross-flow instability was the dominant mechanism of turbulent transition. The approach to transition control is based on exciting less-amplified (subcritical) stationary cross-flow modes through the addition of discrete roughness that suppresses the growth of the more-amplified (critical) cross-flow modes, and thereby delays transition. The passive roughness consisted of indentations (dimples) that were evenly spaced around the cone at an axial location that was just upstream of the first linear stability neutral growth branch for cross-flow modes. The experiments were performed in the Air Force Academy (AFA) Mach 6.0 Ludwieg Tube Facility.
The cone model was equipped with a motorized three-dimensional traversing mechanism that mounted on the support sting. The traversing mechanism held a closely spaced pair of fast-response total pressure Pitot probes. The measurements consisted of surface oil flow visualization and off-wall azimuthal profiles of mean and fluctuating total pressure at different axial locations. These documented a 25 % increase in the transition Reynolds number with the subcritical roughness. In addition, the experiments revealed evidence of a nonlinear, sum and difference interaction between stationary and travelling cross-flow modes that might indicate a mechanism of early transition in conventional (noisy) hypersonic wind tunnels. Active attenuation of a trailing vortex inspired by a parabolized stability analysis Adam M. Edstrand, Yiyang Sun, Peter J. Schmid, Kunihiko Taira, Louis N. Cattafesta Published online by Cambridge University Press: 19 September 2018, R2 Designing effective control for complex three-dimensional flow fields proves to be non-trivial. Often, intuitive control strategies lead to suboptimal control. To navigate the control space, we use a linear parabolized stability analysis to guide the design of a control scheme for a trailing vortex flow field aft of a NACA0012 half-wing at an angle of attack $\alpha=5^{\circ }$ and a chord-based Reynolds number $Re=1000$ . The stability results show that the unstable mode with the smallest growth rate (fifth wake mode) provides a pathway to excite a vortex instability, whereas the principal unstable mode does not. Inspired by this finding, we perform direct numerical simulations that excite each mode with body forces matching the shape function from the stability analysis. Relative to the uncontrolled case, the controlled flows show increased attenuation of circulation and peak streamwise vorticity, with the fifth-mode-based control set-up outperforming the principal-mode-based set-up.
From these results, we conclude that a rudimentary linear stability analysis can provide key insights into the underlying physics and help engineers design effective physics-based flow control strategies. Sensor and actuator placement trade-offs for a linear model of spatially developing flows Stephan F. Oehler, Simon J. Illingworth Published online by Cambridge University Press: 31 August 2018, pp. 34-55 We consider feedback flow control of the linearised complex Ginzburg–Landau system. The particular focus is on any trade-offs present when placing a single sensor and a single actuator. The work is presented in three parts. First, we consider the estimation problem in which a single sensor is used to estimate the entire flow field (without any control). Second, we consider the full information control problem in which the entire flow field is known, but only a single actuator is available for control. By considering the optimal sensor placement and optimal actuator placement while varying the stability of the system, a fundamental trade-off for both problems is made clear. Third, we consider the overall feedback control problem in which only a single sensor is available for measurement; and only a single actuator is available for control. By varying the stability of the system, similar fundamental trade-offs are made clear. We discuss implications for effective feedback control with a single sensor and a single actuator and compare it to previous placement methods.
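Several of the modal analyses summarized above (for instance, the Orr–Sommerfeld-type eigenvalue problem for the Bingham fluid over a porous layer) share one computational step: discretize the stability operator with Chebyshev collocation and hand the resulting matrix to an eigenvalue solver. As a minimal, self-contained sketch of that step only, and not of any of the specific stability operators in these papers (which also involve base-flow profiles, interface couplings and a generalized QZ solve), the snippet below builds the standard Chebyshev differentiation matrix and recovers the spectrum of the second-derivative operator with Dirichlet boundary conditions, whose exact eigenvalues are $-(k\pi/2)^2$.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # (standard construction, cf. Trefethen, "Spectral Methods in MATLAB").
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # negative-sum trick for the diagonal
    return D, x

N = 32
D, x = cheb(N)
L = (D @ D)[1:-1, 1:-1]  # second derivative restricted to interior nodes: u(+-1) = 0
eigs = np.sort(np.linalg.eigvals(L).real)[::-1]
# exact spectrum of u'' with u(+-1) = 0 is -(k*pi/2)**2, k = 1, 2, ...
print(eigs[:2])  # -> approximately [-2.4674, -9.8696]
```

For a real stability calculation one would assemble the full operator from these differentiation matrices and call a generalized eigenvalue routine (the QZ decomposition mentioned in the abstract), but the collocation-plus-eigensolve pattern is the same.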
Replacing Spectrum with Valuations of a Field - An Alternative to Schemes? Active 10 years, 6 months ago A scheme is defined to be a sheaf which is locally isomorphic to the spectrum of a ring. The idea behind this is that given an affine coordinate ring of a variety over an algebraically closed field, we can recover the variety, i.e. the geometric object, by looking at the maximal ideals of this affine coordinate ring. Including the prime ideals (which add in the irreducible subvarieties), we get the notion of a scheme, which is something which is gotten essentially from the spectrum of a ring. Another way to recover a variety from the algebra associated with it is to consider the valuations of its function field. Specifically, the points of a non-singular complete variety correspond to the valuations on the function field of the variety. We can actually define the variety as the set of valuations. If $K$ is the field and $v(K)$ denotes the set of valuations on $K$, then we declare $\{v \in v(K) \mid v(x) > 0\}$ for each $x \in K$ to be closed, giving a topology on the set of valuations. Finally, we can define the local ring at each point to be the valuation ring for that valuation. My question is, what if, instead of looking at spectra of rings, we defined a new object, which is locally the set of valuations of a field? For Dedekind rings, these seems to give something similar to the spectrum of the given Dedekind domain. Is this interesting in other contexts? Can one gain something by looking at it from this perspective? Edit: Although valuations do not give varieties up to isomorphism, our new object could still be something along the lines of "variety up to birational equivalence." ag.algebraic-geometry schemes valuation-theory David Corwin David CorwinDavid Corwin $\begingroup$ Is this something like what you mean? sbseminar.wordpress.com/2007/09/18/berkovich-spaces-i $\endgroup$ – Qiaochu Yuan Jul 21 '10 at 20:31 $\begingroup$ This seems similar. 
Also, is this the way in which the "spectrum" from algebraic geometry connects to the "spectrum" from functional analysis? $\endgroup$ – David Corwin Jul 21 '10 at 20:49 $\begingroup$ David, although Grothendieck hated valuation theory (and then had to concede its usefulness when Serre pointed him in the direction of valuative criteria) and thought that Tate's dream of $p$-adic analytic spaces was ridiculous (until Tate showed what he could do with it), the role of valuation-theoretic methods in algebraic geometry has undergone somewhat of a reivival in recent years. No way does it replace schemes, but these crazy spaces (and their quasi-compactness) are a powerful tool for some constructions in the spirit of Zariski. See my comment on the rabbit's answer below. $\endgroup$ – BCnrd Jul 21 '10 at 22:42 $\begingroup$ There are some inaccuracies in the question. 1. A scheme is not a sheaf. It is a topological space together with a sheaf of rings (such that...). 2. We cannot actually define a variety of dimension greater than one as the set of valuations of $K/k$: this ignores the existence of birationally equivalent, but nonisomorphic, smooth projective varieties. $\endgroup$ – Pete L. Clark Jul 22 '10 at 5:54 $\begingroup$ C'mon folks, a big plus of Grothendieck topologies and topoi is that we can "identify" geometric objects with functors they represent, and then ask if those functors are fppf sheaves. Anyone who's ever said that a functor on schemes is "represented by an algebraic space" instead of "is an algebraic space", or "identifies" a scheme with the fppf or etale sheaf functor it represents (e.g., when using finite flat commutative group schemes, doing descent theory, using constr. etale sheaves, etc.) has engaged in such abuse of terminology. This is completely standard, and convenient. No problem. 
$\endgroup$ – BCnrd Jul 22 '10 at 15:07 This is an old approach to finding models for varieties, introduced by Zariski in 1944 in his work on resolution of singularities. (See "The compactness of the Riemann manifold of an abstract field of algebraic functions", Bulletin of the American Mathematical Society 50: 683–691, doi:10.1090/S0002-9904-1944-08206-2, MR0011573) He defined a Zariski topology on a space of valuations, which seems to have inspired Grothendieck's definition of Zariski topology on a scheme. Much of Zariski's work on these spaces is rather similar to Grothedieck's work on the foundations of schemes. Zariski called the space of valuations the "Riemann manifold" of a variety, though it is now called the Zariski-Riemann space. Volume 2, chapter VI section 17 of Zariski and Samuel's book on commutative algebra gives more details. Richard BorcherdsRichard Borcherds $\begingroup$ I should add that in Elliptic Curves by Alain Robert, he says that the set of valuations "is going to play the role of 'spectrum' of the field K(V)." This book was written in 1972. $\endgroup$ – David Corwin Jul 29 '10 at 17:33 This "space of valuations" to me sounds like the Riemann-Zariski space. Generally this should be pretty nasty, for surfaces eg it is some kind of limit of the system of all possible blow-ups. It is an old idea. For a different direction towards compactifying $Spec(\mathbb{Z})$, you can read about Arakelov theory and look at the recent papers of Connes and Consani on the arXiv about geometry over the "field" with one element. Eugene EisensteinEugene Eisenstein $\begingroup$ The underlying spaces of schemes are also nasty if one stares at them too closely, but with experience they're not so bad. Ditto for Riemann-Zariski spaces, so they shouldn't be regarded as "pretty nasty". 
In fact, Berkovich spaces (and adic spaces) are like this too: also spaces of valuations on a ring (in the affinoid case), but with practice one gets accustomed to them by putting on foggy glasses. As for the recommendation in a "different direction", I recommend Borger's viewpoint. $\endgroup$ – BCnrd Jul 22 '10 at 1:59 $\begingroup$ +1 for recommending Borger's viewpoint. $\endgroup$ – Chandan Singh Dalawat Jul 22 '10 at 3:36 $\begingroup$ Thank you Brian, you are absolutely right. I should have said "the topology can be different from what we are used to on schemes" instead of "nasty." Also thank you for mentioning Borger. $\endgroup$ – Eugene Eisenstein Jul 25 '10 at 23:22 $\begingroup$ @EugeneEisenstein +1 for seconding Chandan's appreciation for the recommendation of Borger's viewpoint. $\endgroup$ – Somatic Custard Oct 12 '18 at 18:14 This perspective is given in C. Chevalley, "Introduction to the theory of algebraic functions in one variable". There, instead of considering smooth algebraic curves over the complex numbers (or Riemann surfaces), the author simply considers the function field. The author also constructs the curve with the topology from this description, as you mention in your post. In general for other commutative rings, for example for the ring $\mathbb Z$, considering the valuations instead of the ideals is the perspective of Arakelov theory. I do not know anything on that subject; but I have seen it mentioned in Neukirch, "Algebraic Number Theory". The objective seems to be treating $\mathbb Z$ as a compactified projective curve, and the archimedean valuations play the role of the points at infinity. I have a recollection that the theory of the infinite place is achieved by attaching a hermitian bundle to the geometric object we are considering. Perhaps you can read more in Serge Lang, "Introduction to Arakelov Theory", or Faltings, "Lectures on the arithmetic Riemann Roch theorem".
This theory seems to have been used in Faltings' proof of the Mordell conjecture. I am not aware of other applications. Perhaps experts can say more on this. – Anweshi $\begingroup$ You mean treating Spec Z as a compactified projective curve. $\endgroup$ – Qiaochu Yuan Jul 21 '10 at 21:26 $\begingroup$ Yes, if you want it phrased thus. $\endgroup$ – Anweshi Jul 22 '10 at 10:48 You cannot define the variety as the set of valuations. Birational varieties will have the same set of valuations. As far as I remember, logicians were looking at it for a while. If you have a spare 250 bucks, you can learn something there. – Bugs Bunny $\begingroup$ Fine, then our new object would be like a variety up to birational equivalence. $\endgroup$ – David Corwin Jul 21 '10 at 20:45 $\begingroup$ Yeah, this is probably right, Doc. $\endgroup$ – Bugs Bunny Jul 21 '10 at 21:07 $\begingroup$ Bugs, there are more clever/useful ways to define a "Riemann-Zariski space" attached to a scheme, involving structure beyond function fields. Consider Michael Temkin's very creative use of a scheme-like version of Riemann-Zariski spaces in his recent work on semistable curve fibrations and higher-dimensional analogues. So the answer is a definite "yes" to the question of whether spaces of valuations can be useful in non-birational modern algebraic geometry; moreover, it has nothing to do with logic stuff. Grothendieck would spin in his grave over this, except he's still alive (it seems). $\endgroup$ – BCnrd Jul 21 '10 at 22:38 $\begingroup$ In one of the more extreme topologies defined by Voevodsky -- the $h$-topology -- the valuation rings whose fraction fields are algebraically closed play the role of local rings. This observation was a little useful to me once, but I don't know if it has ever been useful to anybody else.
$\endgroup$ – Tom Goodwillie Jul 22 '10 at 1:05 $\begingroup$ If $R$ is a normal subring of a field $K$, it is common to study $R$ by considering the set of valuations that are nonnegative on $R$. This will distinguish different subrings of $K$, so you can get more information than just the birational class. As far as I know, there is no difficulty extending this construction to non-affine normal schemes, but I don't know a reference for that. $\endgroup$ – David E Speyer Jul 22 '10 at 17:52
Publications of the Astronomical Society of Australia (7) The Evolutionary Map of the Universe pilot survey Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. 
These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Early Science from POSSUM: Shocks, turbulence, and a massive new reservoir of ionised gas in the Fornax cluster C. S. Anderson, G. H. Heald, J. A. Eilek, E. Lenc, B. M. Gaensler, Lawrence Rudnick, C. L. Van Eck, S. P. O'Sullivan, J. M. Stil, A. Chippendale, C. J. Riseley, E. Carretti, J. West, J. Farnes, L. Harvey-Smith, N. M. McClure-Griffiths, Douglas C. J. Bock, J. D. Bunton, B. Koribalski, C. D. Tremblay, M. A. Voronkov, K. Warhurst Published online by Cambridge University Press: 23 April 2021, e020 We present the first Faraday rotation measure (RM) grid study of an individual low-mass cluster—the Fornax cluster—which is presently undergoing a series of mergers. Exploiting commissioning data for the POlarisation Sky Survey of the Universe's Magnetism (POSSUM) covering a ${\sim}34$ square degree sky area using the Australian Square Kilometre Array Pathfinder (ASKAP), we achieve an RM grid density of ${\sim}25$ RMs per square degree from a 280-MHz band centred at 887 MHz, which is similar to expectations for forthcoming GHz-frequency ${\sim}3\pi$-steradian sky surveys. These data allow us to probe the extended magnetoionic structure of the cluster and its surroundings in unprecedented detail. We find that the scatter in the Faraday RM of confirmed background sources is increased by $16.8\pm2.4$ rad m−2 within 1 $^\circ$ (360 kpc) projected distance to the cluster centre, which is 2–4 times larger than the spatial extent of the presently detectable X-ray-emitting intracluster medium (ICM). The mass of the Faraday-active plasma is larger than that of the X-ray-emitting ICM and exists in a density regime that broadly matches expectations for moderately dense components of the Warm-Hot Intergalactic Medium. 
We argue that forthcoming RM grids from both targeted and survey observations may be a singular probe of cosmic plasma in this regime. The morphology of the global Faraday depth enhancement is not uniform and isotropic but rather exhibits the classic morphology of an astrophysical bow shock on the southwest side of the main Fornax cluster, and an extended, swept-back wake on the northeastern side. Our favoured explanation for these phenomena is an ongoing merger between the main cluster and a subcluster to the southwest. The shock's Mach angle and stand-off distance lead to a self-consistent transonic merger speed with Mach 1.06. The region hosting the Faraday depth enhancement also appears to show a decrement in both total and polarised radio emission compared to the broader field. We evaluate cosmic variance and free-free absorption by a pervasive cold dense gas surrounding NGC 1399 as possible causes but find both explanations unsatisfactory, warranting further observations. Generally, our study illustrates the scientific returns that can be expected from all-sky grids of discrete sources generated by forthcoming all-sky radio surveys. The POlarised GLEAM Survey (POGS) II: Results from an all-sky rotation measure synthesis survey at long wavelengths C. J. Riseley, T. J. Galvin, C. Sobey, T. Vernstrom, S. V. White, X. Zhang, B. M. Gaensler, G. Heald, C. S. Anderson, T. M. O. Franzen, P. J. Hancock, N. Hurley-Walker, E. Lenc, C. L. Van Eck Published online by Cambridge University Press: 17 July 2020, e029 The low-frequency linearly polarised radio source population is largely unexplored. However, a renaissance in low-frequency polarimetry has been enabled by pathfinder and precursor instruments for the Square Kilometre Array. In this second paper from the POlarised GaLactic and Extragalactic All-Sky MWA Survey-the POlarised GLEAM Survey, or POGS-we present the results from our all-sky MWA Phase I Faraday Rotation Measure survey. 
Our survey covers nearly the entire Southern sky in the Declination range $-82^\circ$ to $+30^\circ$ at a resolution between around three and seven arcminutes (depending on Declination) using data in the frequency range 169−231 MHz. We have performed two targeted searches: the first covering 25 489 square degrees of sky, searching for extragalactic polarised sources; the second covering the entire sky South of Declination $+30^\circ$, searching for known pulsars. We detect a total of 517 sources with 200 MHz linearly polarised flux densities between 9.9 mJy and 1.7 Jy, of which 33 are known radio pulsars. All sources in our catalogues have Faraday rotation measures in the range $-328.07$ to $+279.62$ rad m−2. The Faraday rotation measures are broadly consistent with results from higher-frequency surveys, but with typically more than an order of magnitude improvement in the precision, highlighting the power of low-frequency polarisation surveys to accurately study Galactic and extragalactic magnetic fields. We discuss the properties of our extragalactic and known-pulsar source population, how the sky distribution relates to Galactic features, and identify a handful of new pulsar candidates among our nominally extragalactic source population. Science with the Murchison Widefield Array: Phase I results and Phase II opportunities – Corrigendum A. P. Beardsley, M. Johnston-Hollitt, C. M. Trott, J. C. Pober, J. Morgan, D. Oberoi, D. L. Kaplan, C. R. Lynch, G. E. Anderson, P. I. McCauley, S. Croft, C. W. James, O. I. Wong, C. D. Tremblay, R. P. Norris, I. H. Cairns, C. J. Lonsdale, P. J. Hancock, B. M. Gaensler, N. D. R. Bhat, W. Li, N. Hurley-Walker, J. R. Callingham, N. Seymour, S. Yoshiura, R. C. Joseph, K. Takahashi, M. Sokolowski, J. C. A. Miller-Jones, J. V. Chauhan, I. Bojičić, M. D. Filipović, D. Leahy, H. Su, W. W. Tian, S. J. McSweeney, B. W. Meyers, S. Kitaeff, T. Vernstrom, G. Gürkan, G. Heald, M. Xue, C. J. Riseley, S. W. Duchesne, J. D. Bowman, D. C. 
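The rotation measures discussed in these abstracts relate the observed polarisation angle to wavelength via $\chi = \chi_0 + \mathrm{RM}\,\lambda^2$. A toy illustration of recovering RM as the least-squares slope of angle versus $\lambda^2$ (synthetic values only, not survey data, and ignoring the $n\pi$ angle-wrapping ambiguity that real RM synthesis must handle):

```python
def fit_rm(wavelengths_m, angles_rad):
    """Least-squares slope of polarisation angle versus wavelength squared.

    The slope is the Faraday rotation measure (RM) in rad/m^2; the
    intercept would be the intrinsic polarisation angle chi_0.
    """
    x = [w * w for w in wavelengths_m]
    n = len(x)
    mx = sum(x) / n
    my = sum(angles_rad) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, angles_rad))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Synthetic example: RM = 12 rad/m^2, intrinsic angle 0.3 rad,
# wavelengths roughly in the low-frequency (MWA-like) regime.
lams = [1.3, 1.5, 1.7]
chis = [0.3 + 12.0 * lam * lam for lam in lams]
```

The order-of-magnitude precision gain at low frequencies quoted above follows from the same relation: the lever arm in $\lambda^2$ is far longer at metre wavelengths than at centimetre wavelengths.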
Jacobs, B. Crosse, D. Emrich, T. M. O. Franzen, L. Horsley, D. Kenney, M. F. Morales, D. Pallot, K. Steele, S. J. Tingay, M. Walker, R. B. Wayth, A. Williams, C. Wu Published online by Cambridge University Press: 23 March 2020, e014 Science with the Murchison Widefield Array: Phase I results and Phase II opportunities Published online by Cambridge University Press: 13 December 2019, e050 The Murchison Widefield Array (MWA) is an open access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. All together $60+$ programs recorded 20 000 h producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories. The POlarised GLEAM Survey (POGS) I: First results from a low-frequency radio linear polarisation survey of the southern sky C. J. Riseley, E. Lenc, C. L. Van Eck, G. Heald, B. M. Gaensler, C. S. Anderson, P. J. Hancock, N. Hurley-Walker, S. S. Sridhar, S. V. White The low-frequency polarisation properties of radio sources are poorly studied, particularly in statistical samples. 
However, the new generation of low-frequency telescopes, such as the Murchison Widefield Array (the precursor for the low-frequency component of the Square Kilometre Array) offers an opportunity to probe the physics of radio sources at very low radio frequencies. In this paper, we present a catalogue of linearly polarised sources detected at 216 MHz, using data from the Galactic and Extragalactic All-sky Murchison Widefield Array survey. Our catalogue covers the Declination range –17° to –37° and 24 h in Right Ascension, at a resolution of around 3 arcminutes. We detect 81 sources (including both a known pulsar and a new pulsar candidate) with linearly polarised flux densities in excess of 18 mJy across a survey area of approximately 6 400 deg2, corresponding to a surface density of 1 source per 79 deg2. The level of Faraday rotation measured for our sources is broadly consistent with those recovered at higher frequencies, with typically more than an order of magnitude improvement in the uncertainty compared to higher-frequency measurements. However, our catalogue is likely incomplete at low Faraday rotation measures, due to our practice of excluding sources in the region where instrumental leakage appears. The majority of sources exhibit significant depolarisation compared to higher frequencies; however, a small sub-sample repolarise at 216 MHz. We also discuss the polarisation properties of four nearby, large-angular-scale radio galaxies, with a particular focus on the giant radio galaxy ESO 422–G028, in order to explain the striking differences in polarised morphology between 216 MHz and 1.4 GHz. The Challenges of Low-Frequency Radio Polarimetry: Lessons from the Murchison Widefield Array Murchison Widefield Array E. Lenc, C. S. Anderson, N. Barry, J. D. Bowman, I. H. Cairns, J. S. Farnes, B. M. Gaensler, G. Heald, M. Johnston-Hollitt, D. L. Kaplan, C. R. Lynch, P. I. McCauley, D. A. Mitchell, J. Morgan, M.F. Morales, Tara Murphy, A. R. Offringa, S. M. Ord, B. 
Pindor, C. Riseley, E. M. Sadler, C. Sobey, M. Sokolowski, I. S. Sullivan, S. P. O'Sullivan, X. H. Sun, S. E. Tremblay, C. M. Trott, R. B. Wayth We present techniques developed to calibrate and correct Murchison Widefield Array low-frequency (72–300 MHz) radio observations for polarimetry. The extremely wide field-of-view, excellent instantaneous (u, v)-coverage and sensitivity to degree-scale structure that the Murchison Widefield Array provides enable instrumental calibration, removal of instrumental artefacts, and correction for ionospheric Faraday rotation through imaging techniques. With the demonstrated polarimetric capabilities of the Murchison Widefield Array, we discuss future directions for polarimetric science at low frequencies to answer outstanding questions relating to polarised source counts, source depolarisation, pulsar science, low-mass stars, exoplanets, the nature of the interstellar and intergalactic media, and the solar environment.
Procedural Quad Sphere a metal to plastic flow, and results of these tests may closely parallel each other. Here is the formula: Distance = echo signal high time * Sound speed (340 m/s) / 2. resulting quad-meshes are well suited as control meshes for high quality spline surfaces as for instance NURBS or T-splines. Ford Campus Vision and Lidar Data Set Gaurav Pandey, James R. The best place to ask and answer questions about development with Unity. So, you're probably wondering, TWG, what IS KLE? Allow me to tell you all about it. The sum of these multiples is 23. So, the new update is focusing mainly on implementing a new procedural terrain engine for the game. rapidly perform distance measurements on the sphere is the separation procedure • separation uses a k-dimensional (or k-d) tree to pixelate the sky – a k-d tree, essentially, recursively bisects a dataset perpendicular to each coordinate axis (e. Predecessors to the renowned 1176LN, these rare units are prized for their gritty tube warmth, unique attack characteristics, and outstanding build quality. My Renderman results looked like a solid glass sphere. Use math to come up with a better approach. Support vector machines, also called large-margin separators (in French, "machines à vecteurs de support"), are a family of supervised learning techniques designed to solve classification and regression problems. Where do the images come from? How are they put together? And how often are they updated?
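The ultrasonic-ranging formula quoted above (distance = echo high time × speed of sound / 2) can be written as a tiny helper; the 340 m/s figure comes from the text, while the function name is just illustrative:

```python
SPEED_OF_SOUND = 340.0  # m/s, the value quoted in the formula above

def echo_to_distance(echo_high_time_s):
    """One-way distance in metres for an ultrasonic ranger.

    The echo's high time covers the round trip to the target and back,
    hence the division by two.
    """
    return echo_high_time_s * SPEED_OF_SOUND / 2.0

# A 1 ms echo pulse corresponds to 17 cm.
print(echo_to_distance(0.001))  # prints 0.17
```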
# 2: Let water run for 15 minutes to assure that you are. Horizon was instrumental in getting my business, Goodspeed Lawn and Design, through our first year of business. The following procedure will help you create a bubble chart with similar results. You'll begin by adding textures to the utility containers in the fenced area at the rear of the compound. Common tank sizes and approximate dimensions are shown in the chart. This is Apollo Control at 145 hours. It's based on ARM cortex M3 (STM32F103VCT6), providing 72MS/s sampling rate with integrated FPGA and high speed ADC. QuadCylinder MAXScript primitive All-quad cylinder primitive with hemispherical capping option. Unfortunately it is well known that a high-quality NURBS (surface) repre-. The position's Z is always 0, and the W is 1. Right now we're using our procedural cube sphere, but it could be any mesh. Those Al'kesh peeled off and engaged the Eagle fighters. The degree of turbidity can be measured from the amount of light scattered by the materials in the sample using a UV-Visible spectrophotometer in transmission mode or using an integrating sphere. Target is Kismet Procedural Mesh Library. Con Koumis 96,095 views. In this paper, we explore the techniques required by traditional HPC programmers in porting HPC applications to FPGAs, using as an example the LFRic weather and climate model. The DSO Nano V3 is a pocket-size compatible 32bit digital storage oscilloscope. Visit ESPN to get up-to-the-minute sports news coverage, scores, highlights and commentary for NFL, MLB, NBA, College Football, NCAA Basketball and more. It has been made for Unity 5. Without any proper shading, volumetric rendering is fairly uninteresting. [fragment shader] 2. The color-cube is defined in its local space (called model space) with origin at the center of the cube with sides of 2 units. 
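A fragment above defines a colour cube in its local (model) space, centred at the origin with sides of 2 units. A sketch of those eight corners, using the common convention of deriving each corner's RGB colour from its position (the colour mapping is an assumption, not stated in the text):

```python
from itertools import product

# Eight corners of a cube centred at the origin with side length 2,
# i.e. every combination of -1 and +1 per axis.
corners = [(x, y, z) for x, y, z in product((-1.0, 1.0), repeat=3)]

# Map each coordinate from [-1, 1] into an RGB channel in [0, 1],
# so position doubles as vertex colour.
colors = [tuple((c + 1.0) / 2.0 for c in corner) for corner in corners]
```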
If a smooth shaded rendering is being used then the model with 1000 facets is probably just as good as the original with 4 times the number of facets. : an investigation of wide scope. extent in space; a tract or area. Quad warp: Adds quad controls to the layer so that you can change its shape by moving its corners. The billboard quad should have vertex positions on the range of (-1, 1) that get scaled in the vertex shader. 5CH RC Camera Spy. Chapter 07. I'm going to show how to make a triangular grid, but you can easily extrapolate this to make square grids and other shapes you need for your Unity projects. dll file containing several scripts that add various new generation options to users. I also tried to reuse a shader for skyboxes but that didn't rotate with the mesh, I think I've read all the top posts on the argument and I think they're of little use for me. Free shipping always!. Lux - an open source shader framework Unity 4. Computer Graphics Stack Exchange is a question and answer site for computer graphics researchers and programmers. The calibration procedure for the Quad DSO it is already described, e. [1] [2] A central distinction in contact mechanics is between stresses acting perpendicular to the contacting bodies' surfaces (known as the normal direction ) and frictional stresses acting tangentially between the surfaces. The left-hand scene inFigure 5 is rendered using real-time ray marching (a. The question is, how do I map that into a depth buffer in any useful way?. Our 28,901,808 listings include 6,215,559 listings of homes, apartments, and other unique places to stay, and are located in 154,023 destinations in 228 countries and territories. Bonifacic , 10. 1 phone recycling website. extent in space; a tract or area. VMware vSphere vDS, VMkernel Ports, and Jumbo Frames 21 May 2009 · Filed in Tutorial. And the inexpensive ($120. What's being done is each quad is being tessellated and then the vertex position is normalized. 
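The last fragment above describes the standard cube-to-sphere construction: tessellate each cube-face quad into a grid, then normalise every vertex position so it lands on the unit sphere. A minimal sketch for a single face (the +Z face; iterating over all six faces, triangle winding, and seam welding are left out):

```python
import math

def tessellated_face_on_sphere(n):
    """Subdivide the +Z face of the unit cube into an n-by-n grid of quads
    and project every grid vertex onto the unit sphere by normalisation."""
    verts = []
    for j in range(n + 1):
        for i in range(n + 1):
            # Point on the cube face, coordinates in [-1, 1].
            x = -1.0 + 2.0 * i / n
            y = -1.0 + 2.0 * j / n
            z = 1.0
            # Normalising pushes the point onto the unit sphere.
            length = math.sqrt(x * x + y * y + z * z)
            verts.append((x / length, y / length, z / length))
    return verts
```

Plain normalisation bunches vertices toward the face centres; fancier cube-to-sphere mappings redistribute them more evenly, which is presumably the "better approach" some of the fragments above allude to.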
ATI OpenGL Legacy Demos Overview The ATIOpenGLDemos. Because all noise is generated on the GPU, the Decade Engine running on the CPU has no knowledge of how much a vertex is displaced. This weekly series aims to keep you on top of the latest tools and techniques, and introduces fresh perspectives on traditional methods for architectural and product visualization, animation, visual effects, games and virtual worlds, and motion graphics. Sphere w/ Uniform Diffuse coefficient Radiance Map Sphere w/ Radiance Map + = Texture specifies diffuse color ( kd coefficients) for each point on surface - three coefficients, one each for R, G, and B radiance channels Sphere w/ Uniform Diffuse coefficient Reflectance (kd) Map Sphere w/ Reflectance Map. In this lecture we will extend the code to add in block type selection and pick the appropriate texture to display on the quad from a texture atlas using UV mapping. Flange MAXScript primitive Another all-quad primitive, created with TurboSmooth in mind so it has an additional Outline parameter to tighten the edge loops. A Guide to Spirit Guides & Angels. Deploying and developing royalty-free open standards for 3D graphics, Virtual and Augmented Reality, Parallel Computing, Neural Networks, and Vision Processing. Complementing this hybrid trap-trap. Create Logo programs (Turtle Graphics & more) using our modern web based logo interpreter and editor! Start running logo commands now! Free Online Turtle Graphics : logointerpreter. As soon as Lazarus was rebuilt, the errors came. The position's Z is always 0, and the W is 1. In this tutorial we'll create a sphere mesh based on a cube, then use mathematical reasoning to improve it. Memory Glass provides a unique method of memorializing your family, friends and pets by suspending cremated remains within solid glass sculptures and keepsake jewelry. In this game are presented five tales, all combined into an overarching volume, each tale reproduced with sincerity and clarity. 
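One fragment above mentions picking a block's texture for a quad from a texture atlas via UV mapping. A common approach is to compute the UV sub-rectangle for a tile index in a square atlas; the row-major, top-left tile ordering here is an assumption, not something the text specifies:

```python
def atlas_uv_rect(tile_index, tiles_per_row):
    """(u_min, v_min, u_max, v_max) of one tile in a square texture atlas.

    Tiles are counted row-major from the top-left corner, while v
    increases upward in UV space (the usual OpenGL/Unity convention).
    """
    size = 1.0 / tiles_per_row
    col = tile_index % tiles_per_row
    row = tile_index // tiles_per_row
    u_min = col * size
    v_min = 1.0 - (row + 1) * size
    return (u_min, v_min, u_min + size, v_min + size)
```

Each quad's four UVs are then the four corners of the returned rectangle, so one mesh with one material can display many block types.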
(A) Measurement for the apparent loss in weight of brass bob: (a) Experiment with tap water: Measure the weight of a solid, say a metallic ball in air by using spring balance as shown in Fig 3. WRIGHTz Abstract. Cube sphere with mesh deformer component. The PQS node contains all of the data regarding our planet's actual terrain. are you using chunked lod on top of a quad sphere, or some other lodding scheme? do you have dynamic lod reduction based on movement speed or just distance? do you get precision problems in your 3d noise at high detail levels? I will probably think of more!. This powder diffractometer can characterize crystallinity, crystal phases, and, in many cases, the identity of solid samples. Cg_Tutorial_to_Unity - Porting of Cg tutorial source code Cg_Shader_Patterns - Multiple Pixel Lights. aim or purpose. Con Koumis 96,095 views. > Quad menu > Transform quadrant > Convert to > Convert to Editable Poly; Create or select an object. Example 507 planarizes a quad mesh until it satisfies a user-given planarity threshold. • SDS Subdivide - Moves all points in the mesh, including the original points, towards the limit surface that Modo derives by using the original mesh as the control points. split hexas. Search the world's information, including webpages, images, videos and more. Contribute to aeroson/UnityProceduralPlanets development by creating an account on GitHub. I'm currently developing a Procedural Voxel Planet using the Marching Cubes algorithm. This is the basic 3D geometry we will start with. Returns: Sphere generation. Unity's default cube game object. The Fire Alarm Store - : - Our Special Offers Browse By Manufacturer Browse By System Type Clearance Fire Alarm Kits fire alarms, fire extinguishers. Enter a user name that you want to use for logging in to the Edge CLI. b) In the Mesh Faces form, select Quad from the Elements option menu under Scheme and Map from the option menu to the right of Type. 
Online Shopping for Electronics, Fashion, Appliances, Furniture, Baby Needs & Toys at Lazada. Then the origin is mapped to the south pole of the sphere and the point at infinity is mapped to the north pole. - cix/QuadSphere. All COBE map data are presented in a quadrilateralized spherical projection, an approximately equal-area projection (to within a few percent) in which the celestial sphere is projected onto an inscribed cube. , à l'âge de 85 ans. Online shopping from the earth's biggest selection of books, magazines, music, DVDs, videos, electronics, computers, software, apparel & accessories, shoes, jewelry. Cube sphere with mesh deformer component. However, that is not what I want to do, the shader is supposed to be grabbing the uv coord and using the color value pick which texture to apply. 25 width Sphere 32mm lamp finial. Even Wikipedia also, just states the theorem!!I want to know the procedure to find the radius of the Soddy Circle?? I apologize if its duplicate and to mention it is not a homework. I tried getting vertices data from sphere using blender, but I cant figure it out how. Most of my "special effects" experience is in working on a video game or two. [Plugin] Simplex Noise 1D,2D,3D,4D Fast Perlin Noise Version 12-19-2015, 09:29 PM Note that I'm using it for 3d locations on a sphere, but the same idea would. THE SPHERE ALEX TOWNSEND, HEATHER WILBERy, AND GRADY B. For HSDS and Poly surfaces, the basic interface remains the same, except that the maximum number of sides per polygon increases from 4 to over two billion. Description Draws a rectangle to the screen. Video game industry news, developer blogs, and features delivered daily. Let's have a look at a cube. Do not hot plug Grove-Ultrasonic-Ranger, otherwise it will damage the sensor. However, using linear modal analysis to calculate the basis from scratch is known to be. 
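A fragment above describes a map sending the origin of the plane to the south pole of a sphere and the point at infinity to the north pole. That is one form of inverse stereographic projection; a sketch for the unit sphere, projecting from the north pole:

```python
def plane_to_sphere(x, y):
    """Inverse stereographic projection from the plane to the unit sphere,
    projecting from the north pole (0, 0, 1).

    (0, 0) maps to the south pole, and points far from the origin
    approach the north pole, matching the description above.
    """
    d = x * x + y * y + 1.0
    return (2.0 * x / d, 2.0 * y / d, (x * x + y * y - 1.0) / d)
```

A quick check that the image really lies on the sphere: with $r^2 = x^2 + y^2$, the squared norm is $(4r^2 + (r^2 - 1)^2) / (r^2 + 1)^2 = 1$.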
Here is a coefficient of the acceleration field, ℘ represents the scalar potential of the pressure field in the center of the sphere, = − is the Lorentz factor for the velocities of the particles in the center of the sphere, and in view of the argument's smallness the sine is expanded to the second-order terms. You'll begin by adding textures to the utility containers in the fenced area at the rear of the compound. Our 28,901,808 listings include 6,215,559 listings of homes, apartments, and other unique places to stay, and are located in 154,023 destinations in 228 countries and territories. Returns: Sphere generation. The triangle arrays are simply indices into the vertex arrays; three indices for each triangle. The entry point of those script must be a reference to a meshfilter's mesh. There are over 30 tools that range from UVing to generating Motion Vectors from simulations. proceduralLandscape. pbproj Project Builder project includes a set of ATI SDK demos ported from the Rage 128, Radeon, and Radeon 8500 PC OpenGL demo suites. Sphere Volume This shader uses the worldPos built-in unity variable to calculate the world space of each texel and render the color of it appropriately. Merseyside glass manufacturer. Wind Rose and Polar Bar Charts. MassageTools offers electric massagers from top massage therapy tools and equipment manufacturers such as Thumper Massagers, AcuVibe, ePulse, Human Touch, Massagenius and more. Williamson, Sr. Create two and three dimensional shapes. The calibration procedure for the Quad DSO it is already described, e. random points on the unit sphere in $\mathbb R^d$, the probability that the origin is contained in the. com - Surf your logo code!. Not that it matters, but…. fitobject = fit(x,y,fitType,Name,Value) creates a fit to the data using the library model fitType with additional options specified by one or more Name,Value pair arguments. 
Over the first 50 frames, the sphere moves between the first two states, and over the second half of the animation, the sphere moves between the second and third states. The original technique was pioneered by Edwin Catmull in 1974. To overcome this, we discuss an extension of Smith theory in the volume setting that includes NDFs on the entire sphere in order to produce a single unified reflectance model capable of describing everything from a smooth flat mirror all the way to a semi-infinite isotropically scattering medium with both low and high roughness regimes in between. Usage-Place the Math3d. Collision spaces: Quad tree, hash space, and simple. The Pure Reference Extreme (PRE) is the finest speaker I have ever heard overall. The spin-coefficient formalism (SC formalism) (also known in the literature as Newman-Penrose formalism (NP formalism)) is a commonly used technique based on the use of null tetrads, with ideas taken from 2-component spinors, for the detailed treatment of 4-dimensional space-times satisfying the equations of Einstein's theory of general relativity. JmonkeyStore is OPEN After years of requests and attempts, a software store (a. 98 55 57 56 55. And this is the effect when using quad calculated uvs, which sounds like its more a cylindrical projection problem common to wrapping texture round a sphere. The following documents are available for facilities to download, as part of the accreditation process. Quad warp: Adds quad controls to the layer so that you can change its shape by moving its corners. I'm used to all perspectives being in sync from other tools. Thoughts and ideas from Kinex Medical Company. Most of my "special effects" experience is in working on a video game or two. flavors: stable, equivariant, rational, p-adic, proper, geometric, cohesive. Color in a procedural sky depends on a ray direction and we will first convert this space from a polar directions of "azimuth and elevation" to a 2D space of uv. 
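The first fragment above describes an animation in which a sphere blends between a first and second state over the first 50 frames and between a second and third state over the second half. A piecewise-linear interpolation sketch (the 100-frame total follows from the description; the scalar "state" stands in for any animatable property):

```python
def sphere_state(frame, s0, s1, s2, total_frames=100):
    """Piecewise-linear blend: s0 -> s1 over the first half of the
    animation, then s1 -> s2 over the second half."""
    half = total_frames / 2.0
    if frame <= half:
        t = frame / half
        return s0 + (s1 - s0) * t
    t = (frame - half) / half
    return s1 + (s2 - s1) * t
```

Production animation systems would typically ease in and out rather than interpolate linearly, but the keyframe bookkeeping is the same.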
The whole project, start-to-finish, will be erected and removed in a 24-hour period. /*++ BUILD Version: 0004 // Increment this if a change has global effects Copyright (c) 1985-96, Microsoft Corporation Module Name: gl. An open central sphere allows visitors to circulate through so that they may encounter a microcosm of hanging gardens. Unwrap UVW supports polygons and Bezier quad and tri patch faces in addition to triangles and quads. Ford Campus Vision and Lidar Data Set, Gaurav Pandey, James R. Surface (polygonal) Simplification. Announcing Unreal Engine 4 support: we got Voxel Farm running in UE4. In this case the procedural solid is simply a sphere defined by the absolute distance from a central point. Visualize the mapping in Unity. Based on an ARM Cortex-M3, it's equipped with a 320*240 color display, SD card, USB port, and recharging function. A 3D map, on the other hand, is generated by 3ds Max. Upon subdivision, the two lengths of the $i$-th generation tiling, $L_i$ and $S_i$, transform as. While both the texture coordinates and the positions are correct, the final normals are wrong. Translucent Shader - a shader for different translucent surfaces (for example: skin, paraffin, plastic, etc.). Right now we're using our procedural cube sphere, but it could be any mesh. I'm at a loss for a better method by which to subdivide and index the surface. A positive value means p5 is inside of the sphere. For every vertex there can be a normal, two texture coordinates, a color, and a tangent.
A quad mesh can be transformed into a planar quad mesh with Shape-Up [23], a local/global approach that uses the global step to enforce surface continuity and the local step to enforce planarity. MAKO is a robotic-arm-assisted partial knee resurfacing procedure designed to relieve the pain caused by joint degeneration due to osteoarthritis (OA). Understand how to use Perlin and fractal noise to generate 2D terrain and 3D landscapes. These tools are not tied to the regular Houdini development cycle and become available. Create or select an object. Each of the six cube faces is a quad tree, which is used for subdividing the terrain as you move closer to the planet. Git as a version control and backup system. The positions are already in clip-space. QUaD Data Available through LAMBDA: the QUaD team has released their sky maps and band powers through LAMBDA. Are you using chunked LOD on top of a quad sphere, or some other LODding scheme? Do you have dynamic LOD reduction based on movement speed or just distance? Do you get precision problems in your 3D noise at high detail levels? I will probably think of more! 3ds Max is a powerful, deep, and multifaceted program, so there's always more to learn. Procedural Planets Source (self. Anthony Hospital - Centura Health notified the Department [Colorado Department of Public Health and Environment] of a Y-90 SIR-Sphere misadministration. This tutorial follows Procedural Grid. A fully functional vacuum forming system can actually be constructed with a shop vacuum and an electric grill.
The Grand Army of the Republic (GAR), also known as the Grand Army and the Clone Army, was a major branch of the Galactic Republic Military composed entirely of clone troopers, an army of elite soldiers created from the genetically altered template of the Mandalorian bounty hunter Jango Fett. The first order in the third sphere is the thrones. Welcome to the Kopernicus Procedural Quad Sphere Library Expansion, or KLE for short. Beyond the thrones are the cherubim. The semester is over at last, and my grades should be in by Monday. See the Procedural example project for examples of using the mesh interface. How can I sync the zoom level in all 3 perspectives in a quad view? Currently, when I zoom in one of them (top ortho, for instance), the other two are not zooming with it, resulting in an inconsistent setup. The blend amount should use the color blend value. Sphere SHAPE - 1.0 at z = radius, with t increasing linearly along longitudinal lines. This doesn't happen with concentric spheres as we let the outer radius go to infinity, hence a single sphere has a nonzero capacitance. If you want to deform the sphere, it is disadvantageous that the density of vertices is greater around the poles. Useful for post-processing effects. find_text now uses an improved parallelization on the internal data level, which results in a speedup of up to 50% on a quad-core machine with hyperthreading.
A square pyramid is a three-dimensional solid characterized by a square base and sloping triangular sides that meet at a single point above the base. html - (SOON) An example of how to depth-map a quad tree sphere. The timing of card play is huge, offering precise timing and ridiculous efficiency when played correctly. Meshing solutions from ANSYS for fluid models provide unstructured tri- and quad-surface meshing driven by curvature, proximity, smoothness and quality, in combination with a pinch capability that automatically removes insignificant features. In this paper, we explore the techniques required by traditional HPC programmers in porting HPC applications to FPGAs, using as an example the LFRic weather and climate model. The motion_box is the bounding box on shutter close time for motion blur. Where the lower hemisphere intersects the horizontal plane is the outward trace of the stereonet plot. The system uses a geodesic meshing for the base sphere to guarantee each surface patch has the same area. For each eye, composite the stereo eye image on top of the mono image using a Unity image effect. # As there currently (2013-04-12) is no function read_colors in color_list. Learn libGDX inside out on the Wiki, study the Javadocs, or read a third-party tutorial. Little Little G (Age 16 to 18, Challenge Level): this problem involves four different parts which you can either discuss, just think about, or analyse with various levels of detail. Context: Homotopy theory. For many purposes it is perfectly fine, but for some use cases, e.
WebLogic Express incorporates the presentation and database access services from WebLogic Server, enabling developers to create interactive and transactional e-business applications quickly and to provide presentation services for existing applications. Welcome to the Extension for Autodesk® 3ds Max® 2013. Gold colostrum is the first milk immediately after giving birth, and transition milk is the milk that follows over the next four days. 3 Functions of Multiple Variables. So we only need to be concerned with terms like $5x^2z^2$. [Plugin] Simplex Noise 1D, 2D, 3D, 4D Fast Perlin Noise: note that I'm using it for 3D locations on a sphere, but the same idea would. sphere tracing), and the right-hand scene is rendered using real-time ray tracing. Introduction. Each quad is made up of 4 vertices, defined in counter-clockwise (CCW) order, such that the normal vector points out, indicating the front face. Hello Students! First of all, I want to thank all of you for enrolling in this course - it's so great to see this kind of involvement.
• SDS Subdivide - Moves all points in the mesh, including the original points, towards the limit surface that Modo derives by using the original mesh as the control points. Both scenes are rendered at interactive rates by pure fragment shaders drawing a single full-screen quad. The signal from one satellite allows you to determine that you are on a sphere at a given radius from the satellite. For each named color, draw a sphere in the position that corresponds to its RGB values. Lab 2: Map Projections and Coordinate Systems in ArcGIS v. You see, KSP creates planets using something called 'Procedural Quad Sphere Mods'. These provide artifact-free textures on all sides. The equatorial radius (semi-major axis) of the ellipsoid. Textures are bound to texture units using the glBindTexture function you've used before. CodeChef was created as a platform to help programmers make it big in the world of algorithms, computer programming and programming contests. - this is a single quad; I made it possible to build 8x8 quads, 256x256 faces. It generally grows up to 2 m (7 ft) tall, with serrated leaves and red inflorescences. 1 and the Android extension pack to procedurally generate complex geometry in real-time with geometry shaders. The goal of the VTK examples is to illustrate specific VTK concepts in a consistent and simple format. Procedural Mesh Generation If true, build box using simplex elements (i.
Because applying tangent-space normal maps to a sphere is an absolute nightmare, I take the unusual approach of generating object-space normal maps. The planet consists of a cube, with the vertices mapped to a sphere. There's been quite a lot of interest lately in the realtime graphics world to do with Sparse Voxel Octrees (SVOs), so I thought it was about time I had a look at them. , best-fit sphere). How to use the Unreal Engine 4 Editor. Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method, by Gilou Agbaglah, Sébastien Delaux, Daniel Fuster, Jérôme Hoepffner, Christophe Josserand, Stéphane Popinet, Pascal Ray, Ruben Scardovelli and Stéphane Zaleski. The Element has the option of a desolvator (called the Aridus). I'm still exploring the solution here, so there will be more when I'm sure what works, but I'm pleased with what I have so far. An internal 2MB USB disk can be used to store waveform captures and user applications, and to upgrade firmware. Convert Quad to Triangles. The function identify3d allows you to label points interactively with the mouse: press the right mouse button (on a two-button mouse) or the centre button (on a three-button mouse), then drag a rectangle around the points to be. Not only due to its price of approximately 200 euros, but also because of its low power consumption, it is a great candidate for a virtualization home lab. cube approach in procedural texturing and modeling. Google Earth is the most photorealistic, digital version of our planet. The short answer is that you do not.
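The "cube with vertices mapped to a sphere" step above can be sketched in a few lines (a minimal Python illustration of the simplest normalization mapping; fancier mappings redistribute vertices to reduce crowding near cube corners):

```python
import math

def cube_point_to_sphere(x, y, z):
    """Project a point on the surface of a cube centered at the origin onto
    the unit sphere by normalizing its position vector. This is the basic
    quad-sphere construction: generate a grid on each cube face, then push
    every vertex out (or in) to radius 1."""
    length = math.sqrt(x * x + y * y + z * z)
    return x / length, y / length, z / length
```

For a planet, each resulting direction is then scaled by the terrain radius plus a noise-based height offset.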
One more case study in Navy neophilia. Target is Kismet Procedural Mesh Library. How HamSphere Works. Derive the Gauss quadrature method for integration and be able to use it to solve. Being able to change the texture that's on a quad really makes the environment start to look like a Minecraft world, even if it is only one block. 6 Conclusion. Since its origination, research in the field of IGA has been accelerating and has been applied to various problems. COMPUTING WITH FUNCTIONS IN SPHERICAL AND POLAR GEOMETRIES I. To conclude the procedure, you'll demonstrate that the sphere is responding only to the box's X position, regardless of animation. Now play around with some measurements until you have another dot that is exactly the same distance from the focus and the straight line. So for stitching, you need extra border meshes for each level, side, and corner, which gives you an additional 8-fold mesh count ⇒ ~4590 meshes in total. Texture units. Go to frame 0. , but by applying an n-point Gauss-Legendre quadrature rule, as described here, for example. This weekly series aims to keep you on top of the latest tools and techniques, and introduces fresh perspectives on traditional methods for architectural and product visualization, animation, visual effects, games and virtual worlds, and motion graphics.
This code was used in my procedural terrain/flight demo. First of all, create a sphere and give it 200 segments, so that later, when we displace the geometry, it'll have enough segments to form the contours of the rock surface we're going to create. Just create a new empty project and name it how you want. The stereonet forms the surface of this lower hemisphere. The hardness test is preferred because it is simple, easy, and relatively. Contribute to aeroson/UnityProceduralPlanets development by creating an account on GitHub. This is a quad that covers the screen. The various documents/manuals are provided as general guides for operation and to aid in the servicing and repair of your HiFi equipment. We distinguish great circles, which are sections of a sphere whose diameter is equal to the diameter of the sphere, from small circles, which are any other section. Target is Procedural Mesh Component. Arm is the industry's leading supplier of microprocessor technology, offering the widest range of microprocessor cores to address the performance, power and cost requirements for almost all application markets. My spherical terrain setup consists of a quadrilateralized spherical cube - a cube morphed into a sphere (see first image below) that is subdivided into smaller patches by a quad-tree subdivision algorithm (see second image below). Objects that can be created with planes include floors, tabletops, or mirrors. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. The DSO Quad – Aluminum Alloy is now available at Seeed Studio. Dithering Shader - a color replacement and dithering shader.
It is designed to render very large landscapes in real time, up to whole planets. Adjust displacement distance accordingly.
CommonCrawl
This calculator helps you compute the characteristic impedance of an embedded microstrip - a flat conductor suspended over a ground plane with a dielectric between it and another dielectric material above the conductor (see diagram below). Embedded microstrips are commonly crafted using printed circuit boards, although other materials can be used. An embedded microstrip can be constructed using a microstrip with a solder mask. Just enter the given values for trace thickness, substrate heights, trace width and substrate dielectric in the calculator above and press the "calculate" button. The default units for all given values, except the substrate dielectric, are millimetres. It is possible to select other units. Note: height H1 cannot be greater in value than height H2; the calculator will give zero if this is the case. $$Z_{0_{embed}}=Z_{0}\left \{ \frac{1}{\sqrt{e^{\frac{-2b}{H_{1}}}}+\frac{er}{Z_{0surf}e_{reff}}(1-e^{\frac{-2b}{H_{1}}})} \right \}$$ $$er_{eff}=\frac{er+1}{2}+\frac{er-1}{2}\left \{ \sqrt{\frac{W}{W+12H_{1}}}+0.04(1-\frac{W}{H_{1}})^2 \right \}$$ when $$\frac{W}{H_{1}} < 1$$ $$er_{eff}=\frac{er+1}{2}+\frac{er-1}{2}\left \{ \sqrt{\frac{W}{W+12H_{1}}}\right \}$$ when $$\frac{W}{H_{1}} ≥ 1$$ $$W_{eff}=W+\left ( \frac{T}{\pi } \right )ln\left \{ \frac{4e}{\sqrt{(\frac{T}{H_{1}})^2+(\frac{T}{W\pi+1.1T\pi})^2}} \right \}\frac{E_{r}+1}{2E_{r}}$$ $$X_{1}=4(\frac{14E_{r}+8}{11E_{r}})(\frac{H_{1}}{W_{eff}})$$ $$X_{2}=\sqrt{16(\frac{H_{1}}{W_{eff}})^2(\frac{14E_{r}+8}{11E_{r}})^2+(\frac{E_{r}+1}{2E_{r}})\pi^2}$$ $$b = H_{1} - H_{2}$$ $$Z_{0_{embed}}$$ = characteristic impedance of the embedded microstrip in ohms (Ω).
$$H_{1}$$ = substrate height 1, $$H_{2}$$ = substrate height 2, $$W$$ = trace width, $$T$$ = trace thickness, $$\epsilon_{r}$$ = substrate dielectric, $$er_{eff}$$ = effective substrate dielectric. Source: IPC-2141A (2004), "Design Guide for High-Speed Controlled Impedance Circuit Boards". The embedded microstrip has a similar construction to the microstrip, except for the additional dielectric substrate on top of the conductor. Microwave antennas and couplers, as well as some filters, can be created using the embedded microstrip. These transmission lines are not as easy to manufacture as microstrips, but they are still far cheaper than the traditional waveguide, as well as being more compact and lighter. However, microstrips cannot handle power levels as high as waveguides can. Microstrips also have issues with power loss, cross-talk, and unintentional radiation because they are not enclosed like the waveguide. Embedded microstrips also find themselves in high-speed digital PCB designs, where signals must travel with minimal distortion and no cross-talk and/or radiation. ATGM (April 08, 2016): Hello. The equations listed on this page look promising, but any detailed look brings up questions: is "H" H1 or H2? It may be presumed that h1 and H1 line up, but that is an assumption, not a certainty. Is "Er" the same as the dielectric constant, or the effective dielectric constant? The calculator brings up answers, but there is no error checking, or at least not enough: H1 can be greater than H2 and still yield an answer. Can anyone list the original source of these equations or provide a consistent set of them? bobspam (February 22, 2019): Can you update this calculator using the IPC-2141A (2004) Errata?
http://www.ipc.org/4.0_Knowledge/4.1_Standards/2141A_Errata.pdf
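As a worked example of the two-branch $er_{eff}$ formula above, here is a small Python sketch. It implements only the page's effective-dielectric equations, not the full $Z_{0_{embed}}$ calculation, and should be treated as illustrative rather than as a validated IPC-2141A implementation:

```python
import math

def effective_dielectric(er, w, h1):
    """Effective dielectric constant er_eff per the page's two-branch formula
    (an IPC-2141-style approximation): er = substrate dielectric, w = trace
    width, h1 = substrate height below the trace (same length units)."""
    term = math.sqrt(w / (w + 12.0 * h1))
    if w / h1 < 1.0:
        return (er + 1) / 2.0 + (er - 1) / 2.0 * (term + 0.04 * (1 - w / h1) ** 2)
    return (er + 1) / 2.0 + (er - 1) / 2.0 * term

# FR-4-like example (er = 4.5), 0.2 mm trace over 0.15 mm of dielectric:
print(round(effective_dielectric(4.5, 0.2, 0.15), 3))
```

As expected, the result always lies between (er + 1)/2 (field half in air) and er (field fully in the dielectric).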
CommonCrawl
Circular convolution calculator. You should be familiar with Discrete-Time Convolution (Section 4. in the unit circle of the complex plane, with one vertex at. It has applications in many engineering areas such as discrete signal processing. Jan 12, 2015 · Digital Signal Processing calculations like DFT, IDFT, linear convolution, circular convolution, DCT and IDCT at your fingertips! Usage instructions: 1. The input texture image in these examples is white noise. Definition: the convolution of piecewise continuous functions $f, g : \mathbb{R} \to \mathbb{R}$ is the function $f * g : \mathbb{R} \to \mathbb{R}$ given by $(f * g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau$. Each signal is modelled by a register of N discrete values (samples), and the discrete Fourier transform (DFT) is computed by the fast Fourier transform (FFT). Moreover, because they are simple, Introduction. The convolution is determined directly from sums, using the definition of convolution. A spatial separable convolution simply divides a kernel into two smaller kernels. Kernels are 1D or 2D grids of numbers that indicate the influence of a pixel's neighbors on its final value. Select the DSP operation from the drop-down menu 2. But in practice we can comfortably use the fast convolution without worrying about these small differences in values. The Fourier transform of a product is the convolution of the Fourier transforms. Fast convolution algorithms: in many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. Also notice that both $f$ and $h$ are even functions, so their convolution will also be an even function. In order to keep the convolution result the same size as the input, and to avoid an effect called circular convolution, we pad the signal with zeros.
ccn2 = cconv(x1,x2,2) returns ccn2 = [-1 1]. This describes a simple method I found to do circular convolution, which I think is simpler than the method I saw in Digital Signal Processing by Proakis and Manolakis. SHAHWAN KHOURY, Faculty of Engineering, Notre Dame University, Jounieh, Lebanon: formulation of a discrete-time convolution of a discrete-time input with a discrete-time filter. Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. 'Samples Calculator' is used to generate samples for a sinusoidal signal. ∴ x1(n) = {1, 2, 3, 4, 0, 0, Let y = h ⊛ x be the four-point circular convolution of the two sequences. The sequence y(n) is equal to the convolution of sequences x(n) and h(n). Convolution has numerous applications including probability and statistics, computer vision, natural language processing, image and signal processing, engineering, and differential equations. Since circular convolution does not have the growth property, it can be used recursively in connectionist systems with fixed-width vectors. Linearity. Matrix Method to Calculate Circular Convolution. Circular convolution is used to construct associations using vectors with 1024 elements. Sep 15, 2020 · (Note: values might be slightly different from the naive method in some cases, as the fastConvolve function calculates a circular convolution.) Finally, once all of the convolutions and poolings are complete, the final part of the network is a fully connected layer. This graphically translates to linear shifting. The discrete convolution is very similar to the continuous case; it is even much simpler!
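As a concrete cross-check for examples like the cconv call above, the N-point circular convolution can be computed directly from its definition (a plain-Python sketch, no toolbox assumed):

```python
def circular_convolve(x, h):
    """N-point circular convolution from the definition:
    y[n] = sum_k x[k] * h[(n - k) mod N], both sequences of length N."""
    n_len = len(x)
    assert len(h) == n_len, "both sequences must have the same length N"
    return [sum(x[k] * h[(n - k) % n_len] for k in range(n_len))
            for n in range(n_len)]

# Classic textbook example:
print(circular_convolve([1, 2, 3, 4], [1, 1, 0, 0]))  # [5, 3, 5, 7]
```

The `% n_len` index is what makes the shift circular: samples that would fall off one end wrap around to the other.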
You only have to do multiplication sums; in a moment we will see it, but first let's see the formula to calculate the convolution in the discrete or analogous case. Circular convolution is another way of finding the convolution sum of two input signals. This discrete convolution is parallel to that used for continuous functions considered in [9]. What You Will Learn. It can be shown that a convolution in time/space is equivalent to a multiplication in the Fourier domain, after appropriate padding (padding is necessary to prevent circular convolution). Fourier transform both signals; perform term-by-term multiplication of the transformed signals; inverse transform the result to get back to the time domain. Jul 20, 2014 · FLOWCHART: START; enter the input sequence x[n] and the system response h[n]; perform linear and circular convolution in the time domain; plot the waveforms and the error; STOP. Convolution calculation: y(n) = x(n) * h(n). By the end of Ch. Nov 04, 2020 · A string indicating which method to use to calculate the convolution. Verify the circular convolution property of the DFT in Matlab. Denote the output as x[n] and plot x[n] for each case of L separately. The demo displays the spectra of any two waveforms chosen by the user, computes their linear convolution, then compares their circular convolution according to the convolution theorem. Remarks: f ∗ g is also called the generalized product of f and g. Convolution calculation.
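The three steps above (transform, multiply term by term, inverse transform) can be sketched with a naive DFT in plain Python. This is illustrative only — the O(N^2) transform below stands in for the FFT a real implementation would use:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform, forward or inverse."""
    n_len = len(x)
    sign = 1.0 if inverse else -1.0
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * n * k / n_len)
               for k in range(n_len)) for n in range(n_len)]
    return [v / n_len for v in out] if inverse else out

def circular_convolve_dft(x, h):
    """Circular convolution via the convolution theorem:
    DFT -> term-by-term multiplication -> inverse DFT."""
    X, H = dft(x), dft(h)
    y = dft([a * b for a, b in zip(X, H)], inverse=True)
    return [round(v.real, 10) for v in y]  # imaginary parts are rounding noise

print(circular_convolve_dft([1, 2, 3, 4], [1, 1, 0, 0]))  # [5.0, 3.0, 5.0, 7.0]
```

Multiplying the two spectra element-wise corresponds exactly to circular (not linear) convolution in the time domain, which is why zero padding is needed when the linear result is wanted.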
30 Dec 2019 · For a discrete sequence x(n) we can calculate its discrete Fourier transform; we can also say that the twiddle factor has periodicity (a cyclic property). In addition, the convolution continuity property may be used to check the obtained convolution result, which requires that at the boundaries of adjacent intervals the convolution remains a continuous function of the parameter. 1 (Circular Convolution). Thus it is important for students to understand the use, along with the theory, of convolution, so they can better evaluate the results they get from convolution. The convolution of f(t) and g(t) is equal to the integral of f(τ) times g(t − τ): Discrete convolution. This module relates circular convolution of periodic signals in one domain to multiplication in the other domain. Calculate angles or sides of triangles with the Law of Sines. Circular buffering isn't needed for a convolution calculation, because every sample can be immediately accessed. Consider a program where both the input and the output signals are completely contained in memory. Otherwise, if the convolution is performed between two signals spanning along two mutually perpendicular dimensions (i. The convolution calculator has two active controls that perform different functions. The zero-padded points are removed after acausal convolution, and retained after linear convolution. However, in overlap-save this is not the case, and circular convolution must be used to calculate the convolution between the signal and the filter. To calculate the n-th sample of a circular convolution, shift the reverted circle by n samples to the right. 18(e), which can be formed by summing (b), (c), and (d) in the interval 0 ≤ n ≤ L − 1. It resembles the linear convolution, except that the sample values of one of the input signals are folded and right-shifted before the convolution sum is found. If two sequences of lengths m and n respectively are convolved using circular convolution, then the resulting sequence has max[m, n] samples.
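The "shift the reverted circle by n samples" view is exactly multiplication by a circulant matrix; a small sketch (plain Python, names are mine):

```python
def circulant_matrix(h):
    """N x N circulant matrix of h: entry [row][col] = h[(row - col) mod N],
    i.e. each row is h reversed and rotated one step further to the right."""
    n_len = len(h)
    return [[h[(row - col) % n_len] for col in range(n_len)]
            for row in range(n_len)]

def circular_convolve_matrix(x, h):
    """Circular convolution as a matrix-vector product with the circulant
    matrix of h (the 'matrix method')."""
    m = circulant_matrix(h)
    return [sum(m[row][col] * x[col] for col in range(len(x)))
            for row in range(len(x))]

print(circular_convolve_matrix([1, 2, 3, 4], [1, 1, 0, 0]))  # [5, 3, 5, 7]
```

Row n of the matrix holds the reversed, n-sample-rotated copy of h, so the dot product with x yields the n-th output sample directly.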
The easiest way (imho) is to first calculate the linear convolution and then wrap that result around to achieve the circular one. Periodic convolution is valid for the discrete Fourier transform. Summary. (The other dimension, the "depth" dimension, is the number of channels of each image.) The first type of convolution is: Definition 1. They are: direct convolution for feature data, or DC mode. In convolution, before elements of two vectors are multiplied, one is flipped and then shifted. In this context the process is referred to more generally as "convolution" (see: convolutional neural networks). L=4, M=4. Dec 04, 2019 · Circular convolution is just like linear convolution, albeit with a few minute differences. The result of linear convolution with five synapses, or a circular convolution with many synapses). Remember to account for T in the convolution. Thus x[-1] is the same as x[N-1]. The proposed method outperforms standard circular convolution-based ones. 8 Oct 2018 · Zero padding helps to avoid circular convolution, but increases calculation time. In addition, scaled diffractions [3–8] can calculate light propagation. Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. The central peak is twice the height of its neighbors. The matrix on the left contains numbers, between 0 and 255, which each correspond to the brightness of one pixel in a picture of a face. u(t) = 1 for t ≥ 0 and 0 for t < 0. Convolution Calculator. DTFT, DFT: compute the circular convolution sum, either by direct calculation of the summation or by a frequency-domain approach. Then their convolution $s_1 * s_2$ is non-zero between $(t_1 + t_3)$ and $(t_2 + t_4)$.
When algorithm is Frequency Domain, this VI computes the convolution using an FFT-based technique. function [Windowed_Spec] = Wind_Flattop(Spec): given an input spectral sequence 'Spec' that is the FFT of some time sequence 'x', Wind_Flattop(Spec) returns a spectral sequence that is equivalent to the FFT of a flat-top windowed version of time sequence 'x'. Multiply everything item by item and add the products up. Calculate the four-point DFT of the aperiodic sequence x[k] of length N = 4; the sequences must have the same length in order to compute the circular convolution. Home / ADSP / MATLAB PROGRAMS / MATLAB Videos / Circular Convolution using MATLAB. [Figure: the Nth roots of unity in the complex plane for N = 2, 4, and 8.] Powers of roots of unity are periodic with period N, since the Nth roots of unity are. Convolution operations are built on kernels. Discrete Fourier Transform (DFT): recall the DTFT, $X(\omega) = \sum_{n=-\infty}^{\infty} x(n) e^{-j\omega n}$. Using the DFT via the FFT lets us do a FT (of a finite-length signal) to examine signal frequency content. 16 Jun 2018 · In DSP, to solve a convolution of a long-duration sequence there are two methods; OVERLAP SAVE METHOD, STEP 4: perform circular convolution. DFT: Properties. To me, circular convolution is an operation on any sequences. If you evaluate the linear convolution of 2 discrete-time sequences (don't confuse it with circular convolution, which arises in the DFT computing process), you'll find out that the final result is the same as if you had multiplied 2 polynomials with coefficients equal to the sequences' samples.
Trying to use MATLAB to calculate the circular correlation between x = [2 3] and y = [4 1 -8]? There is no built-in function such as CXCORR or CIRCORR, but linear correlation, linear convolution, and circular convolution can be computed as follows. (A Python GUI application, written in Python 3.8 and Tkinter, can likewise perform the fast Fourier transform on a given signal sequence.) The output value k is then stored in the output array at the same (x, y)-coordinates relative to the input image. When we use the DFT to compute the response of an LTI system, the length of the circular convolution must be chosen appropriately. In particular, the DTFT of the product of two discrete sequences is the periodic convolution of their individual DTFTs. The circular convolution of the zero-padded vectors, xpad and ypad, is equivalent to the linear convolution of x and y. The definition of 2D convolution follows the same pattern. Circular convolution is mathematically defined with mod-N indexing; for example, let us take x(n) = [1 3 -2 1] and y(n) = [1 1 0 0] and find their circular convolution. How can we calculate the circular convolution using only the linear convolution (and possibly padding with zeros)?
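The zero-padding equivalence can be checked directly. A minimal sketch in plain Python (the helper name `circ_conv` is illustrative):

```python
def circ_conv(x, h):
    """N-point circular convolution with mod-N indexing (equal-length inputs)."""
    N = len(x)
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

x, y = [2, 1, 2, 1], [1, 2, 3]
N = len(x) + len(y) - 1          # 6: long enough to avoid wrap-around aliasing
xpad = x + [0] * (N - len(x))
ypad = y + [0] * (N - len(y))
print(circ_conv(xpad, ypad))     # [2, 5, 10, 8, 8, 3], same as conv(x, y)
```

Padding both inputs to length len(x) + len(y) − 1 leaves no tail to wrap, so the circular result coincides with the linear one.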
The result of the circular convolution of two vectors of n elements has just n elements. Origin uses the convolution theorem, which involves the Fourier transform, to calculate the convolution; scipy.signal.fftconvolve likewise exploits the FFT. (d) For these signals, N is large enough that the circular convolution of x[n] and h[n] equals their linear convolution. Since the third argument of cconv allows it to perform either circular or linear convolution, there are scenarios in which it is more efficient to use cconv to compute a linear convolution than conv. In signal-processing theory, multiplication of two sequences in the frequency domain corresponds to circular convolution in time. Use linear convolution when the source wave contains an impulse response (or filter coefficients) and the first point of srcWave corresponds to no delay (t = 0). There are three types of padding, described below. Winograd convolution, or Winograd mode, is another scheme. The discrete Fourier transform calculator can accept up to 10 numbers as an input series; using it naively causes inefficiency compared to circular convolution. Linear convolution takes two functions of an independent variable, which I will call time, and convolves them using the convolution-sum formula found in any linear-systems or digital signal processing book. As another example, suppose that {Xn} is a discrete-time random process with mean function mk = E(Xk) and covariance function KX(k, j) = E[(Xk − mk)(Xj − mj)]. The sequence y(n) is equal to the convolution of the sequences x(n) and h(n). There are two common ways to calculate the convolution of two signals; the difference between linear and circular convolution is discussed below.
If x(t) is the input, y(t) is the output, and h(t) is the unit impulse response of the system, then continuous-time convolution relates the three; an example input is f(x) = 7e^{−2x} u(x). For discrete sequences, circular convolution is defined as y(n) = h(n) ⊛ u(n) = Σ_{i=0}^{N−1} h(i) (u(n − i))_N, and the DFTs satisfy Y_k = H_k U_k. Out-of-range indices can be reflected back into the image: if x < 0 then x = −x − 1, else if x ≥ image_width then x = 2·image_width − x − 1. The Convolution Matrix filter uses a first matrix, which is the image to be treated; the kernel used depends on the effect you want. Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period. Since fast algorithms are available for the computation of discrete sine and cosine transforms, the proposed method is an alternative to FFT-based convolution. Periodic or circular convolution is also called fast convolution: the direct calculation of the convolution can be difficult, so Fourier transforms and pointwise multiplication are used to calculate it easily. (Flowchart: enter the input sequence x[n] and the system response h[n]; perform linear and circular convolution in the time domain; plot the waveforms and the error.) (Figure 2: (a) basic convolution unit; (b) active convolution unit.) The matrix method of calculating circular convolution is covered below; an adaptive FFT calculator can be built on MATLAB's FFT function, based on FFTW.
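The frequency-domain route (multiply the DFTs, then invert) can be exercised with a hand-rolled DFT. A sketch in plain Python with illustrative names; real code would use an FFT instead of these O(N²) loops:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circ_conv_dft(x, h):
    """Circular convolution via element-wise multiplication of DFTs."""
    y = idft([a * b for a, b in zip(dft(x), dft(h))])
    return [round(v.real) for v in y]   # inputs here are integers

print(circ_conv_dft([1, 3, -2, 1], [1, 1, 0, 0]))  # [2, 4, 1, -1]
```

The rounding only tidies floating-point noise; the product-of-DFTs route and the direct mod-N sum agree exactly.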
In either case, the second step calculates the cyclic convolution of each section; a fast number-theoretic transform (FNTT) can be applied to calculate the 2D circular convolution, and for a positive integer N we calculate the DFT of each section accordingly. (Figure: the sequences f_a[n] and f_b[(n − shift) mod N] for shifts of 0, 1, and 8, illustrating (f_a ⊛ f_b)[n].) Convolution is a particular type of operation that involves folding, shifting, multiplying and adding. This technique will be fully explained in a 2D lesson very soon. Two other methods of circular convolution: using the predefined fft and ifft functions, and using a for loop. Exercise: compute by hand the circular convolution of the following two 4-point sequences. I searched for better circular kernels by global optimization with an equiripple cost function, with the number of component kernels and the transition bandwidth as the parameters. Using the time-domain formula, the modulo-2 circular convolution is equivalent to splitting the linear convolution into two-element arrays and summing the arrays, as the MATLAB documentation notes. The notation (f ∗_N g) for cyclic convolution denotes convolution over the cyclic group of integers modulo N. Exercise: compute the DFT of a 2-point signal by hand (without a calculator or computer). At each position, multiply each pair of aligned values together and add up those products; the resulting sequence of dot products is the convolution of the kernel with the signal. The formal definition of convolution extends from minus infinity to plus infinity. Circular convolution is again well demonstrated on paper stripes. Convolution: a mathematical operation performed on two functions to produce a third function.
The 'Calculate' button is used to execute the calculations, while the 'Reset' button clears the inputs. For a discrete sequence x(n) we can calculate its discrete linear or circular convolution; periodic convolution is valid for the discrete Fourier transform. Q2: Write a program to find the circular convolution of x(n) and h(n), where x(n) is the last four digits of your registration number and h(n) is the first four digits; also plot the signals x, h and the convolution output. Long unit-sample responses h[·] can mean that the convolution itself takes a long time. Fast convolution methods use circular convolution (i.e. the DFT) to perform fast linear convolution: overlap-add and overlap-save, since circular convolution is linear convolution with aliasing. The Fast Fourier Transform enables computation of an N-point DFT (or its inverse) with on the order of just N·log2 N complex multiplications. The relationship between the DTFT of a periodic signal and the DTFS of the periodic signal composed from it leads us to the idea of a Discrete Fourier Transform (not to be confused with the Discrete-Time Fourier Transform) and to the use of circular convolution, an operation well known in signal processing. One can also calculate zeros and poles from a given transfer function. The zero-padding serves to simulate acyclic convolution using circular convolution; circular convolution is calculated as y(m) = Σ_{k=0}^{N−1} x(k) h((m − k) mod N). Circular convolution is again well demonstrated on paper stripes, but this time we need a stapler or glue: plot the two sequences on paper stripes and mark the zero sample, make circles out of the stripes, and reverse one of the circles.
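Overlap-add can be sketched without an FFT by convolving blocks and summing the overlapping tails; in practice each block convolution would itself be done with a zero-padded FFT, and the names here are illustrative:

```python
def linear_conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += x[n - k] * h[k]
    return y

def overlap_add(x, h, block=4):
    """Convolve a long x with a short h, block by block."""
    y = [0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = linear_conv(x[start:start + block], h)
        for i, v in enumerate(seg):
            y[start + i] += v        # tails of adjacent blocks overlap and add
    return y

x, h = [6, 9, -4, 1, 2, 3, 1, -2], [1, -2, 3]
assert overlap_add(x, h) == linear_conv(x, h)   # block method matches direct
```

Each block of x produces a segment of length block + len(h) − 1; the last len(h) − 1 samples spill into the next block's region, which is exactly where they are added.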
Let's discuss padding and its types in convolution layers; this section's MATLAB source code covers convolution. One application is implementing diffusion of a circle through convolution with the 2D Gaussian kernel. Spreadsheets can be used to perform "shift-and-multiply" convolution for small data sets. (This is how digital spectrum analyzers work.) In some instances, a circular convolution is actually desirable. If we are working in the s-domain and end up with two functions multiplied together, we can use the convolution integral to convert back to the t-domain. In one demonstration of circular convolution, one sequence is distributed clockwise and the other counter-clockwise. The linear convolution of an N-point sequence with itself has maximum length 2N − 1, and consequently the (2N − 1)-point circular convolution of an N-point sequence with itself is identical to its linear convolution. Here we are attempting to compute linear convolution using circular convolution (or the FFT) by zero-padding either of the input sequences. An ultrasonic imaging system offers a practical example: its color-flow processor includes an adaptive wall filter in the form of a circular convolution filter, which enables a narrow band of wall signals to be removed without loss of data samples. You retain all the elements of ccirc because the output has length 4 + 3 − 1. Let us take two finite-duration sequences x1(n) and x2(n), each of integer length N.
3), which tells us that given two discrete-time signals x[n], the system's input, and h[n], the system's response, we define the output of the system as their convolution. The L-point circular convolution of x1[n] and x2[n] is shown in OSB Figure 8. Enter the data sequences into their appropriate positions and click Calculate to get the resulting single data sequence. Since multiplication is more efficient (faster) than convolution, the function scipy.signal.fftconvolve exploits the FFT to calculate the convolution. We give an example of calculating a linear convolution through a cyclic one for a 4-sample sequence and a 3-sample sequence (the example considered above). (Figure: (a) a conventional convolution unit with four input neurons and two output neurons; (b) the active convolution unit, ACU.) Finite impulse response (FIR) digital filters and convolution are defined by y(n) = Σ_{k=0}^{L−1} h(k) x(n − k), where, for an FIR filter, x(n) is a length-N sequence of numbers. (b) Using MATLAB and L = 8, 16, 32-point DFTs, calculate the circular convolution between x1[n] and x2[n], and compare your results to part (a) of this problem. In short, a convolution is a form of image multiplication, but rather than multiplying two images pixel to pixel, a convolution multiplies each pixel of the first image with all the pixels of the second image. For instance, an interesting effect is achieved by taking the circular convolution of a long segment of white noise with some other (shorter) sound. Let's convolve x1(n) = (1, 2, 3) and x2(n) = (4, 5, 6).
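For x1 = (1, 2, 3) and x2 = (4, 5, 6), the linear and 3-point circular results differ because the circular one wraps the tail. A plain-Python sketch (illustrative helper names):

```python
def linear_conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += x[n - k] * h[k]
    return y

def circ_conv(x, h):
    N = len(x)
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

x1, x2 = [1, 2, 3], [4, 5, 6]
print(linear_conv(x1, x2))  # [4, 13, 28, 27, 18]  (length 2N - 1 = 5)
print(circ_conv(x1, x2))    # [31, 31, 28]         (length N = 3, aliased)
```

Note that the circular result is the linear one folded: [4 + 27, 13 + 18, 28].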
In today's post, I design a convolution calculator in MATLAB. Discrete circular convolution: knowing that the properties of the Fourier transform also hold for a sampled signal, we present the definition of the 2D discrete Fourier transform (2D DFT), where M and N represent the number of samples in each dimension, x and y are the discrete spatial variables, and u and v are the transform (frequency) variables. A lookahead: the Discrete Fourier Transform. Convolution is the most important technique in digital signal processing. The convolution-multiplication property is used to calculate the linear convolution more efficiently, by calculating the circular convolution, which in turn can be computed very efficiently in the frequency domain. To calculate a periodic convolution, the sequences need to be periodic; in other words, circular convolution of two finite sequences corresponds to linear convolution of the infinitely periodic extensions of the two sequences. Circular shift of a sequence: if X(k) = DFT{x(n)}, then a circular shift in time multiplies the DFT by a phase factor; here ⊛ stands for circular convolution. Padding "full": think of the kernel as a sliding window. The image is a bi-dimensional collection of pixels in rectangular coordinates. Linear convolution is the basic operation to calculate the output of any linear time-invariant system given its input and its impulse response; fast convolution methods use circular convolution instead, with reflected indexing handling out-of-range samples. Another way to compute circular convolution is to use the convolution-multiplication property directly. The circular convolution function cconv and the linear convolution function conv use different algorithms to perform their calculations. A cross-section of the autocorrelation function is shown below.
Convolution operates on two signals (in 1D) or two images (in 2D): you can think of one as the "input" signal (or image), and the other (called the kernel) as a "filter" on the input image, producing an output. This demonstration studies the equivalence of linear and circular convolutions. When one signal is the time-reversed, conjugated version of the other, we say that it is a matched filter for it. By using the FFT, this approach to convolution is generally much faster than direct convolution, such as MATLAB's conv command. Exercise: consider the following two time-domain signals g and h, each consisting of four samples. More exotic convolution types, like circular convolution, can be added. Numerically, I have sampled these functions at negative and positive equidistantly spaced values ω_n (from −Ω to Ω in N points) with associated values f_n and g_n, and I want the convolution evaluated at ω_n as well. Circular convolution example: x1(n) = {1, 2, 3, 4}. Solving convolution problems, part I: using the convolution integral, the best mathematical representation of the physical process that occurs when an input acts on a linear system to produce an output. To calculate the value of each transformed pixel, add the products of each surrounding pixel value with the corresponding kernel value. Convolution basics, including the MATLAB function, are covered.
Circular convolution arises most often in the context of fast convolution with the FFT; DSP calculations such as DFT, IDFT, linear convolution, circular convolution, DCT and IDCT can all be done this way. This describes a simple method I found to do circular convolution, which I think is simpler than the method in Digital Signal Processing by Proakis. Linear and circular convolution in MATLAB: x = [2 1 2 1]; y = [1 2 3]; clin = conv(x,y); xpad = [x zeros(1,6-length(x))]; ypad = [y zeros(1,6-length(y))]; a pictorial comparison of circular and linear convolution illustrates the convolution theorem, and furthermore we have applied the convolution theorem to calculate the circular convolution. Exercise: with P = 6 and L = 9, calculate what is required below. The calculator has two text fields where you enter the first data sequence and the second data sequence. In the autocorrelation there are three peaks, and the central peak is twice the height of its neighbors. In this paper, a novel and simple method is given to prove the FFT-based fast method of linear convolution by exploiting the structure of the circulant matrix. After the convolution, the output data has 1 sample and 16 channels, with height 62 (= 64 − 3 + 1) and width 124 (= 128 − 5 + 1). The fast Fourier transform, fft, is used for efficiency. 2012 was the first year that neural nets grew to prominence, as Alex Krizhevsky used them to win that year's ImageNet competition (basically, the annual Olympics of computer vision). Duality: circular convolution in one domain is element-wise multiplication in the other. Our convolution calculator combines two data sequences into a single data sequence.
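The output-shape rule (height − kh + 1, width − kw + 1) can be checked with a tiny "valid" 2D convolution; `conv2d_valid` is an illustrative name, not a library function:

```python
def conv2d_valid(img, ker):
    """2-D 'valid' convolution: output is (H - kh + 1) x (W - kw + 1)."""
    kh, kw = len(ker), len(ker[0])
    fk = [row[::-1] for row in ker[::-1]]          # flip kernel in both axes
    H, W = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * fk[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(W)] for i in range(H)]

img = [[r * 5 + c for c in range(5)] for r in range(4)]   # 4x5 test image
ker = [[1, 0, -1]] * 3                                     # 3x3, three equal rows
out = conv2d_valid(img, ker)
print(len(out), len(out[0]))   # 2 3, i.e. (4-3+1) x (5-3+1)
```

On this ramp image the horizontal-difference kernel yields a constant output, a quick sanity check that the flip and the index arithmetic are right.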
In a convolution layer we have kernels, and to make the final filter more informative we use padding on the image matrix or input array. In linear convolution, one sequence is linearly shifted with respect to the other in order to calculate an output value, whereas in circular convolution the sequence is circularly shifted. For example, circ_conv(x,h) = [2+4, 5+4, 8, 8, 5] = [6, 9, 8, 8, 5] is a circular convolution obtained by wrapping a linear result. Take the inverse discrete Fourier transform of the product of the two DFTs, and the result is the circular ("cyclic") convolution of the two vectors. Convolution is often performed numerically, and students have a tendency to blindly accept the results their calculator or computer provides. Exercise: for an N = (L + 5)-point circular convolution of x[n] and h[n], calculate y[n]. Circular convolution can be viewed as linear convolution with aliasing. The result is the value of the output pixel, obtained by transforming the convolution into a discrete Fourier transform. A circular convolution architecture of this kind is reported here for the first time. Convolution is used in the mathematics of many fields, such as probability and statistics. Kernels are in some sense the simplest operations that we can perform on an image, but they are extremely useful. In the case of a cross-correlation, neither sequence is flipped. Exercise: calculate the circular convolution of xn = {6, 9, -4, 1, 2, 3} and hn = {1, -2, 3, -4, 0, 1} using the DFT method. In practice we can comfortably use fast convolution without worrying about which algorithm is applied underneath.
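The aliasing view, that the N-point circular result is the linear result folded modulo N, can be checked numerically with the 6-point sequences given above (a sketch; helper names are illustrative):

```python
def linear_conv(x, h):
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += x[n - k] * h[k]
    return y

def circ_conv(x, h):
    N = len(x)
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

x = [6, 9, -4, 1, 2, 3]
h = [1, -2, 3, -4, 0, 1]
lin = linear_conv(x, h)            # length 2N - 1 = 11
folded = [0] * len(x)
for n, v in enumerate(lin):
    folded[n % len(x)] += v        # time-domain aliasing: fold mod N
assert folded == circ_conv(x, h)   # two independent routes, same sequence
```

The two routes agree sample for sample, which is exactly the "linear convolution with aliasing" statement.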
Convolutional neural networks sound like a weird combination of biology and math with a little CS sprinkled in, but these networks have been some of the most influential innovations in the field of computer vision. The only difference between convolution and correlation is that in a convolution one of the functions is flipped before being multiplied with the second. There is a close relationship between the two. In signal processing, linear convolution (or simply convolution) refers to the convolution between infinitely supported sequences and filters, while circular convolution refers to the convolution between finitely supported and circularly extended sequences and filters (circular extension makes such sequences periodic). Compare your results to part (a) of this problem. For finite sequences x(n) with M values, the matrix method calculates circular convolution via matrix multiplication, using the circshift command in MATLAB. A "true" convolution mode, where the weights are flipped before multiplication, can be added to a correlation routine. Convolving two signals is equivalent to multiplying the frequency spectra of the two signals. To calculate a linear convolution of two N-point sequences, one proposed algorithm uses a circular convolution in N points instead of 2N. To relate the linear and circular convolutions, consider a length-4 sequence x[n] and a length-3 sequence h[n]: perform the circular convolution, and finally calculate the inverse Fourier transform. We may even be able to evaluate the integral to determine our answer. Here X1[n] is the filter input and X2[n] is the impulse response of the filter.
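The matrix method builds a circulant matrix whose columns are circular shifts of h (MATLAB's circshift does the shifting there); a plain-Python equivalent with illustrative names:

```python
def circulant(h):
    """Matrix whose entry (r, c) is h[(r - c) mod N]: columns are shifts of h."""
    N = len(h)
    return [[h[(r - c) % N] for c in range(N)] for r in range(N)]

def circ_conv_matrix(x, h):
    """Circular convolution as a matrix-vector product with the circulant of h."""
    C = circulant(h)
    return [sum(C[r][c] * x[c] for c in range(len(x))) for r in range(len(x))]

print(circ_conv_matrix([1, 3, -2, 1], [1, 1, 0, 0]))  # [2, 4, 1, -1]
```

Row r of the product computes Σ_c h[(r − c) mod N] x[c], which is precisely the mod-N convolution sum.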
One function should use the DFT (fft in MATLAB); the other function should compute the circular convolution directly, not using the DFT. A convolution calculator combines two individual data sequences into a single data sequence with the standard convolution operation. With this tutorial, you will learn how to perform convolution in Origin. Still, one coincidence exists. Circuits for the linear convolution: because the linear convolution of two signals can be reduced to a circular convolution after zero-padding the signals, circuits for circular convolution can also be used to calculate the linear convolution; the zero-padding may enlarge the qubit representation of the signal, but only by one qubit. The DFT of a length-N vector can be written with the DFT matrix, where ' denotes Hermitian transposition (transposition and complex conjugation), and the corresponding inverse DFT follows. Circular convolution of two signals is equal to conventional convolution of one signal with a periodically extended version of the other. By using the FFT algorithm to calculate the DFT, convolution via the frequency domain can be faster than directly convolving the time-domain signals. We cannot use a digital computer to calculate a continuum of functional values, hence periodic (discrete) convolution. An improvement in the speed of floating-point multiplication and addition was achieved by a canonical-signed-digit implementation methodology. Two-dimensional discrete convolution is usually used for images. Another application is the interpolation of DFT spectra instead of zero-padding in the time domain. Convolution is a formal mathematical operation, just as multiplication, addition, and integration are.
A wavelet toolbox provides related routines: advance - circular time-advance (left-shift) of a vector; casc - cascade algorithm for the phi and psi wavelet functions; circonv - circular convolution; cmf - conjugate mirror of a filter; convat - convolution a trous; convmat - sparse convolution matrix; convmat2 - sparse convolution matrix (simplified version). Also included is a fast circular convolution function based on the FFT. One may also want to calculate the continuous convolution of functions f(ω′) and g(ω′) at ω. Suppose that the only computational devices available are multipliers, adders, and processors. It can be shown that a convolution in time or space is equivalent to multiplication in the Fourier domain, after appropriate padding (padding is necessary to prevent circular convolution). A circular convolution is also required to filter signals which are periodic by nature, for instance microphone signals captured from a circular or spherical microphone array. If you ignore this, you will find that the calculated answer differs from the code's answer. Now we do the same thing (line up, multiply and add, then shift), but with concentric circles. Fast convolution algorithms: overlap-add and overlap-save. One of the first applications of the FFT was to implement convolution faster than the usual direct method. Periodic convolution arises, for example, in the context of the discrete-time Fourier transform (DTFT). In introductory digital signal processing courses, the convolution is a rather important concept and is an operation involving two functions.
Let's understand the convolution operation using two arrays, a and b, of one dimension: a = [5,3,7,5,9,7] and b = [1,2,3]. In the convolution operation, the aligned elements are multiplied element-wise, and the products are summed to create a new array representing a∗b. In the end it comes full circle, since the backpropagation for the convolutional layer is also a convolution, but with spatially flipped kernels. Check the third step in the derivation of the equation. The convolution pipeline supports three types of operations. Zeros need to be padded to the end of the shorter input sequence (the signal or the response) to ensure that the lengths of both input sequences are equal. The final result is the same; only the number of calculations has been changed by a more efficient algorithm. Image filtering and convolution are one and the same thing. This method has four steps, explained below in the easiest way possible. Linear convolution is a mathematical operation done to calculate the output of any linear time-invariant (LTI) system given its input and impulse response; circular convolution differs as described earlier. One of the most important applications of the discrete Fourier transform (DFT) is calculating the time-domain convolution of signals. scipy provides a convolve function, but it is not identical to the circular version, and a fastConvolve-style routine computes a circular convolution, so sharp changes in x must be handled. In LaTeX, \ast can denote the convolution, as in T ∗ hh.
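The element-wise multiply-and-sum over a and b looks like this in Python; the flip makes it a true convolution rather than a correlation (`kernel_conv` is an illustrative name):

```python
def kernel_conv(signal, kernel):
    """'Valid' 1-D convolution: slide the flipped kernel across the signal."""
    k = kernel[::-1]                      # flip: convolution, not correlation
    n_out = len(signal) - len(k) + 1
    return [sum(signal[i + j] * k[j] for j in range(len(k)))
            for i in range(n_out)]

a, b = [5, 3, 7, 5, 9, 7], [1, 2, 3]
print(kernel_conv(a, b))  # [28, 28, 40, 40]
```

At each position the aligned values are multiplied pairwise and summed, then the window slides one sample to the right.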
The transform coefficients are either symmetric or asymmetric, and hence we need to calculate only half of the total coefficients. Exercise: given g = [1, 2, 0, 1]^T and h = [2, 2, 1, 1]^T, calculate the circular convolution y := g ⊛ h of these signals in two ways. Index values outside the range 0 to N−1 are interpreted "circularly", that is, as referring to a periodically repeated version of x or y; thus x[−1] is the same as x[N−1]. The N-point circular convolution can be written w_m = Σ_{k=0}^{N−1} x_k y_{(m−k) mod N} for 0 ≤ m ≤ N−1, where mod takes the remainder of m − k divided by N. (In LaTeX there is also the \star command for this operator.) Steps for cyclic convolution are the same as for the usual convolution, except that all index calculations are done "mod N", i.e. on the wheel. Step 1: plot f[m] and h[−m]. When you distinguish between the different relevant cases for t (i.e. t ≤ −1.5, −1.5 ≤ t ≤ 0, and so on), the results you obtain for each of the segments must be equal on their mutual endpoints. (Figure: comparison of a conventional convolution unit with the ACU.) The input sequences x and y must have the same length if circular is true. Also included is a fast circular convolution function based on the FFT; FFT convolution is for this reason also called high-speed convolution. Here ⊛ denotes the circular convolution operator.
When two sequences of lengths m and n are convolved circularly, the result has max[m, n] samples, the shorter sequence being zero-padded. Nevertheless, the best way to calculate a cross-correlation map is again by taking advantage of the convolution theorem. Integration with a continuous frequency Ω hinders computation. One commenter finds the circular convolution of x1 = [1 2 3 4] and x2 = [1 1 4] incorrect when calculated by hand; with unequal lengths, the shorter sequence must be zero-padded before the mod-N sum. The time-domain interpolation using a zero-extended DFT, described in the previous section, performs a time-domain circular convolution of the zero-packed data sequence (by a factor of M) with the band-limited, periodic sin(πn)/sin(πn/N) interpolation function described in Section VII. Note that the usual definition of convolution of two sequences x and y is given in R by convolve(x, rev(y), type = "o"). Digital signal processing calculations such as DFT, IDFT, linear convolution, circular convolution, DCT and IDCT can all be performed this way. The Fourier transform is used to perform the convolution by calling fftconvolve. What is the difference between linear convolution and circular convolution? Example: compute the circular convolution of the sequences beginning x1[n] = [-1 2 -3 2]; note that one needs to wait for the entire data to arrive to calculate X[k]. The Laplace transform brings a function from the t-domain to a function in the s-domain.
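Padding the shorter sequence and then applying the mod-N sum resolves the x1 = [1 2 3 4], x2 = [1 1 4] case mentioned above. A sketch in plain Python; the padded length max(m, n) matches the max[m, n] statement at the start of this passage:

```python
def circ_conv_pad(x, h):
    """Circular convolution of unequal-length sequences: pad to N = max(m, n)."""
    N = max(len(x), len(h))
    x = x + [0] * (N - len(x))
    h = h + [0] * (N - len(h))
    return [sum(x[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

print(circ_conv_pad([1, 2, 3, 4], [1, 1, 4]))  # [17, 19, 9, 15]
```

The same answer falls out of folding the 6-sample linear convolution [1, 3, 9, 15, 16, 16] modulo 4, which is a useful hand check.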
Press the "COMPUTE" button to run the calculation. The concept of convolution is central to Fourier theory and the analysis of linear systems. Does it matter which symbol I use to represent convolution? I also want a Fourier-transform symbol (the line with a filled circle and an empty circle on either side) to connect x(t) and X(f), h(t) and H(f), y(t) and Y(f) respectively. Linear time-invariant systems (LTI systems) are a class of systems used in signals and systems that are both linear and time-invariant. Here (x(n))_N denotes the N-point periodic extension of x(n). Recall the definition of convolution (using a non-causal kernel, to be precise); related topics are 2D discrete convolution, filter implementation with convolution, the convolution theorem, and continuous convolution. A (2N − 1)-point convolution avoids wrap-around. The spatial separable convolution is so named because it deals primarily with the spatial dimensions of an image and kernel: the width and the height. Keys to numerical convolution: convert to discrete time; the smaller the sampling period T, the more exact the solution; trade off computation time against exactness; and apply appropriate padding (padding is necessary to prevent circular convolution). The black line shows the results of convolution. A frequency response is required for which h[·] is reasonably straightforward to calculate. For each case, compute the product of the mutually overlapping pixels and calculate their sum. In fact, users often say convolution when what they really mean is a correlation. Q) Consider the two sequences x1(n) = {2,1,2,1} and x2(n) = {1,2,3,4}.
Their DFTs are X1(K) and X2(K) respectively, as shown below. Circular Convolution MATLAB Code: here is a detailed MATLAB code for circular convolution using the inbuilt function as well as without using it. The circular convolution of the zero-padded vectors, xpad and ypad, is equivalent to the linear convolution of x and y. The computation of linear convolution can be realized by using the circular convolution, while the circular convolution can be computed by using the FFT-based fast method [19]. Step 2: "spin" h[-m] n times anti-clockwise (counter-clockwise) to get h[n-m]; at each position, calculate the dot product of the two. Calculate the convolution of the signals. I'm trying to calculate the convolution of two probability density functions (PDFs) defined on the real axis: a normal distribution. This is a method to compute the circular convolution for \(N\) points between two sequences, where \(N\) is the length of the longer of the two sequences. Anyway, coming back to our Convolution Calculator, let's start its design: Convolution Calculator in MATLAB. The pixels (x − j, y − k) are reflected back into the image by using the following algorithm. The circular convolution vector w = (w_1, w_2, …, w_N) ∈ C^N is: w_m = Σ_{k=0}^{N−1} x_k y_{(m−k) mod N}. Feb 01, 2013: for each row, for each column, perform a single-pixel convolution. In this paper, we derive a relation for the circular convolution operation in the discrete sine and cosine transform domains. Convolution is commonly used in signal processing. Circular convolution is essentially the same process as linear convolution.
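The stated equivalence (the circular convolution of the zero-padded vectors equals the linear convolution) is easy to verify numerically. A minimal pure-Python sketch using the direct O(N²) sum:

```python
def cconv(x, y, N):
    # N-point circular convolution of x and y, each zero-padded to length N.
    xp = list(x) + [0] * (N - len(x))
    yp = list(y) + [0] * (N - len(y))
    return [sum(xp[k] * yp[(n - k) % N] for k in range(N)) for n in range(N)]

x, y = [1, 2, 3, 4], [1, 2, 1, 2]
L = len(x) + len(y) - 1               # 7: length of the linear convolution
print(cconv(x, y, L))                 # -> [1, 4, 8, 14, 15, 10, 8]
```

With N = 4 instead of 7 the same routine returns the wrapped (circular) result, [16, 14, 16, 14].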
.ods for Calc), but for larger data sets the performance is much slower than Fourier convolution (which is much easier done in MATLAB or Octave than in spreadsheets). The final result is the same; only a smaller number of calculations is needed. Digital signal processing is applied linear algebra? This is easy to grasp for color matching, where we have fixed dimensions of 1 (number of test lights), 3 (number of primary lights, number of photopigments), and 31 (number of sample points in a spectral power distribution for a light, or in the spectral absorption for a pigment). DFT and circular convolution. Convolution of image input, or image input mode. Write two MATLAB functions to compute the circular convolution of two sequences of equal length. Oct 14, 2020: The convolution operation forms the basis of any convolutional neural network. AIM: Write a MATLAB script to perform discrete convolution (linear and circular) for the given two sequences, and also prove it by manual calculation. 'Linear Convolution/Circular Convolution Calculator' calculates the linear/circular convolution for given inputs. Also, correlation is actually the simpler method to understand. 28 Apr 2020, Convolution Calculator: the correlation function of f(T) is computed with fast Fourier transform (FFT) algorithms via the circular convolution theorem. This requires padding each of the signals with zeros to the required length. Circular convolution returns the same number of elements as the two input signals. An online convolution calculator along with formulas and definitions.
The output from linear convolution is N+M−1 in length, so the N-point circular convolution will corrupt the first M−1 samples, leaving the last N−M+1 samples of the circular convolution result pristine (we do not worry about a "tail", since for circular convolution the window only takes N points, effectively discarding the last samples). In this lecture we will see circular convolution using the DFT and IDFT method. The 2D convolution has $20$ channels from input and $16$ kernels with size of $3 \times 5$. Note that the FFT, with a bit of pre- and postprocessing, can quickly calculate the discrete cosine transform (DCT), which is used in many multimedia compression algorithms. Notation: w = x ⊛ y. In one dimension, the convolution between two functions f(x) and h(x) is defined as: g(x) = f(x) ∗ h(x) = ∫_{−∞}^{+∞} f(s) h(x − s) ds (1). This is done with a 5x5 image convolution kernel. However, convolution also affects the duration of the time vector over which the signals are defined! This prevents the "wrap-around" effect that occurs in circular convolution. Convolution involving one-dimensional signals is referred to as 1D convolution or just convolution. The circular variance is V = 1 − R̄₁. The circular standard deviation, v, is defined as v = √(−2 ln R̄₁). The circular dispersion, used in the calculation of confidence intervals, is defined as δ = (1 − T̂₂)/(2 R̄₁²).
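The circular-statistics quantities quoted above (circular variance V = 1 − R̄ and circular standard deviation v = √(−2 ln R̄)) can be computed directly from the mean resultant length of a sample of angles. A small pure-Python sketch using Fisher's definitions; clipping R̄ at 1 is my own guard against floating-point overshoot:

```python
import math

def circular_stats(angles):
    """Circular variance V = 1 - R and circular standard deviation
    v = sqrt(-2 ln R), where R is the mean resultant length."""
    n = len(angles)
    C = sum(math.cos(a) for a in angles) / n
    S = sum(math.sin(a) for a in angles) / n
    R = min(math.hypot(C, S), 1.0)   # clip: R can exceed 1 by rounding error
    return 1.0 - R, math.sqrt(-2.0 * math.log(R))

V, v = circular_stats([0.1, 0.2, 0.3])   # tightly clustered angles: small V, v
```

Identical angles give R̄ = 1 and hence V = v = 0, while widely scattered angles push V toward 1.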
In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions (from Wikipedia). To calculate periodic convolution, all the samples must be real. Complex Numbers, Convolution, Fourier Transform: for students of HI 6001-125 "Computational Structural Biology", Willy Wriggers, Ph.D. Doing a normal shift on xp(n) is equivalent to doing a circular shift on x(n). Slide 4, Digital Signal Processing, circular shift: x((n − k))_N denotes x(n) circularly shifted by k modulo N; for example, x((0))_4 = x(0). A simple method to do circular convolution, Nasser M. Convolution vs Correlation (asymmetrical kernel effects): as I mentioned above, the two operators 'Convolve' and 'Correlate' are essentially the same. Impulse Response and Convolution. A useful way to view filtering is by convolution. The DTFT is not suitable for DSP applications because, in DSP, we are able to compute the spectrum only at specific frequencies. Can you please tell me how I can have t95? This is most easily done by again considering circular convolution as "linear convolution plus aliasing." I've tried not to use fftshift but to do the shift by hand. Feel free to use our online Discrete Fourier Transform (DFT) calculator to compute the transform for a set of values. As an aside, circular buffering is also useful in off-line processing. The convolution of s(n) with y(t) is the inverse FT of the (point-wise) product of S(f) with Y(f), with appropriate zero-padding (otherwise, this describes circular convolution).
Enter the space-separated sequence (Example: 1 0.5). Correlation and Convolution, Class Notes for CMSC 426, Fall 2005, David Jacobs. Introduction: correlation and convolution are basic operations that we will perform to extract information from images. Automatically chooses the direct or Fourier method based on an estimate of which is faster (default). For a circular convolution, data points outside the input range are considered to repeat periodically, thereby satisfying the first requirement. Convolution is a mathematical operation which applies to two values, say X and H, and gives a third value as an output, say Y. Also, circular convolution is defined for two sequences of equal length, and the output would be of the same length. It would be a great help if you can give me an idea of t95. 'DFT/IDFT Calculator' is used to calculate 2/4/8-point DFT/IDFT for sequences. If signals are two-dimensional in nature, then it will be referred to as 2D convolution. The convolution theorem shows us that there are two ways to perform circular convolution. Key Concept: convolution determines the output of a system for any input. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal. Circular Convolution: circular convolution refers to the convolution between finitely supported and circularly extended sequences and filters.
However, there are a few things he glossed over a bit that would clear things up; first of all, convolution is in fact… 'Linear Convolution/Circular Convolution Calculator' calculates the linear/circular convolution for given inputs. Is there a way of doing this? Jul 27, 2014: "Circular convolution is used to convolve two discrete Fourier transform (DFT) sequences." The sequence of data entered in the text fields can be separated using… Find the circular convolution, and the linear convolution using circular convolution, for the following sequences: x1(n) = {1, 2, 3, 4} and x2(n) = {1, 2, 1, 2}. Circular convolution is the same thing, but considering that the support of the signal is periodic (as in a circle, hence the name). Using cyclic convolution, one can calculate the linear convolution of two signals. The convolution of two discrete functions is defined as: 2D discrete convolution. Convolutions can be very difficult to calculate directly, but are often much easier to calculate using Fourier transforms and multiplication. Length of y(n) = L+M−1 = 4+4−1 = 7. Plot the response for a high-pass filter. In circular or periodic convolution we can look at the N-point sequences as being distributed on a circle due to the periodicity. We can create white noise using… The N-point circular convolution is the sum of linear convolutions shifted in time by N: x_{3p}[n] = x_1[n] ⊛_N x_2[n]. Example 1: the N = L = 6-point circular convolution results in… (Penn ESE 531, Spring 2019, Khanna.)
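For the sequences x1(n) = {1, 2, 3, 4} and x2(n) = {1, 2, 1, 2} asked for above, the circular convolution can be tabulated by "spinning" x2. A small pure-Python sketch of the table method:

```python
x1, x2 = [1, 2, 3, 4], [1, 2, 1, 2]
N = len(x1)

# Row m of the table holds x2 circularly reversed and shifted by m,
# i.e. x2[(m - k) mod N] for k = 0..N-1 ("spinning" x2 anti-clockwise).
table = [[x2[(m - k) % N] for k in range(N)] for m in range(N)]

# Each output sample is the dot product of x1 with the corresponding row.
x3 = [sum(x1[k] * row[k] for k in range(N)) for row in table]
print(x3)   # -> [16, 14, 16, 14]
```

The same answer falls out of the DFT route, since multiplying the 4-point DFTs of x1 and x2 and inverting gives the 4-point circular convolution.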
CS1114 Section 6: Convolution, February 27th, 2013. Convolution is an important operation in signal and image processing. Using the DFT method. Usually, to find the inverse Laplace transform of a function, we use the property of linearity of the Laplace transform. Convolutional neural networks. Plot the output of linear convolution and the inverse of the DFT product to show the equivalence. Users can find the DFT and IDFT of 4-point and 8-point signal sequences in the frequency and time domain using the radix algorithm, and also linear convolution and circular convolution using radix. Overlap-Save and Overlap-Add; Circular and Linear Convolution; Modulo Indices and the Periodic Repetition (table of modulo indices omitted). First try on a circular convolution kernel. In this case, is matched to look for a "dc component," and also zero-padded by a factor of . Circular convolution: is there a way with Python to perform circular convolution between two 1D arrays, like with the MATLAB function cconv? I tried numpy. The convolution is between the Gaussian kernel and the function u, which helps describe the circle by being +1 inside the circle and −1 outside. Circular shift: in the previous example, the samples of xp(n−2) from 0 to N−1 result in a circularly shifted version of x(n) by 2. E of Chapter 1, where the DFT length is MN. Convolution in the discrete or analogous case. Abbasi, November 2, 2018 (compiled on May 23, 2020 at 2:56am): this describes a simple method I found to do circular convolution, which I think is simpler than the method I saw in Digital Signal Processing by Proakis and Manolakis. Nov 12, 2013: In some applications in coding theory, it is necessary to compute a 63-point circular convolution of two 63-point sequences x[n] and h[n]. Since the length of the linear convolution is (2L−1), the result of the 2L-point circular convolution in OSB Figure 8.18(f) is identical to the result of linear convolution.
Circular convolution can be performed in the following steps: take the discrete Fourier transform of the two vectors, multiply them pointwise, and take the inverse transform. Periodic or circular convolution is also called fast convolution. Each circle is 8 pixels in diameter. Just enter the set of values in the text box; the online DFT calculator tool will update the result. Once the final convolution result is obtained, the convolution time-shifting formula should be applied appropriately. Where you put the zeros depends on what you want to do, i.e., in the 1D case you can concatenate them on each end, but in 2D they are normally placed all the way around the original signal. In this method, the pixel lying outside the image, i.e.… In this way, the linear convolution between two sequences having different lengths can be computed. By doing zero padding, make the length of every sequence equal to the number of samples contained in the linear convolution. MATLAB has a built-in command for convolution, using which we can easily find the convolution of two functions. Although the description above implies a box filter, any arbitrary filter shape can be used for the filter convolution kernel. The calculator will find the inverse Laplace transform of the given function. (A convolution with five synapses, or a circular convolution with many synapses.) Time-invariant systems are systems where the output does not depend on when an input was applied. Let's watch a quick video clip getting the convolution result. Email: kabalan@aub. The periodic convolution sum introduced before is a circular convolution of fixed length—the period of the signals being convolved. Let x and y ∈ C^N.
Mar 08, 2000: The input function consists of two circles (cylinder functions), one displaced 8 pixels to the left and the other 8 pixels to the right. Perform the periodic (circular) convolution of the two sequences; let us verify that with a circular convolution table. How do we obtain the same result from linear and circular convolution? Calculate the value of N, that is, the number of samples contained in the linear convolution. Now, as in the circular convolution the vectors are only 5 long, the last 2 entries of the result will be added in at the front. Oct 19, 2017: The DFT provides an efficient way to calculate the time-domain convolution of two signals. You learned the exact convolution kernels used and also saw an example of how each operator modifies an image. The circular variance, V, measures the variation in the angles about the mean direction. Recall that $$$\mathcal{L}^{-1}\left(F(s)\right)$$$ is such a function `f(t)` that $$$\mathcal{L}\left(f(t)\right)=F(s)$$$. The Gaussian kernel is . The x and y components of that vector are each given by functions of the x and y coordinates. School of Health Information Sciences. Convolution affects the duration of the output signal. Convolution is used in differential equations, statistics, image and signal processing, probability, language processing and so on. The demo shows convolution of the following two functions: the lower blue line shows h(a−t), as a changes from −5 to 5. When the algorithm is Direct, this VI computes the convolution using the direct method of linear convolution. Linear systems are systems whose outputs for a linear combination of inputs are the same as a linear combination of individual responses to those inputs. At each position, calculate the dot product of the two.
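The wrap-around step just described (the trailing entries of the linear result are added back at the front) can be written out explicitly. An illustrative pure-Python sketch with invented length-5 vectors:

```python
def linear_conv(x, y):
    L = len(x) + len(y) - 1
    return [sum(x[k] * y[n - k] for k in range(len(x)) if 0 <= n - k < len(y))
            for n in range(L)]

def fold_to_circular(lin, N):
    # Wrap-around: sample i of the linear result contributes to index i mod N.
    out = [0] * N
    for i, v in enumerate(lin):
        out[i % N] += v
    return out

a, b = [1, 1, 1, 1, 1], [1, 2, 3, 4, 5]   # two length-5 vectors
lin = linear_conv(a, b)                    # length 9
print(fold_to_circular(lin, 5))            # -> [15, 15, 15, 15, 15]
```

Because a is all ones, every sample of the 5-point circular convolution equals the sum of b, which makes the folding easy to check by hand.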
Jul 25, 2016: Convolution is performed on Line 34 by taking the element-wise multiplication between the roi and the kernel, followed by summing the entries in the matrix. Convolution Calculator: the correlation function of f(T) is known as convolution and has the reversed function g(t−T). Find the sequence x3(m) which is equal to the circular convolution of the two sequences. Note that FFT convolution is a direct implementation of circular convolution in the time domain. Let denote the matrix of sampled DFT sinusoids for a length DFT.
CommonCrawl
laws of motion class 11 questions with answers: Physics NLM Chapter. Get important laws of motion class 11 questions with answers developed by expert faculties to provide affordable and quality education to students. View Class 11 Physics important questions with answers, fully solved from an exam point of view, on the topic Newton's Laws of Motion. These important questions will play a significant role in clearing concepts of the chapter. This question bank is designed by keeping NCERT in mind, and the questions are updated with respect to upcoming Board exams. You will get here all the important questions with their answers for Class 11 & 12 chapters. In this article students will get laws of motion questions with answers, which is an important topic of Class 11 Physics. Click Here for Detailed Notes of Newton's Laws of Motion along with other chapters and subjects. Q. Name one factor on which the inertia of a body depends. Q. If you jerk a piece of paper from under a book quickly enough, the book will not move. Why? Q. Is a bus moving along a circular track an inertial frame of reference? Q. A body is moving with uniform velocity. Can it be said to be in equilibrium? Q. Why do we beat a dusty blanket with a stick to remove dust particles? Q. An astronaut accidentally gets separated out of his small spaceship accelerating in inter-stellar space at a constant rate of . What is the acceleration of the astronaut the instant after he is outside the spaceship? Q. Why do we use shock absorbers in automobiles? Q. Why are bogies of a train provided with buffers? Q. If a body is not at rest, the net external force acting on it cannot be zero. Is it true or false? Q. What is an impulsive force? Q. A thief jumps from the upper storey of a house with a load on his back. What is the force of the load on his back when the thief is in air? Q. Why does a swimmer push the water backwards? Q. Can a body in linear motion be in equilibrium? Q.
A uniform rope of length and mass hangs from a support. What is the tension at a distance from the free end? Q. In the arrangement given, if the points and move down with a velocity , find the velocity of M. Q. A soda water bottle is falling freely. Will the bubbles of gas rise in the water of the bottle? Q. A bird is sitting on the floor of a wire cage and the cage is in the hands of a boy. The bird starts flying in the cage. Will the boy experience any change in the weight of the cage? Q. Air is thrown on a sail attached to a boat from an electric fan placed on the boat. Will the boat start moving? Q. Is a 'single isolated force' possible in nature? Q. A disc of mass $m$ is placed on a table. A stiff spring is attached to it and is vertical. To the other end of the spring is attached a disc of negligible mass. What minimum force should be applied to the upper disc to press the spring such that the lower disc is lifted off the table when the external force is suddenly removed? Q. Explain how a man walks on the ground. Q. A man weighs 70 kg. He stands on a weighing scale in a lift which is moving (a) upward with a uniform speed of $10 \mathrm{ms}^{-1}$, (b) downward with a uniform acceleration of $5 \mathrm{ms}^{-2}$, (c) upward with a uniform acceleration of $5 \mathrm{ms}^{-2}$. What would be the reading on the scale in each case? What would be the reading if the lift mechanism failed and it hurtled down freely under gravity? $\left[g=10 \mathrm{ms}^{-2}\right]$ Q. It is easier to catch a table tennis ball than a cricket ball, even when both are moving with the same velocity. Why? Q. In which case will a rope have the greater tension: (a) two men pull the ends of the rope with forces $F$ equal in magnitude but opposite in direction; (b) one end of the rope is fastened to a fixed support and the other is pulled by a man with a force $2 F$? Q. Ten one-rupee coins are put on top of each other on a table.
Each coin has a mass $m$ kg. Give the magnitude and direction of (a) the force on the $7^{\text {th }}$ coin (counted from the bottom) due to all the coins on its top, (b) the force on the $7^{\text {th }}$ coin by the eighth coin, (c) the reaction of the $6^{\text {th }}$ coin on the $7^{\text {th }}$ coin. [NCERT] Q. A force acting on a material particle of mass $m$ first grows to a maximum value $F_{m}$ and then decreases to zero. The force varies with time according to a linear law, and the total time of motion is $t_{m}$. What will be the velocity of the particle at the end of this interval if the initial velocity is zero? Q. The assertion made by Newton's first law of motion, that every body continues in its state of uniform motion in the absence of external force, appears to be contradicted in everyday experience. Why? Q. A stone when thrown on a glass window smashes the window pane to pieces, but a bullet from a gun passes through making a clean hole. Why? Q. A block is supported by a cord $C$ from a rigid support, and another cord $D$ is attached to the bottom of the block. If you give a sudden jerk to $D,$ it will break. But if you pull on $D$ steadily, $C$ will break. Why? Q. Why do you fall forward when a moving train decelerates to a stop and fall backward when a train accelerates from rest? What would happen if the train rounded a curve at constant speed? [NCERT] Q. "It is reasonable to consider the Earth as an inertial frame for many laboratory-scale terrestrial experiments, but the Earth is a non-inertial frame of reference for astronomical observations." Explain this statement. Do you see how the same frame of reference is approximately inertial for one purpose, and non-inertial for the other? [NCERT] Q. A mass of $6 \mathrm{kg}$ is suspended by a rope of length $2 \mathrm{m}$ from a ceiling. A force of $50 \mathrm{N}$ in the horizontal direction is applied at the mid-point of the rope, as shown in the figure.
What is the angle that the rope makes with the vertical in equilibrium? Neglect the mass of the rope. Given: $\mathrm{g}=10 \mathrm{ms}^{-2}$. [NCERT] Q. A piece of uniform string hangs vertically so that its free end just touches the horizontal surface of a table. If the upper end is released, show that at any instant during the fall of the string, the total force on the surface is three times the weight of that part of the string which is lying on the surface. [NCERT] Q. With what acceleration $a$ should a box descend so that a block of mass $M$ placed in it exerts a force $\frac{\mathrm{Mg}}{4}$ on the floor of the box? Q. The pulley arrangements of figures $(a)$ and $(b)$ are identical. The mass of the rope is negligible. In figure $(a),$ the mass $m$ is lifted up by attaching a mass $2 m$ to the other end of the rope. In figure (b), $m$ is lifted up by pulling the other end of the rope with a constant downward force $F=2 m g$. Calculate the accelerations in the two cases. Q. A monkey of mass 40 kg climbs on a rope which can withstand a maximum tension of $600 N .$ In which of the following cases will the rope break: the monkey (a) climbs up with an acceleration of $6 \mathrm{ms}^{-2}$, (b) climbs down with an acceleration of $4 \mathrm{ms}^{-2}$, (c) climbs up with a uniform speed of $5 \mathrm{ms}^{-1}$, (d) falls down the rope nearly freely under gravity? Take $g=10 \mathrm{ms}^{-2}$. Ignore the mass of the rope. [NCERT] Q. Give the magnitude and direction of the net force acting on (a) a drop of rain falling down with a constant speed, (b) a cork of mass 10 g floating on water, (c) a kite skilfully held stationary in the sky, (d) a car moving with a constant velocity of $30 \mathrm{km}$ , (e) a high-speed electron in space far from all gravitating objects and free of electric and magnetic fields. Q. For ordinary terrestrial experiments, which of the observers below are inertial and which non-inertial?
(a) A child revolving in a "giant wheel", (b) a driver in a sports car moving with a constant high speed of $200 \mathrm{km} \mathrm{h}^{-1}$ on a straight road, (c) the pilot of an aeroplane which is taking off, (d) a cyclist negotiating a sharp turn, (e) the guard of a train which is slowing down to stop at a station? [NCERT] Q. Explain why (a) a horse cannot pull a cart and run in empty space, (b) it is easier to pull a lawn mower than to push it, (c) a cricketer moves his hands backward when holding a catch. Q. Fig. shows the position-time graph of a particle of mass 0.04 kg. Suggest a suitable physical context for this motion. What is the time between two consecutive impulses received by the particle? What is the magnitude of each impulse? [NCERT] Q. Two identical billiard balls strike a rigid wall with the same speed but at different angles, and get reflected without any loss of speed, as shown in the figure below. What is (i) the direction of the force on the wall due to each ball? and (ii) the ratio of the magnitudes of the impulses imparted to the two balls by the wall? [NCERT] Q. A light string carrying a small bob hangs from the roof of a vehicle as shown in the figure. If the vehicle moves in a horizontal straight line with a constant acceleration of $2.4 \mathrm{ms}^{-2}$ from left to right, determine the angle which the string makes with the vertical. Q. Two masses 7 kg and 12 kg are connected at the two ends of a light inextensible string that passes over a frictionless pulley. Find the acceleration of the masses and the tension in the string when the masses are released. Q. Two pulleys of masses $12 \mathrm{kg}$ and $8 \mathrm{kg}$ are connected by a fine string hanging over a fixed pulley as shown. Over the latter is hung a fine string with masses $4 \mathrm{kg}$ and $M .$ Over the $12 \mathrm{kg}$ pulley is hung another fine string with masses 3 $\mathrm{kg}$ and $6 \mathrm{kg}$.
Calculate $M$ so that the string over the fixed pulley remains stationary. Q. One end of a string is attached to a 6 kg mass on a smooth horizontal table. The string passes over the edge of the table, and to its other end is attached a light smooth pulley. Over this pulley passes another string, to the ends of which are attached masses of $4 \mathrm{kg}$ and $2 \mathrm{kg}$ respectively, as shown. Show that the $6 \mathrm{kg}$ mass moves with an acceleration of $\frac{8 \mathrm{g}}{17}$. Q. A wooden block of mass 2 kg rests on a soft horizontal floor. When an iron cylinder of mass $25 \mathrm{kg}$ is placed on top of the block, the floor yields steadily and the block and the cylinder together go down with an acceleration of $0.1 \mathrm{ms}^{-2}$. What is the action of the block on the floor (a) before and (b) after the floor yields? Take $\mathrm{g}=10 \mathrm{ms}^{-2}$. [NCERT] eSaral provides you a complete edge to prepare for Board and competitive exams like JEE, NEET, BITSAT, etc. We have transformed the classroom in such a way that a student can study anytime, anywhere. With the help of AI we have made learning personalized, adaptive and accessible for each and every one. You will get other chapters' and subjects' important questions with answers for the Class 11 & 12 science stream. Visit the eSaral website to download or view free study material for JEE & NEET. Also get to know about the strategies to crack the exam in a limited time period. laws of motion questions and answers; laws of motion class 9 questions and answers; laws of motion class 11 questions with answers; laws of motion class 9 icse questions and answers. Manikanta - Nov. 10, 2020, 9:53 p.m. It's great to learn here. But in my opinion, more than the theory part, the problems part must be bigger, and the theory part is also fantastic: I wanted just one sum, but here I learned the full theory, even some things I didn't know. If there were a greater number of problems, it may be one of the best learning logistics. Thank you. SouLReGaLToS - Sept.
16, 2020, 5:36 p.m. THIS IS DAMN GOOD TO PRACTICE FOR NEWTON'S LAWS OF MOTION AS IT HAS ALL THE TYPES OF QUESTIONS. charmi - Sept. 7, 2020, 4:28 p.m.
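Several of the numerical questions above (the weighing scale in a lift, the descending block and cylinder) reduce to Newton's second law in the form R = m(g + a), where a is the signed vertical acceleration and R the normal reaction. A minimal sketch of the arithmetic, assuming g = 10 m s⁻² as in the questions:

```python
def normal_reaction(m, a=0.0, g=10.0):
    # Apparent weight (reading of a scale / action on a floor):
    # R = m * (g + a); a > 0 for upward acceleration, a < 0 for downward.
    return m * (g + a)

# 70 kg man on a weighing scale in a lift:
print(normal_reaction(70))           # uniform speed: 700 N
print(normal_reaction(70, a=-5))     # accelerating down at 5 m/s^2: 350 N
print(normal_reaction(70, a=5))      # accelerating up at 5 m/s^2: 1050 N
print(normal_reaction(70, a=-10))    # free fall: 0 N (apparent weightlessness)
```

For the block-and-cylinder question, the same formula with the combined mass m = 27 kg and a = −0.1 m s⁻² gives the action on the floor after it yields (267.3 N).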
CommonCrawl
Phenotypic differentiation of gastrointestinal microbes is reflected in their encoded metabolic repertoires Eugen Bauer1, Cedric Christian Laczny1, Stefania Magnusdottir1, Paul Wilmes1 & Ines Thiele1 Microbiome volume 3, Article number: 55 (2015) The Erratum to this article has been published in Microbiome 2016 4:35 The human gastrointestinal tract harbors a diverse microbial community, in which metabolic phenotypes play important roles for the human host. Recent developments in meta-omics attempt to unravel metabolic roles of microbes by linking genotypic and phenotypic characteristics. This connection, however, still remains poorly understood with respect to its evolutionary and ecological context. We generated automatically refined draft genome-scale metabolic models of 301 representative intestinal microbes in silico. We applied a combination of unsupervised machine-learning and systems biology techniques to study individual and global differences in genomic content and inferred metabolic capabilities. Based on the global metabolic differences, we found that energy metabolism and membrane synthesis play important roles in delineating different taxonomic groups. Furthermore, we found an exponential relationship between phylogeny and the reaction composition, meaning that closely related microbes of the same genus can exhibit pronounced differences with respect to their metabolic capabilities while at the family level only marginal metabolic differences can be observed. This finding was further substantiated by the metabolic divergence within different genera. In particular, we could distinguish three sub-type clusters based on membrane and energy metabolism within the Lactobacilli as well as two clusters within the Bifidobacteria and Bacteroides. We demonstrate that phenotypic differentiation within closely related species could be explained by their metabolic repertoire rather than their phylogenetic relationships.
These results have important implications for our understanding of the ecological and evolutionary complexity of the human gastrointestinal microbiome. Recent advances in sequencing technologies have greatly improved our knowledge about the metabolic complexity of the human microbiome and provide novel approaches to identify beneficial microbes [1]. In particular, sequencing the (ideally) entire genomic content (i.e., metagenomic sequencing) of the intestinal microbiota has allowed the establishment of a catalog of the main groups of microorganisms present in the gastrointestinal tract and potential metabolic pathways [3], while avoiding culturing and isolation of individual microbial organisms. In this respect, endeavors of the human microbiome project [4] and the MetaHIT consortium [5] aim at establishing comprehensive data-sets of metagenomic content, metabolic functions, and taxonomic compositions within human individuals, as well as the isolation and sequencing of numerous microbial taxa. Despite these efforts, however, we are still lacking a comprehensive mechanistic understanding of the intestinal microbiota. One major hurdle in achieving this goal is the lack of organismal system boundaries, which would enable us to associate the presence of metabolic pathways in the microbiome with a specific bacterium. Inferring metabolic roles by taxonomic classification alone is difficult because phylogenetically closely related organisms might be very different in their metabolism [6]. It may therefore be challenging to associate functional roles with entire taxonomic groups [7] and to conjecture the biological relevance of intestinal bacteria. For instance, members of the same genus, or even of the same species, can be both probiotic and pathogenic [8], indicating a differential strain-specific adaptation.
In this context, nutrient utilization can be a strong determinant for the adaptation to varying environments, since it can give a competitive advantage over other organisms that are metabolically less versatile. Thus, having additional metabolic functions can aid microbes in occupying further niches within the human gut. Accordingly, the functional consequences for the host change as well. Current developments in systems biology allow the modeling of microbial metabolism to gain a mechanistic insight into the relationship between genotype and phenotype [9]. Genome-scale metabolic reconstructions form the basis of such modeling efforts. A reconstruction is assembled based on the genomic sequences as well as biochemical and phenotypic data of a target organism, and accounts for metabolic genes, enzymes, and their associated reactions [10, 11]. Genome-scale metabolic reconstructions serve as a blueprint for condition-specific metabolic models [10, 11], which are obtained by the application of constraints, such as known nutrient uptake rates. The reconstruction process often includes a gap-filling procedure [12, 13], in which additional reactions are included to better model biologically relevant phenotypes, such as the formation of all known biomass precursors [14]. Metabolic models can be studied using a variety of mathematical methods [15]. One frequently used approach is flux balance analysis, which is applied to investigate a functional steady-state flux distribution of the modeled system, while maximizing (or minimizing) a particular cellular objective (e.g., production of biomass precursors) [10]. This modeling approach has been used to investigate nutrient requirements [16], gene essentialities [17], and metabolic interactions [18] for organisms of interest, thereby providing new insights into phenotypic and metabolic properties.
The reconstruction process relies on the availability of detailed phenotypic data for the target organism [11], which is usually not available for many of the commonly found microbes in the human gut [1, 3]. To obtain representative metabolic reconstructions for these less well-studied organisms, automatic tools have been developed in recent years, such as the Model SEED platform [19], to provide a valuable starting point for metabolic modeling. In fact, draft reconstructions have been used to generate hypotheses about the target organisms with subsequent experimental validation, leading to the refinement of the metabolic reconstruction [14, 20–22]. In this study, we generated automatically refined draft genome-scale metabolic models of 301 representative intestinal microbes in silico based on whole genome sequences of the human microbiome project using an established approach [19]. We applied a combination of unsupervised machine-learning and computational modeling techniques to study individual and global differences of the metabolic models and the original genomes. Our key results include: (i) divergent reactions involved in energy metabolism and membrane synthesis, which are the most relevant for discriminating between different phylogenetic groups; (ii) a linear relationship between differences in metabolic reaction potential and essential nutrients determined by flux balance analysis, which indicates that the phenotype is directly correlated with the metabolic repertoire; (iii) an exponential relationship between differences in metabolic reaction potential and phylogeny, suggesting an explanation as to why closely related microbes can be very different in their metabolic traits while at less-resolved phylogenetic distances only marginal differences in metabolic diversity can be observed; and (iv) local differences in pathway presence, which can be used to further distinguish representatives of Lactobacillus, Bifidobacteria, and Bacteroides.
In summary, we demonstrate the importance of the metabolic repertoire of microbes to predict their phenotypic behavior in an ecological and evolutionary context.

Selected microbes as a model for the human gut microbiota

In order to answer ecological and evolutionary questions relevant for human health and disease, we selected 301 commonly found gut microbes based on their reported occurrence in the healthy gut microbiome [1, 3] and the availability of sequenced isolate genomes (Fig. 1). We used the Model SEED platform [19] to generate automated draft genome-scale metabolic reconstructions for each microbe. To enable growth under anaerobic conditions, which are predominant in the human gut [23], we added specific reactions, if necessary (Additional file 1: Table S2). A comparison of our draft reconstructions with a set of published manually refined high-quality metabolic reconstructions taken from [24] revealed that most of the metabolic functionalities were captured in the refined draft reconstructions (Additional file 2: Figure S1). Reactions absent in the refined draft reconstructions belonged mostly to the category of transport and exchange reactions, whose addition requires experimental and physiological data, as substrate specificity and transport mechanisms are difficult to automatically annotate in microbial genomes [25].

Fig. 1 Phylogeny and individual statistics of the microbe selection. The cladogram shows the taxonomic relationships among the 301 microbes. In the four outer layers, the bars represent the relative individual genome size, number of genes, number of reactions, and in silico growth rate. The different colors represent the various bacterial classes. The leaf colors and shapes symbolize whether a microbe is a probiotic (green triangles), a pathogen (red diamonds), an opportunistic pathogen (red circles), or a non-pathogenic bacterium (white triangles)

Our set of refined draft reconstructions captured a wide spectrum of different phyla (Fig.
1) with a taxonomic diversity similar to what is commonly observed in the human intestine [3]. The high diversity and proportion of microbes within the phyla Bacteroidetes, Proteobacteria, and Firmicutes (Fig. 1) is concordant with observations in the human colon [26]. Moreover, by integrating information about pathogenic and beneficial traits of each microbe (Fig. 1), we were able to associate metabolic traits with the phenotype toward the host. As expected, a large proportion of probiotic Lactobacillus and Bifidobacteria could be found in the classes Bacilli and Actinobacteria, respectively (Fig. 1 and Table 1). Additionally, known pathogenic organisms within the Proteobacteria, Fusobacteria, and Bacilli are also represented. Thus, our selection of bacteria provides an appropriate representation of microbial species, phenotypic traits, and metabolic processes present in the colon, the main site of microbial fermentation and interaction of microbes with the host [27].

Table 1 Genome and metabolic model statistics of the selected microbes

Our analysis also included microbes with draft genomes (Additional file 3: Table S1), requiring the assessment of the overall genome completeness and the potential impact on gene annotations and consequently on the generated metabolic reconstructions. The completeness and possible genomic contamination of the 301 individual genomes by other microorganisms was assessed using a collection of 107 universal, single-copy genes [28, 29]. In our set of 301 genomes, the average estimated genome completeness was 95 % (Table 1). We further investigated the genome size and annotated genes among the 301 organisms (Table 1). Gammaproteobacteria had generally large genomes and a high number of annotated genes, while members of the order Bacteroidia had in general large genomes but a lower number of annotated genes.
This difference could be attributed to differences in annotation efficiencies, as Proteobacteria (in particular, gut-specific Escherichia species) are very well-studied and thus have more homologous genes. Consequently, the number of reactions in the constructed metabolic models was higher and the number of reactions added via gap-filling lower. In contrast, we found a higher number of gap-filled reactions and a lower number of reactions in Actinobacteria (Table 1). This bias, which is well established for metagenomic analyses [30], is most likely the result of having less experimental data and validated gene annotations available for Actinobacteria. The presence of this apparent annotation bias underlines the limitation in current annotation techniques affecting particularly phylogenetically distant microbes [29–31] and highlights the need for more detailed experimental biochemical studies to elucidate gene functions in phyla distant to those containing model organisms [31].

Global reaction differences recapitulate conserved taxonomic patterns and phenotypes

To assess the differences within the metabolic reconstructions, we tested whether they could recapitulate the taxonomy of the studied microbes. We therefore computed a metabolic distance between the reconstructions based on the reaction presence [32] and subsequently used principal coordinate analysis (PCoA) [33]. This analysis revealed clusters, which correspond to known taxonomic groups (Fig. 2). More specifically, with more than 30 % of explained variance, the first principal coordinate (Fig. 2) was able to discriminate between Gram-negative and Gram-positive bacteria, which is in concordance with traditional measures of broad taxonomic groups, assigned based on the phylogeny of the 16S rRNA gene, the production of fatty acids, and corresponding membrane lipid composition [34]. In our PCoA (Fig.
2), members of the class Negativicutes were closely associated with Gram-negative bacteria rather than their phylogenetically close Gram-positive relatives, which is in accordance with their unusual membrane composition including two membrane layers [35].

Fig. 2 Global differences within metabolic models and their most divergent reactions. Biplot of the principal coordinate analysis based on the metabolic distance determined by the presence/absence of specific reactions in the metabolic models. Taxonomic groups are represented by different colors. The 200 reactions most associated with the point separation are indicated as arrows pointing from the coordinate origin to the contributing direction. The arrow shading represents reactions overlapping in their direction of contribution. The complete set of 2272 reactions sorted by their relevance can be found in Additional file 4: Table S3

The separation between Gammaproteobacteria and Actinobacteria highlights that our reconstructions captured taxa-specific metabolic features, despite the mentioned annotation bias. Furthermore, Clostridia species showed a high metabolic diversity and overlapped with clusters of other microbial taxa (Fig. 2), which is consistent with the reported metabolic variety of these bacteria and their corresponding beneficial traits in the human gut [36]. Erysipelotrichia representatives are closely but nonetheless distinctly placed relative to the Clostridia in the 2D principal coordinate plot (Fig. 2). Intriguingly, members of Erysipelotrichia were formerly considered as Clostridia based on the phylogeny of marker genes [37] but then re-assigned to a novel class based on their phylogeny and membrane composition [38]. Similar to the Clostridia, Bacilli species were also widely spread in the 2D principal coordinate plot (Fig. 2), reflecting their metabolic versatility [39].
In contrast, other taxa had more dense clusters, particularly Actinobacteria, reflecting more specialized roles of these bacteria, such as the conversion of polysaccharides [40]. Overall, we propose that metabolic reconstructions could be used, in addition to canonical approaches, to assist in the taxonomic definition of novel microbes and the re-assignment of already described microbes into better defined taxonomic groups. In particular, our approach has the advantage of considering functional characteristics, in contrast to methods solely relying on the presence and phylogeny of marker genes. As also pointed out by previous studies [41], functional repertoires can have a positive influence on the annotation quality of taxonomic groups. Ultimately, this could shed light onto the metabolic versatility of microbes in general or in specific habitats, such as the human gut.

Energy and membrane metabolism as markers for metabolic divergence

Following the broader characterization, we aimed to obtain a better understanding of the reactions driving the observed separation in the first two coordinates. The separation of taxonomic groups is due to reactions involved in membrane synthesis and central metabolism (Fig. 2). In particular, different types of lysophospholipase reactions exhibit the highest explanatory power (Additional file 4: Table S3). These reactions convert various phospholipid precursors (differing in their number of C-atoms) and have the same direction in the first principal coordinate, because all of them can be carried out by a single enzyme and are thus linearly dependent. Similarly, the amylomaltases catalyze multiple reactions differing in their substrates (Additional file 4: Table S3). For the enoyl-ACP reductase, we found a variety of reactions with different directions toward the first principal coordinate (Fig. 2).
This variation in angle represents a potential variation in distinct yet convergent fatty acid synthesis processes involved in energy metabolism and known to be present in the human gut microbiome [2], thus contributing to the discrimination of the different types of bacteria. This observation is consistent with the fact that fatty acid profiles have been used to characterize microbial communities before the advent of nucleic acid-based methods [42]. The synthesis of endotoxins was positively associated with the distribution of Gram-negative pathogenic species within the Proteobacteria and Fusobacteria, which is in accordance with previously reported correlations between various diseases and the abundance of endotoxin-producing Proteobacteria [43]. The transport and utilization of diverse carbohydrates involved in energy metabolism, such as mannitol, mannose, and fructose, were positively associated with the location of the Bacilli cluster in the 2D principal coordinate plot (Fig. 2). This association highlights the variety of substrate consumption as represented by these reconstructions of microbial metabolism. In accordance with the literature, Bacilli are known to utilize a broad range of carbohydrates [44]. The differentiation of taxonomic groups based on reactions involved in energy and membrane metabolism may have important implications in understanding the evolution and heterogeneity of intestinal microbes. For instance, Gram-negative bacteria have been reported to change their membrane composition [45] in order to cope with environmental influences, such as antibiotics and human immune agents, many of which target bacterial membrane compounds [46].
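The metabolic distance and ordination used above can be sketched in a few lines. The following is a minimal illustration, assuming a Jaccard distance on binary reaction-presence vectors and classical multidimensional scaling as the PCoA; the four toy "models" and six "reactions" are invented for the example and are not taken from the 301 reconstructions.

```python
import numpy as np

# Toy reaction-presence matrix: 4 hypothetical models x 6 reactions
# (1 = reaction present in the reconstruction). Values are invented.
R = np.array([
    [1, 1, 1, 0, 0, 0],  # model A
    [1, 1, 0, 0, 0, 0],  # model B, close to A
    [0, 0, 1, 1, 1, 0],  # model C
    [0, 0, 0, 1, 1, 1],  # model D, close to C
])

def jaccard_distance(a, b):
    """1 - |intersection| / |union| of two binary reaction sets."""
    inter = np.sum((a == 1) & (b == 1))
    union = np.sum((a == 1) | (b == 1))
    return 1.0 - inter / union

n = R.shape[0]
D = np.array([[jaccard_distance(R[i], R[j]) for j in range(n)] for i in range(n)])

# Classical PCoA (metric MDS): double-center the squared distance
# matrix, then eigendecompose; coordinates are scaled eigenvectors.
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]        # largest eigenvalues first
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
pc1 = coords[:, 0]                       # first principal coordinate
```

On real data, `R` would span all models and reactions, and the fraction of variance explained by the first coordinate is its eigenvalue divided by the sum of the positive eigenvalues.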
Additionally, ecological changes within the microbial community [47] can provoke a differentiation in metabolic capabilities involved in energy metabolism, leading to altered interactions of the community with the human host, supporting the observed high explanatory power of metabolic reactions toward cluster separation.

The relationship between genotype, phenotype, and metabolic repertoire is non-linear

To further investigate the observed metabolic diversity (Fig. 2) and its evolutionary basis, we computed the phylogenetic relationship between the 301 bacteria based on 400 protein-coding metabolic genes [48] using two methanogenic archaea as outgroups (Additional file 5: Figure S2). On the basis of this rooted phylogenetic tree, we computed pairwise phylogenetic distances from the heights within the tree using the cophenetic distance [49]. While the clustering of this phylogenetic distance (Fig. 3) recapitulated the original phylogeny (Additional file 5: Figure S2), we additionally computed a genetic distance based on the 16S rRNA gene similarity of the microbes (Additional file 6: Figure S3), to ensure that our observations were reproducible with other methods or markers. The pairwise distance based on the phylogenetic tree and the inferred presence of distinct reactions were overall congruent with each other (Fig. 3). Interestingly, we identified an exponential relationship between phylogeny and metabolic repertoire (Fig. 4), which is in accordance with a previous study based on genomic measures [50]. To exclude potential artifacts resulting from the homology-based annotation methods (Model SEED) used for the generation of the metabolic reconstructions, we also determined the distance based on the presence of detected clusters of orthologous groups (COGs) [51] and Pfam protein domains [52]. These two measures also exhibited the same exponential relationship between metabolic repertoire and phylogeny (Fig. 4).
Importantly, this relationship indicates that closely related species can have an extremely divergent set of metabolic reactions, while at taxonomic ranks above the family level, only limited amounts of additional emergent features were observed. Since COG annotations and Pfam domains are prone to misclassification, we also included annotation measures with a higher quality, such as MetaCyc functionalities [53] as well as EC numbers (Additional file 7: Figure S4) and observed a comparable exponential trend. Similar observations have been obtained in published experimental studies based on the phenotypic properties of different strains from the same genus or species [6, 8], underlining the biological relevance of our observations. In the context of a microbial community or biofilm, our observed relationship explains why closely related taxonomic groups (e.g., species of the same genus) are able to co-exist, while the overall consortium is limited in its metabolic potential [54]. In addition to this result, we identified a linear relationship between the metabolic repertoire and the similarity of essential nutrients, which we calculated using flux balance analysis as a proxy for the metabolic phenotype (Fig. 4b). These findings complement previous knowledge about the relationship between genotype and phenotype by Plata et al. [55]. Here, a similar exponential relation was observed between microbial phylogeny and varying growth conditions in selected genome-scale metabolic models, which were not directly associated with a specific habitat [55]. Additionally, this relationship has also been found with respect to the phenotypic similarity based on gene essentiality and synthetic lethal genes [55]. Taking into account that these latter measures have been based on flux balance analysis and are thus analogous to our results, we conclude that the observed patterns are generally applicable to bacteria. 
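The saturating ("exponential") relationship between phylogenetic and metabolic distance can be illustrated with a simple curve fit. The functional form y = a(1 − e^(−bx)) and all parameter values below are our own assumptions for demonstration; the study does not specify the fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed saturating-exponential model: metabolic distance rises
# quickly at small phylogenetic distances and then plateaus.
def saturating(x, a, b):
    return a * (1.0 - np.exp(-b * x))

# Synthetic data standing in for pairwise distances (not study data).
rng = np.random.default_rng(0)
phylo = np.linspace(0.01, 3.0, 200)          # toy phylogenetic distances
metab = saturating(phylo, 0.8, 2.5) + rng.normal(0.0, 0.01, phylo.size)

popt, _ = curve_fit(saturating, phylo, metab, p0=[1.0, 1.0])
a_hat, b_hat = popt                          # plateau height, rise rate
```

The plateau parameter a corresponds to the limited additional divergence observed above the family level, while b controls how quickly closely related genomes diverge metabolically.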
Furthermore, we argue that the metabolic network, consisting of a set of reactions, is appropriate to represent and explain a phenotype (Fig. 4b). Assuming the metabolic repertoire as one of the major factors for the evolution of intestinal microbes, transfer of metabolic traits within different taxa may account for fast metabolic diversification of species and strains leading to niche partitioning. In fact, horizontal gene transfer has been shown to be enriched within organisms inhabiting the same environment, particularly the human gut [56]. In addition to the results of Plata et al. [55], we propose the metabolic repertoire as one of the major factors influencing the phenotypic differentiation of human gut microbial communities. Still, the clear separation of taxonomic groups noted above (Fig. 2) suggests that exchange of functionalities is limited to ensure a certain metabolic divergence within the whole microbiota to maintain functional diversity and limit competition between closely related organisms.

Fig. 3 Tanglegram between the hierarchical clustering of the phylogenetic and metabolic distance. Tanglegram between the dendrograms of the reaction distance according to the presence of specific reactions and the phylogenetic distance according to the cophenetic distance of the maximum likelihood tree (rooted with two methanogenic archaea) calculated from the sequence similarity of 400 selected essential genes. The dendrograms were calculated using hierarchical clustering with complete linkage. Lines connecting the same microbe are colored according to the taxonomic class

Fig. 4 Relationship between reaction content, phylogeny, and phenotype. The metabolic distance was determined according to the presence of specific reactions in the model (a, b). COG (c) and Pfam (d) functional differences were assessed by comparing the presence/absence of COG functions and Pfam domains for all genomes, respectively.
The phylogenetic distance is based on the cophenetic distance of the maximum likelihood tree (rooted with two methanogenic archaea) calculated from the sequence similarity of 400 selected essential genes (a, c, d). The phenotype divergence was represented by the difference in essential nutrients, which were determined by removing the nutrient of interest from the in silico medium and subsequently checking for growth/no growth with flux balance analysis (b). The shading of the points (a–d) represents the density of all pairwise comparisons between the microbe models (n = 45,150). The blue line (a–d) represents the moving average over the data points. The goodness of fit for both regression models (a, b) can be found in Table 2. The means of the phylogenetic (a, c, d) and metabolic distances (b) for each taxonomic category are indicated by dashed red lines

The relationship between phylogeny, metabolic repertoire, and phenotype is taxon-dependent

To account for taxon-dependent differences between microorganisms (Table 1), we focused our analysis on model subsets of the five classes and the three genera with the highest number of representatives (Table 2). Additionally, this focus allows us to elucidate whether our results were dependent on our selection of microbes or could be expanded to other microbes not considered in this study. We found that the exponential relationship between phylogeny and metabolic repertoire as well as the linear relationship between nutrient essentiality and metabolic repertoire was apparent for most taxonomic groups (Table 2). However, we noticed differences within the taxa. In particular, there was a considerable exponential fit for all five major bacterial classes except for Clostridia, which could be explained by Clostridia's broad metabolic versatility and the corresponding difficulties in the taxonomic assignment within this class [57].
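Assessing how strongly the metabolic distance tracks the phylogenetic distance within a taxon amounts to correlating the unique pairwise entries of the two distance matrices. Below is a minimal numpy sketch with invented toy matrices; in practice a Mantel test with permutations is preferable, since pairwise distances are not independent observations.

```python
import numpy as np

# Toy symmetric distance matrices over the same 4 hypothetical
# organisms (values invented): one "phylogenetic", one "metabolic".
phylo_D = np.array([
    [0.0, 0.2, 1.0, 1.2],
    [0.2, 0.0, 1.1, 1.3],
    [1.0, 1.1, 0.0, 0.3],
    [1.2, 1.3, 0.3, 0.0],
])
# A monotone transform of the phylogenetic distances keeps the
# diagonal at zero and mimics a strongly correlated metabolic distance.
metab_D = 1.0 - np.exp(-2.0 * phylo_D)

iu = np.triu_indices_from(phylo_D, k=1)  # unique pairwise comparisons
r = np.corrcoef(phylo_D[iu], metab_D[iu])[0, 1]  # Pearson correlation
```

A taxon such as Bacteroides, whose metabolic repertoire tracks phylogeny closely, would show a high r, whereas a metabolically versatile class such as Clostridia would show a weaker one.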
Our result is in accordance with the observed cluster variability of Clostridia when comparing the clustering of the metabolic and phylogenetic distance (principal coordinate analysis, Fig. 3). When investigating individual genera, we detected a high correlation between essential nutrients and the metabolic repertoire of Bifidobacteria, whereas the correlation between their phylogeny and metabolic repertoire was less pronounced (Table 2, Fig. 3). For members of the genus Bacteroides, the metabolic repertoire correlated strongly with their phylogeny (Fig. 3), but only weakly with the essential nutrients (Table 2). Based on these results, we propose that the divergence within this genus can be explained by divergence in metabolic pathways relating to membrane synthesis (Fig. 5) rather than energy metabolism and thus nutrient essentiality. For the Lactobacillus genus, we found a strong correlation between metabolic potential and both phylogeny and essential nutrients. Within this genus, energy metabolism explained particular phenotypic divergences of species (Fig. 5), which is consistent with the observed high correlation between reactions involved in nutrient uptake and the clustering of representatives of the Bacilli in the principal coordinate plot (Fig. 2). Taken together, our results show a generality of the observed relationships between phylogeny, metabolic repertoire, and nutrient essentiality within and between taxonomic groups (Fig. 4).

Table 2 Summary statistics of the relationship between reaction content, phylogeny, and essential nutrients

Fig. 5 Local differences within metabolic models and their sub-type-specific pathways. The metabolic distance was determined according to the presence of specific reactions in the model (a). t-SNE was performed to obtain a low dimensional representation of the local differences within taxonomic groups which are represented by the different colors.
Sub-types are defined based on hierarchical clustering of the reaction similarities. Members of one sub-type are connected with lines which originate from the cluster centroid. The ellipses represent confidence intervals of the clusters with a certainty of 95 %. Distinguished pathways within sub-types include the genera Lactobacillus (b), Bifidobacterium (c), and Bacteroides (d). The pathways occurring in only one of the sub-types are framed by boxes carrying the corresponding cluster name. Reactions within pathways are represented by black arrows. GAP glycerol-3-phosphate, PEPG peptidoglycan, PGP phosphatidylglycerophosphate, PG phosphatidylglycerol, APG 1-Acyl-sn-glycero-3-phosphoglycerol, TTDCA tetradecanoate, HDCA heptadecanoate, OCDCA octadecanoate

Reaction differences reflect metabolic versatility among closely related microbes

To further investigate the metabolic divergence within closely related microbes of the same taxonomic group, we used t-distributed stochastic neighbor embedding (t-SNE) [58] for the two-dimensional visualization of the reaction similarities (Fig. 5, see Additional file 8: Figure S5 for point labels). t-SNE is a non-linear, non-parametric dimensionality reduction technique and has been used previously to reveal data-inherent cluster structures [59–61]. This method enabled us to identify fine-scale reaction differences, in addition to the principal coordinate analysis (Fig. 2). Several distinct clusters were apparent and corresponded to the different bacterial classes (Fig. 5). We further focused our analysis on the three most abundant genera in our model selection (Table 1). For Lactobacillus, we noted a widespread metabolic repertoire and thus a relatively large variability of members within this group (Fig. 5). We identified three distinct subclusters (La1, La2, and La3) within this genus. While La2 showed major overlaps with the other two clusters, La1 and La3 were distinct from each other.
We investigated the differences in the reaction sets between the representatives of the different subclusters (Additional file 9: Table S4). Based on the present reaction sets, La1 corresponds to obligate homofermentative, La2 to facultative homofermentative, and La3 to obligate heterofermentative pathways involved in the energy metabolism of lactic acid bacteria (Fig. 5b). The pathway presence in the genomes explains why La2 overlaps with the other clusters, since the facultative homofermentative group (La2) shares reactions with the obligate homofermentative (La1) and heterofermentative group (La3) [44]. In agreement with the literature, these subclusters correspond to known divergent pathways involved in energy metabolism in Lactobacilli [39]. This distinction of biologically relevant phenotypic groups using predicted differences in metabolic reactions encouraged us to propose novel bacterial sub-types. Therefore, we confirmed our choice of the number of subclusters by performing hierarchical clustering (Fig. 3) to ensure that the subclusters were substantially different. For the Bifidobacteria, we propose two distinct subclusters (Bi1 and Bi2), which differed in the reactions involved in energy metabolism and membrane biosynthesis (Fig. 5c). For the energy metabolism, numerous reactions involved in the uptake and utilization of diverse carbohydrates were observed for members of the subcluster Bi1 (Additional file 11: Table S5), corresponding to known strain-specific differences within closely related Bifidobacteria [62]. Furthermore, we found reactions involved in the uptake and conversion of glucosamine to peptidoglycan, which could be associated with membrane composition in these two groups. To our knowledge, such pathway differentiation has not yet been proposed for Bifidobacteria. For the Bacteroidia, we could distinguish two subclusters (Ba1 and Ba2). The differences between these clusters can be attributed to membrane biosynthesis (Fig.
5d; Additional file 12: Table S6). Members of Ba2 possess various pathway types leading to the production of varying phosphatidylglycerol compounds, whereas members of Ba1 can further process phosphatidylglycerol to myristic acid. This finding is of particular biological importance when considering the virulence and signaling purposes of membrane lipids in Bacteroides species found in previous studies [63, 64], which links the phenotype to the synthesis of membrane compounds. Furthermore, since energy metabolism and substrate availability via the diet are major ecological driving forces within the human gut microbiota [2], the metabolic diversification of other closely related microbes, such as Lactobacillus spp. and Bifidobacterium spp., can be a necessary requirement to maintain a stable coexistence with each other and the host. Considering that optimal conditions for metabolic cooperation are dependent on the similarity between the metabolic repertoires of several species [32], this pathway analysis approach could be used to estimate cooperative as well as competitive strategies. In particular, microorganisms tend to have a higher cooperativity if they are neither too similar nor too different [65], indicating that members of the same taxon, but different subclusters (Fig. 5b) might be able to co-exist, whereas functionally similar microbes may be more likely to compete with each other [54]. The requirement for a certain functional diversity to ensure a well-functioning cooperative intestinal microbiota is crucial to break down various complex dietary compounds and divide metabolic tasks among different community members [66]. Our results complement these ideas by investigating the metabolic divergence within a model microbiota, which can be primarily distinguished by reactions involved in energy and membrane metabolism.
These capabilities play important roles in shaping the interface between host and symbionts, and thereby may lead to a deeper understanding in addition to metagenomic analyses in which all microbial functions are assessed [1]. Furthermore, the metabolic repertoire of microbes is proportional to their phenotypic properties, highlighting the importance of diversity in explaining the metabolic processes taking place within the human gut. In contrast to these properties, the metabolic repertoire exhibited an exponential relationship with phylogeny, underlining the challenges in inferring metabolic functions from phylogeny alone, in particular when using single gene-centric approaches such as 16S rRNA gene amplicon sequencing. Moreover, this circumstance can be regarded as an important evolutionary and ecological feature of the microbiome; functional components constituting whole pathways can be very different within closely related species, whereas the overall metabolic repertoire is limited. In other words, by dividing the metabolic tasks between certain taxonomic groups, the microbiota can make efficient use of a small set of functions, thereby facilitating niche partitioning. This result has important implications when considering the overall species richness of the human gut microbiome in the context of different patients and diseases [67]. Further analyses could test these concepts by modeling interactions between bacteria using the genome-scale metabolic models reconstructed and refined here.

Metabolic model selection, construction, and refinement

We selected a set of 301 microbes (Additional file 3: Table S1) representing species present in the normal gut microbiota of healthy individuals, according to previous studies [1, 3]. We retrieved the genome sequences as well as additional information about the sequencing status, oxygen requirement, taxonomic placement, and phenotype from the integrated microbial genome database [68].
The completeness of the individual 301 genomes, as well as possible contamination by other microorganisms, was assessed using a collection of 107 universal, single-copy genes [28]. The genomic sequences were uploaded to the RAST server [69] for gene annotation using default parameters. Draft metabolic reconstructions were then built from these genome annotations using the Model SEED pipeline [19]. To ensure that the metabolic models are able to grow under anaerobic conditions, which are prevalent in their natural ecosystem, we modified, where necessary, one to five reactions to enable anaerobic growth. The reactions modified for each model are listed in Additional file 1: Table S2. For descriptive purposes, reactions in the metabolic models were translated into our in-house metabolite and reaction database. The original SEED reaction nomenclature was maintained for the growth simulation. All refined draft metabolic models are publicly available in their Matlab format at http://thielelab.uni.lu/in_silico_models.

Growth simulation

To compute different growth conditions, the metabolic reconstructions were subjected to flux balance analyses [10] with the COBRA Matlab toolbox [9] using IBM ILOG CPLEX as the linear programming solver (IBM, Inc.). Briefly, genome-scale metabolic models were represented as a stoichiometric matrix S, which encodes information about the mass balance of the complete set of enzymatic and transport reactions as well as a biomass reaction. The biomass reaction was retrieved from the metabolic reconstructions and represents the production of cellular building blocks (e.g., cofactors, amino acids, and lipids). Based on the stoichiometry, we could distinguish 17 distinct biomass reactions in our set of models, and based on the qualitative presence of compounds, we could distinguish 6 types of distinct biomass reactions (Additional file 3: Table S1).
Hence, the biomass reactions automatically included by the Model SEED pipeline differ between models and therefore reflect the different precursor needs of the considered microbes. Given this reaction as an objective for the biological system, the metabolic fluxes of all reactions in steady state that maximize growth can be determined by the following optimization problem:

$$ \begin{aligned} \text{maximize} \quad & v_b \\ \text{subject to} \quad & S \cdot v = 0 \\ & v_{i,\min} \le v_i \le v_{i,\max}, \quad \forall i \in n\ \text{reactions} \end{aligned} $$

with $v_b$ the flux through the biomass objective function, $v$ the vector of all reaction fluxes, and $v_{i,\min}$ and $v_{i,\max}$ the minimal and maximal flux capacities of reaction $i$. The solution (the metabolic fluxes of all reactions) of this optimization problem can be obtained using linear programming. The flux through the biomass reaction can be interpreted as the growth rate of the microbe model. By setting the constraints $v_{i,\min}$ and $v_{i,\max}$ of exchange (transport) reactions, varying growth conditions can be simulated. Throughout this study, the maximal uptake was constrained to 10 mmol/gDW/h to approximate naturally occurring conditions. The maximal achievable growth rate was calculated under these conditions by assuming that all exchange reactions are potentially active (equivalent to a rich medium condition). Additionally, the absence of a particular metabolite in the medium was simulated by setting its minimal and maximal exchange reaction constraints ($v_{i,\min}$ and $v_{i,\max}$) to 0 mmol/gDW/h. By iteratively removing each metabolite individually from the rich medium for each microbe model, different growth conditions were simulated. Essential nutrients were defined by growth rates smaller than 0.05 h−1 after removal from the medium.
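As a rough illustration of this procedure (not the actual Model SEED reconstructions), the optimization and the single-nutrient removal screen can be sketched with a generic linear-programming solver; the three-reaction toy network below is entirely hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: R1 uptake (medium -> A), R2 (A -> B), R3 biomass drain (B ->)
S = np.array([[1.0, -1.0,  0.0],   # metabolite A
              [0.0,  1.0, -1.0]])  # metabolite B
c = np.array([0.0, 0.0, -1.0])     # maximize v_b (= v3) by minimizing -v3

def fba(bounds):
    """Solve max v_b subject to S*v = 0 and v_min <= v <= v_max."""
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
    return -res.fun if res.success else 0.0

rich = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]  # uptake capped at 10 mmol/gDW/h
growth_rich = fba(rich)

# Simulate removal of the nutrient by closing its exchange reaction (v_min = v_max = 0)
growth_removed = fba([(0.0, 0.0)] + rich[1:])
essential = growth_removed < 0.05  # essentiality cutoff used in the text
```

In this toy case the nutrient is essential: closing its exchange reaction drives the biomass flux to zero. In the study this screen was run once per metabolite per model.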
This cutoff was based on the estimated growth rate of microbes within the mammalian gut [70]; however, all calculated growth rates below this cutoff were smaller than 0.0001 h−1 and thus negligible.

Data mining of metabolic and genomic information

To assess the differences between the individual microbes, we used the reaction content and essential nutrients as well as COG functions and Pfam domains. The reaction content was based on the metabolic models obtained from Model SEED [71], whereas the COG functions and Pfam domains were obtained from the integrated microbial genomes database [68]. For each microbe, the presence and absence of reactions, essential nutrients, and functions were assessed in relation to the union of all metabolic reconstructions and genome annotations, respectively. The resulting binary vector b was then compared between species i and j with the Jaccard index:

$$ \frac{|b_i \cap b_j|}{|b_i \cup b_j|} $$

to calculate the metabolic proximity according to [32]. Based on the obtained distance matrix of the reaction content, we used principal coordinate analysis [33] and t-SNE [58] to reduce the dimensionality from 301 to 2. The two-dimensional embeddings were visualized by scatter plots. Using principal coordinate analysis, we analyzed reaction differences between the metabolic models on a global scale by correlating each reaction to the principal coordinates and subsequently selecting the 200 reactions with the highest correlation (Additional file 4: Table S3). The t-SNE-based visualization was used to identify local differences, with a detailed analysis of cluster structures within the genera Lactobacillus, Bifidobacterium, and Bacteroides. The reaction set differences between the determined sub-types of these genera were then used to identify type-specific pathways.

Phylogenetic analysis

In addition to the determined metabolic differences, we used the phylogenetic relationships between the microbes as a measure of divergence.
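The Jaccard-based comparison above can be sketched in a few lines; the three binary presence/absence vectors here are made up purely for illustration:

```python
import numpy as np

# Hypothetical reaction presence/absence vectors (1 = reaction present in the model)
b = {
    "sp1": np.array([1, 1, 1, 0, 0, 1]),
    "sp2": np.array([1, 1, 0, 0, 1, 1]),
    "sp3": np.array([0, 0, 1, 1, 1, 0]),
}

def jaccard_index(x, y):
    # |intersection| / |union| of the two reaction sets
    return np.logical_and(x, y).sum() / np.logical_or(x, y).sum()

species = list(b)
# Pairwise distance matrix (1 - Jaccard index), usable as input for PCoA or t-SNE
D = np.array([[1.0 - jaccard_index(b[i], b[j]) for j in species]
              for i in species])
```

The resulting symmetric matrix D plays the role of the distance matrix that was fed into the dimensionality-reduction steps described above.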
The phylogeny was computed with PhyloPhlAn, which uses a set of around 400 protein-coding genes for phylogenetic placement [48]. In addition to the 301 bacterial genomes, the genomes of the archaea Methanobrevibacter smithii ATCC 35061 and Methanosphaera stadtmanae DSM 3091 were used as an out-group to root the phylogenetic tree (Additional file 5: Figure S2). The resulting phylogenetic tree was visualized using EvolView [72]. The phylogenetic difference between the different bacteria was computed as the cophenetic distance based on the rooted tree [49].

Correlation between phylogeny, metabolic repertoire, and essential nutrients

We determined the relationship between the metabolic repertoire of the models and the phylogenetic distance, as well as its relation to the predicted essential nutrients, by representing the phylogenetic distance as a function of the metabolic distance. We fitted different regression functions and found an exponential model defined by:

$$ y = 10^{\alpha + \beta x} $$

to be the most suitable for explaining the relationship between metabolic distance x and phylogenetic distance y. For the relationship between essential nutrient difference z and metabolic difference x, we found a linear model defined by:

$$ z = \alpha + \beta x $$

to be the best fit. We complemented the exponential model with the Spearman correlation and the linear model with the Pearson correlation as a measure of association between the variables. The goodness-of-fit measures for the different models and subsets of the data can be found in Table 2. The fitted parameters α and β for all plots in Fig. 4 can be found in Additional file 10: Table S7.

References

Qin J, Li Y, Cai Z, Li S, Zhu J, Zhang F, et al. A metagenome-wide association study of gut microbiota in type 2 diabetes. Nature. 2012;490(7418):55–60. Flint HJ, Duncan SH, Scott KP, Louis P (2015). Links between diet, gut microbiota composition and gut metabolism.
Proceedings of the Nutrition Society, 74, pp 13-22.doi:10.1017/S0029665114001463. Qin J, Li R, Raes J, Arumugam M, Burgdorf KS, Manichanh C, et al. A human gut microbial gene catalogue established by metagenomic sequencing. Nature. 2010;464(7285):59–65. Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett C, Knight R, Gordon JI. The human microbiome project: exploring the microbial part of ourselves in a changing world. Nature. 2007;449(7164):804. Ehrlich SD. MetaHIT: The European Union Project on metagenomics of the human intestinal tract. Metagenomics of the Human Body. New York: Springer; 2011. p. 307–16. Monk JM, Charusanti P, Aziz RK, Lerman JA, Premyodhin N, Orth JD, et al. Genome-scale metabolic reconstructions of multiple Escherichia coli strains highlight strain-specific adaptations to nutritional environments. Proc Natl Acad Sci. 2013;110(50):20338–43. Arumugam M, Raes J, Pelletier E, Le Paslier D, Yamada T, Mende DR, et al. Enterotypes of the human gut microbiome. Nature. 2011;473(7346):174–80. Vebø HC, Solheim M, Snipen L, Nes IF, Brede DA. Comparative genomic analysis of pathogenic and probiotic Enterococcus faecalis isolates, and their transcriptional responses to growth in human urine. PLoS One. 2010;5(8), e12489. Schellenberger J, Que R, Fleming RM, Thiele I, Orth JD, Feist AM, et al. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2. 0. Nat Protoc. 2011;6(9):1290–307. Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010;28(3):245–8. Thiele I, Palsson BØ. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat Protoc. 2010;5(1):93–121. Kumar VS, Dasika MS, Maranas CD. Optimization based automated curation of metabolic reconstructions. BMC bioinformatics. 2007;8(1):212. Thiele I, Vlassis N, Fleming RM. fastGapFill: efficient gap filling in metabolic networks. Bioinformatics. 2014;30(17):2529–31. 
Reed JL, Patel TR, Chen KH, Joyce AR, Applebee MK, Herring CD, et al. Systems approach to refining genome annotation. Proc Natl Acad Sci. 2006;103(46):17480–4. Lewis NE, Nagarajan H, Palsson BO. Constraining the metabolic genotype–phenotype relationship using a phylogeny of in silico methods. Nat Rev Microbiol. 2012;10(4):291–305. Suthers PF, Dasika MS, Kumar VS, Denisov G, Glass JI, Maranas CD. A genome-scale metabolic reconstruction of Mycoplasma genitalium, iPS189. PLoS Comput Biol. 2009;5(2), e1000285. Edwards J, Palsson B. The Escherichia coli MG1655 in silico metabolic genotype: its definition, characteristics, and capabilities. Proc Natl Acad Sci. 2000;97(10):5528–33. Heinken A, Sahoo S, Fleming RM, Thiele I. Systems-level characterization of a host-microbe metabolic symbiosis in the mammalian gut. Gut Microbes. 2013;4(1):28–40. Henry CS, DeJongh M, Best AA, Frybarger PM, Linsay B, Stevens RL. High-throughput generation, optimization and analysis of genome-scale metabolic models. Nat Biotechnol. 2010;28(9):977–82. Ottar R, Giuseppe P, Manuela M, Bernhard OP, Ines T. Inferring the metabolism of human orphan metabolites from their metabolic network context affirms human gluconokinase activity. Biochem J. 2013;449(2):427–35. Orth JD, Palsson B. Gap-filling analysis of the iJO1366 Escherichia coli metabolic network reconstruction for discovery of metabolic functions. BMC Syst Biol. 2012;6(1):30. Manichaikul A, Ghamsari L, Hom EF, Lin C, Murray RR, Chang RL, et al. Metabolic network analysis integrated with transcript verification for sequenced genomes. Nat Methods. 2009;6(8):589. Evaldson G, Heimdahl A, Kager L, Nord C. The normal human anaerobic microflora. Scand J Infect Dis Suppl. 1981;35:9–15. Heinken A, Thiele I. Systematic prediction of health-relevant human-microbial co-metabolism through a computational framework. Gut Microbes. 2015;6(2):120–30. Lee TJ, Paulsen I, Karp P. Annotation-based inference of transporter function. Bioinformatics. 
2008;24(13):i259–i67. Stearns JC, Lynch MD, Senadheera DB, Tenenbaum HC, Goldberg MB, Cvitkovitch DG, et al. Bacterial biogeography of the human digestive tract. Sci Rep. 2011;1. Wong JM, de Souza R, Kendall CW, Emam A, Jenkins DJ. Colonic health: fermentation and short chain fatty acids. J Clin Gastroenterol. 2006;40(3):235–43. Dupont CL, Rusch DB, Yooseph S, Lombardo M-J, Richter RA, Valas R, et al. Genomic insights to SAR86, an abundant and uncultivated marine bacterial lineage. ISME J. 2011;6(6):1186–99. Prentice MB. Bacterial comparative genomics. Genome Biol. 2004;5(8):338. Carr R, Borenstein E. Comparative analysis of functional metagenomic annotation and the mappability of short reads. PLoS One. 2014;9(8), e105776. El Yacoubi B, de Crécy-Lagard V. Integrative Data-Mining Tools to Link Gene and Function. Gene Function Analysis. Springer; 2014. p. 43–66. http://dx.doi.org/10.1007/978-1-62703-721-1_4 Mazumdar V, Amar S, Segre D. Metabolic proximity in the order of colonization of a microbial community. PLoS One. 2013;8(10), e77617. Gower JC, Legendre P. Metric and Euclidean properties of dissimilarity coefficients. J Classif. 1986;3(1):5–48. Garrity GM, Bell JA, Lilburn TG. Taxonomic outline of the prokaryotes. Bergey's manual of systematic bacteriology: Springer, New York, Berlin, Heidelberg; 2004. Marchandin H, Teyssier C, Campos J, Jean-Pierre H, Roger F, Gay B, et al. Negativicoccus succinicivorans gen. nov., sp. nov., isolated from human clinical samples, emended description of the family Veillonellaceae and description of Negativicutes classis nov., Selenomonadales ord. nov. and Acidaminococcaceae fam. nov. in the bacterial phylum Firmicutes. Int J Syst Evol Microbiol. 2010;60(6):1271–9. Louis P, Flint HJ. Diversity, metabolism and microbial ecology of butyrate‐producing bacteria from the human large intestine. FEMS Microbiol Lett. 2009;294(1):1–8. Yutin N, Galperin MY.
A genomic update on clostridial phylogeny: Gram‐negative spore formers and other misplaced clostridia. Environ Microbiol. 2013;15(10):2631–41. Ludwig W, Schleifer K, Whitman W. Class III. Erysipelotrichia class. nov. Bergey's Manual of Systematic Bacteriology. 2009;3:1298. Adler P, Bolten CJ, Dohnt K, Hansen CE, Wittmann C. Core fluxome and metafluxome of lactic acid bacteria under simulated cocoa pulp fermentation conditions. Appl Environ Microbiol. 2013;79(18):5670–81. Turroni F, Ribbera A, Foroni E, van Sinderen D, Ventura M. Human gut microbiota and bifidobacteria: from composition to functionality. Antonie Van Leeuwenhoek. 2008;94(1):35–50. Zhu C, Delmont TO, Vogel TM, Bromberg Y. Functional basis of microorganism classification. PLoS Comput Biol. 2015;11(8), e1004472. Guckert JB, Ringelberg DB, White DC, Hanson RS, Bratina BJ. Membrane fatty acids as phenotypic markers in the polyphasic taxonomy of methylotrophs within the Proteobacteria. J Gen Microbiol. 1991;137(11):2631–41. Tilg H, Kaser A. Gut microbiome, obesity, and metabolic dysfunction. J Clin Invest. 2011;121(6):2126–32. Kandler O. Carbohydrate metabolism in lactic acid bacteria. Antonie Van Leeuwenhoek. 1983;49(3):209–24. Gupta RS. Origin of diderm (Gram-negative) bacteria: antibiotic selection pressure rather than endosymbiosis likely led to the evolution of bacterial cells with two membranes. Antonie Van Leeuwenhoek. 2011;100(2):171–82. Bush K. Antimicrobial agents targeting bacterial cell walls and cell membranes. Rev Sci Tech. 2012;31(1):43–56. D'Souza G, Waschina S, Pande S, Bohl K, Kaleta C, Kost C. Less is more: selective advantages can explain the prevalent loss of biosynthetic genes in bacteria. Evolution. 2014. Segata N, Börnigen D, Morgan XC, Huttenhower C. PhyloPhlAn is a new method for improved phylogenetic and taxonomic placement of microbes. Nat Commun. 2013;4. Sokal RR, Rohlf FJ. The comparison of dendrograms by objective methods. Taxon. 1962;11(2):33–40.
Zaneveld JR, Lozupone C, Gordon JI, Knight R. Ribosomal RNA diversity predicts genome diversity in gut bacteria and their relatives. Nucleic Acids Res. 2010;38(12):3869–79. Natale DA, Shankavaram UT, Galperin MY, Wolf YI, Aravind L, Koonin EV. Towards understanding the first genome sequence of a crenarchaeon by genome annotation using clusters of orthologous groups of proteins (COGs). Genome Biol. 2000;1(5):RESEARCH0009. Finn RD, Bateman A, Clements J, Coggill P, Eberhardt RY, Eddy SR, et al. Pfam: the protein families database. Nucleic Acids Res. 2013;42:gkt1223. Caspi R, Altman T, Dale JM, Dreher K, Fulcher CA, Gilham F, et al. The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases. Nucleic Acids Res. 2010;38 suppl 1:D473–D9. Wilmes P, Bowen BP, Thomas BC, Mueller RS, Denef VJ, VerBerkmoes NC, et al. Metabolome-proteome differentiation coupled to microbial divergence. MBio. 2010;1(5):e00246–10. Plata G, Henry CS, Vitkup D. Long-term phenotypic evolution of bacteria. Nature. 2014. Smillie CS, Smith MB, Friedman J, Cordero OX, David LA, Alm EJ. Ecology drives a global network of gene exchange connecting the human microbiome. Nature. 2011;480(7376):241–4. Collins M, Lawson P, Willems A, Cordoba J, Fernandez-Garayzabal J, Garcia P, et al. The phylogeny of the genus Clostridium: proposal of five new genera and eleven new species combinations. Int J Syst Bacteriol. 1994;44(4):812–26. Van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9:2579–2605. Amir E-aD, Davis KL, Tadmor MD, Simonds EF, Levine JH, Bendall SC, et al. viSNE enables visualization of high dimensional single-cell data and reveals phenotypic heterogeneity of leukemia. Nat Biotechnol. 2013;31(6):545–52. Platzer A. Visualization of SNPs with t-SNE. PLoS One. 2013;8(2), e56883. Laczny CC, Pinel N, Vlassis N, Wilmes P. Alignment-free visualization of metagenomic data by nonlinear dimension reduction. Sci Rep. 2014;4.
Lee J-H, O'Sullivan DJ. Genomic insights into bifidobacteria. Microbiol Mol Biol Rev. 2010;74(3):378–416. An D, Na C, Bielawski J, Hannun YA, Kasper DL. Membrane sphingolipids as essential molecular signals for Bacteroides survival in the intestine. Proc Natl Acad Sci. 2011;108(Supplement 1):4666–71. Nair B, Mayberry W, Dziak R, Chen P, Levine M, Hausmann E. Biological effects of a purified lipopolysaccharide from Bacteroides gingivalis. J Periodontal Res. 1983;18(1):40–9. Chiu H-C, Levy R, Borenstein E. Emergent biosynthetic capacity in simple microbial communities. PLoS Comput Biol. 2014;10(7), e1003695. Blaut M, Clavel T. Metabolic diversity of the intestinal microbiota: implications for health and disease. J Nutr. 2007;137(3):751S–5S. Le Chatelier E, Nielsen T, Qin J, Prifti E, Hildebrand F, Falony G, et al. Richness of human gut microbiome correlates with metabolic markers. Nature. 2013;500(7464):541–6. Markowitz VM, Chen I-MA, Palaniappan K, Chu K, Szeto E, Grechkin Y, et al. IMG: the integrated microbial genomes database and comparative analysis system. Nucleic Acids Res. 2012;40(D1):D115–D22. Aziz RK, Bartels D, Best AA, DeJongh M, Disz T, Edwards RA, et al. The RAST Server: rapid annotations using subsystems technology. BMC Genomics. 2008;9(1):75. Gibbons R, Kapsimalis B. Estimates of the overall rate of growth of the intestinal microflora of hamsters, guinea pigs, and mice. J Bacteriol. 1967;93(1):510. Overbeek R, Begley T, Butler RM, Choudhuri JV, Chuang H-Y, Cohoon M, et al. The subsystems approach to genome annotation and its use in the project to annotate 1000 genomes. Nucleic Acids Res. 2005;33(17):5691–702. Zhang H, Gao S, Lercher MJ, Hu S, Chen W-H. EvolView, an online tool for visualizing, annotating and managing phylogenetic trees. Nucleic Acids Res. 2012;40(W1):W569–W72. Albertsen M, Hugenholtz P, Skarshewski A, Nielsen KL, Tyson GW, Nielsen PH. 
Genome sequences of rare, uncultured bacteria obtained by differential coverage binning of multiple metagenomes. Nat Biotechnol. 2013;31(6):533–8.

The authors are thankful to Mrs Almut Heinken for helping with the refinement of the draft metabolic models to account for anaerobic metabolism and to Dr. Dmitry Ravcheev for providing information about aerobic and anaerobic metabolism. This study was supported by the ATTRACT program grants (FNR/A12/01 and FNR/A09/03), the Aides à la Formation-Recherche grants (FNR/6783162, FNR/4964712) from the Luxembourg National Research Fund (FNR), and a European Union Joint Programming in Neurodegenerative Diseases grant (INTER/JPND/12/01).

Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Esch-sur-Alzette, Luxembourg: Eugen Bauer, Cedric Christian Laczny, Stefania Magnusdottir, Paul Wilmes & Ines Thiele. Correspondence to Ines Thiele.

EB, IT, and PW designed the study. EB and IT reconstructed the metabolic models and performed the analysis. CCL performed the phylogenetic analysis. SM collected phenotypic information and translated the reaction abbreviations. All authors edited and approved the final manuscript.

Additional file 1: Table S2. Table of the gap-filled reactions used to ensure anaerobic growth. (XLSX 14 kb)
Comparison between a set of our draft reconstructions and a set of published manually curated reconstructions. (TIFF 13914 kb)
List of genome and model statistics of the microbe selection. (XLSX 94 kb)
List of all reactions sorted according to their contribution to the point separation in Fig. 2. (XLSX 126 kb)
Phylogenetic maximum likelihood tree (rooted with two methanogenic archaea) calculated from the sequence similarity of 400 selected essential genes.
(TIFF 13054 kb)
The exponential relationship between the phylogeny and reaction content using the 16S rRNA sequence similarity as a measure for genetic distance. (TIFF 3150 kb)
The correlation between MetaCyc and EC functionalities with the phylogenetic distance. (TIFF 13708 kb)
The same t-SNE-based, two-dimensional coordinates as in Fig. 5 with additional point labels for the different organisms. (TIFF 1771 kb)
List of genera members belonging to the different clusters presented in Fig. 5. (XLSX 10 kb)
Additional file 10: Table S7. The fitted parameters of the exponential models in Fig. 4. (XLSX 9 kb)
Additional file 11: Table S5. Table with reaction differences within the clusters found for Bifidobacterium. (XLSX 30 kb)
Table with reaction differences within the clusters found for Bacteroides. (XLSX 34 kb)

Keywords: Genome-scale metabolic reconstructions; Metabolic modeling; Metagenomics
Extracellular vesicles secreted by Saccharomyces cerevisiae are involved in cell wall remodelling

Kening Zhao1, Mark Bleackley1, David Chisanga2, Lahiru Gangoda1, Pamali Fonseka1, Michael Liem1, Hina Kalra1, Haidar Al Saffar1, Shivakumar Keerthikumar1,3,4, Ching-Seng Ang5, Christopher G. Adda1, Lanzhou Jiang1, Kuok Yap6, Ivan K. Poon1, Peter Lock1, Vincent Bulone6, Marilyn Anderson1 & Suresh Mathivanan1

ESCRT; Fungal biology

Extracellular vesicles (EVs) are membranous vesicles that are released by cells. In this study, the role of the Endosomal Sorting Complex Required for Transport (ESCRT) machinery in the biogenesis of yeast EVs was examined. Knockout of components of the ESCRT machinery altered the morphology and size of EVs as well as decreased the abundance of EVs. In contrast, strains with deletions in cell wall biosynthesis genes produced more EVs than wildtype. Proteomic analysis highlighted the depletion of ESCRT components and the enrichment of cell wall remodelling enzymes, the glucan synthase subunit Fks1 and the chitin synthase Chs3, in yeast EVs. Interestingly, EVs containing Fks1 and Chs3 rescued the yeast cells from antifungal molecules. However, EVs from fks1∆ or chs3∆ or the vps23∆chs3∆ double knockout strain were unable to rescue the yeast cells as compared to vps23∆ EVs. Overall, we have identified a potential role for yeast EVs in cell wall remodelling.

Extracellular vesicles (EVs) are released by cells under normal and disease conditions1,2. Secretion of EVs is thought to be conserved in both eukaryotes and prokaryotes, but has been best described for mammalian cells, which release multiple EV subtypes designated as exosomes, ectosomes or shedding microvesicles and apoptotic bodies3,4,5.
While exosomes are endocytically derived from multivesicular bodies (MVBs) and are between 30 and 150 nm in diameter, ectosomes are shed from the outward budding of the plasma membrane and are between 100 and 1000 nm in diameter6. It is well established that the endosomal sorting complex required for transport (ESCRT) machinery is critical for the biogenesis of exosomes in mammalian cells7,8. Much of the knowledge on the function of, and interactions between, the components of the ESCRT machinery stems from yeast models9. Hence, owing to their involvement in biogenesis and their presence in EVs, human ESCRT components such as TSG101 and Alix are often used as markers or enriched proteins for EVs5,10. However, proteomic analyses of fungal EVs have not identified many ESCRT components as cargo. Though ESCRT deletion strains have been characterised as producing fewer EVs11, the effect of deletion of ESCRT components on EV cargo has not been investigated. EVs have been isolated from a number of fungal species and are proposed to function in various host/pathogen interactions in both plants and animals12,13. In fungi, EVs are presumably involved in the transport of macromolecules across the fungal cell wall12. Cryptococcus neoformans, for example, produces secretory vesicles through a pathway involving MVBs. These vesicles are enriched with the polysaccharide glucuronoxylomannan, which is incorporated into the capsule and delivered into host tissues as a virulence factor14. In addition, a number of proteins associated with virulence are also packaged in these vesicles15. EVs from Paracoccidioides brasiliensis and Paracoccidioides lutzii have been characterized with respect to their lipid16, proteome17, RNA18 and carbohydrate19 content. Interestingly, P. brasiliensis EVs induce an immune response in mice and promote M1 polarization of macrophages20. Similarly, vesicles produced by Candida albicans activate innate immune cells in vitro21,22. Antifungal drug resistance in C.
albicans biofilms has also been linked to EV secretion11. Intracellular vesicles in Saccharomyces cerevisiae cells have been studied since the 1970s. Early work on yeast intracellular vesicles focused on a specific subclass of vesicles, termed chitosomes, which function in the biosynthesis of the cell wall polysaccharide chitin23. A more recent report described the composition of yeast EVs from wild type and mutant strains with defects in Golgi-derived exocytosis or MVB formation24. All mutant strains produced EVs, but the proteomic profiles differed depending on the pathway that had been disrupted. One major difference in the secretion of EVs by fungi compared to mammalian cells is an additional barrier: the fungal cell wall. A number of potential mechanisms for EV transit across the cell wall have been postulated25. From studies on the uptake of a liposomal formulation of the antifungal drug amphotericin B, it is clear that the cell wall has elastic properties26 that are possibly modulated by cell wall remodelling enzymes. Though fungal EVs have been reported to contain cell wall remodelling enzymes12, their role in cell wall remodelling has not been examined. Here, we studied the role of the ESCRT machinery in the production of yeast EVs and examined the function of EVs in recipient cell wall remodelling. A panel of ESCRT knockout yeast strains was used to examine the role of the ESCRT pathway in EV production and composition. Label-free quantitative proteomics analysis revealed that yeast EVs are not enriched with ESCRT proteins as they are in mammalian EVs. In addition, we discovered that yeast strains with defects in cell wall biosynthesis secrete more EVs than wild type (WT) cells. Further analysis revealed that yeast EVs containing the Fks1 and Chs3 proteins could rescue cells from the toxic effects of the antifungal agents caspofungin and NaD1. These results demonstrate a previously undescribed cell wall remodelling property of EVs in fungal cells.
Depletion of ESCRT components alters S. cerevisiae EVs

The role of the ESCRT machinery in the biogenesis of EVs was examined using a series of yeast knockout strains. The ESCRT machinery contains four complexes and accessory proteins. To understand the role of each ESCRT complex, one gene encoding a protein of that complex was chosen for deletion, so that the deleted genes encoded single proteins in each of the four ESCRT subcomplexes or the accessory proteins. They were Bro1 (ortholog of human Alix, ESCRT accessory proteins), Hse1 (ortholog of human STAM1 and 2, ESCRT 0), Vps23 (ortholog of human TSG101, ESCRT I), Vps36 (ortholog of human VPS36, ESCRT II) and Vps2 (ortholog of human CHMP2A and B, ESCRT III). EVs were isolated from WT and ESCRT knockout strains that had been grown for 18 h before the culture medium was collected and subjected to differential centrifugation coupled with ultracentrifugation. The total protein content and the morphology of the isolated EVs were then examined. The vps2Δ, vps23Δ and vps36Δ strains produced EVs with less protein than EVs from WT cells and the hse1Δ and bro1Δ strains (Fig. 1a). The morphological features and size of the EVs were examined using nanoparticle tracking analysis (NTA) and transmission electron microscopy (TEM), as recommended by MISEV standards5,27. NTA revealed a significant increase in the proportion of large EVs (150–500 nm) in the vps23Δ and vps36Δ knockouts (Fig. 1b). The reduction in EV release in the vps2Δ, vps23Δ and vps36Δ strains was further confirmed by NTA analysis (Supplementary Fig. 1a). Consistent with the NTA results, TEM analysis revealed large vesicles in the vps23Δ and vps36Δ knockouts (Fig. 1c). However, there was no significant difference in the size of EVs upon deletion of Vps2, Bro1 or Hse1. To ensure that the particles detected were EVs secreted from live cells, the isolation procedure was repeated with WT cells that had been heat killed.
Very few particles between 30 and 150 nm were detected in the 100k pellet from heat-killed cells (Supplementary Fig. 1b), confirming that the EVs isolated from the growing cultures were not simply fragments of dead cells.

Characterization of yeast EVs by protein quantitation, NTA and TEM. a Yeast EVs were isolated from WT and ESCRT knockout strains. The isolated EVs were subjected to protein quantitation. Total protein amounts of EVs isolated from vps2∆, vps23∆ and vps36∆ were significantly less than EVs from WT (normalised to OD) (** denotes P ≤ 0.01, *** denotes P ≤ 0.001 as determined by two-tailed t-test; error bar = ±SEM, n = 3 independent experiments). b NTA of EVs shows that vesicles between 150 and 500 nm diameter are enriched in vps23∆ and vps36∆ strains. On the contrary, vesicles between 30 and 150 nm diameter are depleted in vps23∆ and vps36∆ strains (* denotes P ≤ 0.05, *** denotes P ≤ 0.001; error bar = ±SD, n = 3 independent experiments). c TEM analysis of EVs isolated from WT and ESCRT knockout strains shows that vps23∆ and vps36∆ yeast cells release larger EVs. Scale bars: 500 nm.

Proteomic analysis reveals the depletion of ESCRT proteins

The protein cargo of EVs from WT and knockout yeast strains was identified using an LC-MS/MS-based label-free quantitative proteomics analysis. Equal amounts of protein (30 μg) from the isolated EVs were subjected to SDS-PAGE and separated proteins were excised from the gel, reduced, alkylated and digested with trypsin. At an FDR of <1%, a total of 3133 proteins were identified in the yeast EVs (Supplementary Data 1; data deposited to Vesiclepedia), which represents a remarkable 52% coverage of the total yeast proteome. A heatmap (FunRich28) of the proteomic profile of the isolated EVs revealed clustering between the vps2Δ, vps23Δ and vps36Δ strains, while EVs from WT, hse1Δ and bro1Δ were grouped together (Fig. 2a).
Hence, the proteomic analysis confirmed that knockout of the ESCRT components altered the protein cargo of the EVs.

Label-free quantitative proteomics analysis of yeast EVs. a Heatmap depicting the proteomic profile (n = 3 independent experiments) of EVs isolated from WT and ESCRT knockout yeast cells. The WT and hse1∆ EV proteomic profiles clustered together, while the vps∆ EV profiles clustered together. b Heatmap of ESCRT subunits identified in yeast EVs. The detected ESCRT subunits were not enriched in any of the yeast EV samples. c Venn diagram depicting the overlap of ESCRT subunits with proteins identified in the EVs from colorectal cancer, Vesiclepedia and yeast. Each dataset is broken down into proteins that were detected in EVs and proteins that are part of the entire proteome but were not detected in EVs. The ESCRT subunits were queried to determine whether they were detected in the EV or the non-EV fraction. Compared to the human EV dataset, WT yeast EVs are significantly depleted of ESCRT components. *** denotes P ≤ 0.001 as determined by Chi-square test. d Proteins with AAA domains that are important for membrane fusion are enriched in yeast EVs compared to the entire proteome (* denotes P ≤ 0.05, *** denotes P ≤ 0.001 as determined by hypergeometric test in FunRich28). e, f Functional enrichment analysis of EV proteins highlighted that the EVs are enriched with proteins implicated in cell wall remodelling. Fks1 is enriched in vps2Δ, vps23Δ and vps36Δ EVs while Chs1 was enriched in vps23Δ and vps36Δ EVs. * denotes P ≤ 0.05, ** denotes P ≤ 0.01, *** denotes P ≤ 0.001 as determined by two-tailed t-test; error bar = ±SEM, n = 3 independent experiments

Surprisingly, a follow-up interrogation of the proteomic results highlighted the depletion of ESCRT components in yeast EVs (Fig. 2b, c). To calculate the statistical significance of this observation, EV proteomic datasets were downloaded from Vesiclepedia29,30 using FunRich.
For an individual human colorectal cancer-derived EV dataset, 24 ESCRT components were identified among the 1592 EV proteins. Similarly, all 32 known ESCRT components were detected in Vesiclepedia (mammalian) among a total of 8504 proteins reported in EVs. Compared to the colorectal cancer EV dataset and Vesiclepedia, EVs from WT yeast were depleted of ESCRT components. Next, a domain enrichment analysis using FunRich was performed on the proteins identified in EVs. Even though the yeast EVs were depleted of ESCRT components, they were enriched with proteins containing AAA domains, which are implicated in membrane fusion31,32,33 (Fig. 2d). Furthermore, EVs from Vps knockout yeast strains were enriched with fungal cell wall remodelling enzymes including Fks1 (Fig. 2e), Chs1 and Chs3 (Fig. 2f). The chitin synthases Chs1 and Chs3 were enriched in EVs from vps23Δ and vps36Δ, while the catalytic subunit of the major 1,3-β-glucan synthase, Fks1, was enriched in EVs from vps2Δ, vps23Δ and vps36Δ. Overall, the proteomic analysis highlighted that yeast EVs may be distinct from mammalian EVs in terms of their enriched proteins and physiological function. Furthermore, the enrichment of cell wall remodelling factors suggests that yeast EVs could be involved in regulating the dynamics of the yeast cell wall. To address these possibilities, the origins, as well as the functions, of yeast EVs were investigated further.

Cell wall mutants secrete more EVs

EVs have been proposed to function in the transport of a variety of cellular cargo across the fungal cell wall12. In this scenario, the cell wall acts as a barrier to EV release and uptake. This led to the hypothesis that yeast mutant strains with weakened cell walls would secrete more EVs into the culture medium. To assess this, EVs were isolated from strains with deletions in the major cell wall chitin synthase and the 1,3-β-glucan synthase subunit, chs3Δ and fks1Δ, respectively.
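The ESCRT-depletion comparison in Fig. 2c is a Chi-square test on detected versus undetected components. A sketch of that test: the human row uses counts from the text (24 of 32 known ESCRT components detected in the colorectal cancer EV dataset), while the yeast row is hypothetical, since the exact yeast count is not stated here:

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: ESCRT components detected vs not detected in EVs.
# Human CRC EV row uses counts from the text; the yeast row is hypothetical.
table = [
    [24, 32 - 24],  # human colorectal cancer EVs: detected, not detected
    [4, 26 - 4],    # yeast EVs (hypothetical counts, for illustration)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.2e}")
```

With counts this skewed, the test reports the depletion as highly significant, matching the *** call in the figure.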
Protein levels in the EV fractions from these two strains were significantly higher than in WT, suggesting that the strains with defects in cell wall biosynthesis were releasing more EVs (Supplementary Fig. 2a). The relative increase in EV release from each polysaccharide synthase mutant correlates with the abundance of the respective polysaccharide in the yeast cell wall. That is, fks1Δ, in which synthesis of 1,3-β-glucan (~50% of cell wall dry weight) is affected, released more EVs than chs3Δ, in which synthesis of chitin (1–2% of cell wall dry weight) is affected34. This indicates that the cell wall is likely acting as a barrier to EV release, because EV release increases when the barrier is weakened.

Subtypes of EVs produced by S. cerevisiae

EVs from mammalian cells can originate endocytically (exosomes), via membrane blebbing (ectosomes or microvesicles) or through apoptosis (apoptotic bodies)4. To understand the subtypes of EVs secreted by WT yeast cells, a comparative analysis was performed. The secretion of EV subtypes was assessed using mutants and chemical inhibitors that distinguish between the different pathways employed for EV production in mammalian cells. WT yeast strains were treated with H2O2, which induces apoptosis. Prior to EV collection, tolerance tests for H2O2 were performed to establish suitable concentrations for treatment of the yeast cultures (Supplementary Fig. 2). A knockout strain of Yca1, a metacaspase required for apoptosis in yeast, was also used. EVs were isolated from WT cells that had been treated with H2O2, as well as from the mutant strains (chs3Δ, fks1Δ and yca1Δ) (Supplementary Data 2; data deposited to Vesiclepedia). Larger EVs were also isolated from the WT culture by centrifugation at 15,000 × g. The isolated EV samples (Table 1), along with whole-cell lysates, were subjected to label-free quantitative proteomic analysis.
Heatmap-based clustering of the proteomic profiles (5533 proteins, 92% of the yeast proteome) revealed that EVs from WT cells were different from the other EV samples (Fig. 3a). To determine whether apoptotic body-like vesicles are produced by yeast cells, time-lapse live imaging was employed. No apoptotic body release was detected upon treatment with H2O2 at 1.25 or 3 mM. The lack of cell breakage and vesicle release indicates that apoptotic bodies are not formed during yeast programmed cell death (Fig. 3b). Hence, from the EV subtype enrichment and proteomics analysis, it can be concluded that yeast EVs are heterogeneous, with different proteomic profiles.

Table 1 List of yeast EV subtypes

Proteomic analysis and live imaging of EVs isolated from yeast that had been perturbed with EV subtype inducers and inhibitors. a Isolated samples were subjected to label-free quantitative proteomics analysis. Proteomic profiles of WT EVs differed from the other EV samples. b Live imaging of 1.25 mM and 3 mM H2O2 treated yeast cells did not show any release of apoptotic bodies. Scale bars; 10 μm

EVs enhance cell viability upon cell wall stress

It was evident from the proteomic analysis that EVs are enriched with cell wall remodelling enzymes including Chs3 and Fks1. Furthermore, knockout of Vps23 and Vps2 results in enrichment of Chs3 and Fks1 in the EVs. This led to the hypothesis that EVs could contribute to cell wall remodelling. To examine whether EVs can be taken up by yeast cells, we optimized an EV uptake assay using FACS and confocal microscopy. WT EVs were labelled with the green fluorescent lipophilic dye PKH67, and the tagged EVs were added to yeast cells and incubated with shaking at 30 °C for different periods of time. As shown in Fig. 4a, FACS screening revealed an increase in fluorescent cells after 30 min of EV incubation. This was confirmed by confocal microscopy, which revealed fluorescence within cells after the 30 min incubation (Fig. 4b).
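The heatmap clusterings reported here (Figs. 2a and 3a) group samples by similarity of their spectral-count profiles. A minimal hierarchical-clustering sketch on a hypothetical, heavily reduced count matrix (six samples, four proteins; the values are ours, chosen only to reproduce the WT/hse1/bro1 versus vps grouping described for Fig. 2a):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical normalised spectral-count matrix: rows = EV samples,
# columns = proteins. Illustration only.
samples = ["WT", "hse1", "bro1", "vps2", "vps23", "vps36"]
counts = np.array([
    [10, 8, 1, 0],   # WT
    [9, 9, 2, 1],    # hse1
    [11, 7, 1, 0],   # bro1
    [2, 1, 9, 8],    # vps2
    [1, 2, 10, 9],   # vps23
    [2, 1, 8, 10],   # vps36
])

# Average-linkage clustering on Euclidean distances, as a heatmap tool would do.
Z = linkage(pdist(counts, metric="euclidean"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
groups = dict(zip(samples, labels))
print(groups)
```

Cutting the dendrogram at two clusters recovers the two groups of samples.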
These results confirm that EVs can be taken up by yeast cells.

Uptake of EVs by yeast cells by FACS and confocal microscopy. a WT EVs (100 µg) were labelled with green fluorescent dye PKH67, subjected to washing and ultracentrifugation and incubated together with WT cells for 0.5, 1, 2, 3 and 4 h at 30 °C. FACS screening revealed an increase in fluorescent cells after 30 min of incubation. Cells interacting with PKH67 dye (with or without EVs) are depicted by the green peak. The black peak represents the background fluorescence from control cells. 'No EVs' represents the PKH67 dye alone control while 'EVs' represents PKH67-labelled EVs (n = 3 independent experiments). b Confocal microscopy revealed fluorescence within cells after 30 min of incubation. 'No EVs' represents the PKH67 dye alone control while 'EVs' represents PKH67-labelled EVs (n = 3 independent experiments). Scale bars; 10 μm. c A schematic illustration of the functional uptake assay used to study the cell wall remodelling functions of EVs. EV concentration: 10 µL of 1 µg/µL EVs + 10 µL of OD 0.1 yeast + 80 µL YPD

Next, an assay was performed to assess the cell wall remodelling properties of EVs (Fig. 4c). This was conducted using a recipient yeast strain with a defective cell wall (chs3∆). These cells are more sensitive to the 1,3-β-glucan synthase inhibitor caspofungin, because reduced chitin synthase activity makes them more dependent on 1,3-β-glucan for cell wall integrity35. EVs from WT and mutant yeast cells were tested for their ability to protect the chs3∆ cells from the deleterious effects of caspofungin. Before these tests were initiated, a tolerance test for caspofungin was performed to determine a suitable concentration (0.1 µg/mL) for treating WT and chs3∆ yeast cells. EV uptake assays were then performed in which the chs3∆ yeast strain was incubated with EVs from WT, vps23Δ, vps2Δ, fks1Δ and chs3Δ strains for 60 min at 30 °C to allow uptake of the EVs and time for the cargo to elicit its function.
Next, the yeast culture was treated with caspofungin for 2 h. Cultures were washed and plated to visualize changes in cell viability in response to EV uptake and/or caspofungin treatment. EVs from vps2Δ and vps23Δ cells were the most effective at rescuing caspofungin-treated chs3∆ cells. WT, fks1∆ and chs3∆ EVs also had some protective effect, but to a lesser extent than the vps2Δ or vps23Δ EVs (Fig. 5a). Similarly, WT yeast strains were treated with EVs to determine whether resistance to caspofungin was enhanced in normal cells. An additional WT EV alone control was used in this assay to establish that EVs do not alter cell viability. Consistent with the previous results, EVs from vps2Δ, vps23Δ and WT protected the WT yeast strain against the antifungal activity of caspofungin, whereas the fks1∆ and chs3∆ EVs did not rescue the WT yeast strain from caspofungin (Fig. 5b).

Functional uptake assay of EVs revealed protection against antifungal agents. a The functional uptake assay with the chs3Δ strain revealed that more yeast cells survived exposure to antifungal agents after pre-treatment with EVs. The EVs from five strains with knockouts in ESCRT genes were tested, but EVs from vps2Δ and vps23Δ cells provided the best protection. EV concentration: 10 µL of 1 µg/µL EVs + 10 µL of OD 0.1 yeast + 80 µL YPD. b The functional uptake assay with the WT strain revealed the best survival when the cells had been pre-incubated with EVs from WT, vps2Δ and vps23Δ cells. In contrast, EVs from fks1Δ and chs3Δ cells did not exhibit significant rescue. c The functional uptake assay with the WT strain and increasing concentrations of caspofungin. EV rescue was significant at 0.1 µg/mL caspofungin, but was not significant at 0.2 µg/mL and 0.3 µg/mL.
Error bars for all data are presented as ±SEM; * denotes P ≤ 0.05, ** denotes P ≤ 0.01, *** denotes P ≤ 0.001 as determined by two-tailed t-test; n = 3 independent experiments

To understand the rescuing efficiency of EVs, the experiment was repeated with increasing concentrations of caspofungin. EVs protected cells against up to 0.1 µg/mL caspofungin, but the effect of the EVs was overwhelmed by caspofungin at 0.2 and 0.3 µg/mL: at these higher concentrations there was no significant difference in survival between the EV-treated and untreated cells (Fig. 5c). This indicates that the protective effect of EVs is dependent on the dose of the antifungal agent. We considered whether this protective effect resulted from direct binding of the caspofungin to the EVs, decreasing access to the glucan synthase in the plasma membrane, or from substantial uptake of the EVs and efficient cell wall reinforcement. As shown in Fig. 5b, EVs from the fks1∆ and chs3∆ strains did not rescue the WT yeast cells from caspofungin, indicating that it is not merely caspofungin binding to the membrane of the EVs, but the cargo, that determines their protective effect. To investigate whether uptake of the EVs is required for protection against caspofungin, an additional thorough wash of the WT yeast cells was performed after the cells had been incubated with the EVs for 1 h and before the addition of caspofungin (0.1 µg/mL). The vps23∆ EVs still significantly increased the survival of WT yeast, even when free EVs had been removed from the system (Fig. 6a). To confirm that the rescuing effect of EVs is dependent on intact vesicles carrying cargo and not on an impurity that co-purified with the EVs during isolation, sonication was performed to disrupt the intact membrane structure of the EVs. Sonicated vps23Δ EVs provided less protective activity than unsonicated vps23∆ EVs (Fig. 6b), indicating that intact EV membranes are crucial for the protective effect against the cell wall-targeting antifungal caspofungin. These observations confirm that the rescue is mediated, at least in part, by delivery of the EV cargo to the recipient cells. Furthermore, to determine whether the protective effect of EVs was specific to caspofungin or was a general protective mechanism, vps23Δ EVs were tested for their ability to protect yeast cells against the antifungal plant defensin NaD1. Consistent with the caspofungin treatment, vps23∆ EVs also increased the survival of WT yeast cells after challenge with NaD1 (3 µM; Fig. 6c), even when the additional thorough wash had been applied (Fig. 6d). Hence, these data indicate that EVs protect yeast cells against antifungal agents that target the cell wall. Whether the mechanism of protection is the same across various antifungals remains to be elucidated.

Chs3-containing EVs rescue yeasts from antifungal agents. a The rescuing effect of vps23Δ EVs was retained in the functional uptake assay that had been modified by an additional washing step before caspofungin (0.1 µg/mL) treatment. b The functional uptake assay revealed a decreased rescuing effect of sonicated vps23Δ EVs compared to unsonicated vps23Δ EVs (* denotes P ≤ 0.05, error bar = ±SEM, n = 5). c A functional uptake assay with NaD1 (3 µM) treatment instead of caspofungin treatment revealed the consistent rescuing effect of vps23Δ EVs. d The NaD1 (3 µM) functional uptake assay with an additional washing step before treatment shows the retained rescuing effect of vps23Δ EVs. e The functional uptake assay revealed a decreased rescuing effect of vps23Δchs3Δ EVs compared to vps23Δ EVs. Error bars for all data are presented as ±SEM; * denotes P ≤ 0.05, ** denotes P ≤ 0.01, *** denotes P ≤ 0.001 as determined by two-tailed t-test; n = 3 independent experiments.
Caspofungin concentration used in this figure is 0.1 µg/mL

Chs3-containing EVs enhance cell viability

From the previous proteomic analyses, it was evident that the vps23∆ and vps2∆ yeast cells secreted high amounts of cell wall remodelling enzymes, including Chs3, via EVs (Fig. 2f). Furthermore, EVs from chs3∆ cells failed to rescue WT cells from the toxic effects of caspofungin (Fig. 5b). To understand whether EV-associated Chs3 regulates the mechanism of cell protection against antifungal agents, a double knockout yeast strain of Vps23 and Chs3 was established. The vps23Δchs3Δ EVs were then isolated for functional uptake assays. Consistent with previous results, EVs from WT and vps23∆ yeast cells increased WT cell viability in the presence of caspofungin. Interestingly, the rescue effect of vps23Δchs3Δ EVs was significantly weaker than that of vps23∆ EVs (Fig. 6e), suggesting that it is the elevated level of Chs3 in EVs from the vps23∆ mutant that is responsible, at least in part, for the rescue properties of these EVs. However, the rescuing efficiency was not completely abolished in the double knockout, suggesting that other cargo molecules also aid in the protection against these antifungals. Nevertheless, these results support the notion that uptake of Chs3 via EVs has a critical role in cell wall remodelling and that the EV cargo is responsible for the protective effect of EVs against antifungals.

Caspofungin treatment increases EV release in S. cerevisiae

If EVs function to protect yeast cells against the effects of antifungal agents such as caspofungin, it would be expected that part of the cellular response to sub-inhibitory levels of antifungal would be to increase EV release. EVs were isolated from wildtype yeast cells treated with a range of caspofungin concentrations and subjected to protein quantification and NTA.
The total protein amount of EVs isolated from caspofungin-treated yeast increased compared to non-treated yeast in a concentration-dependent manner (Fig. 7a). These results confirm that part of the cellular response to caspofungin is increased release of EVs.

Caspofungin treatment increases the EVs secreted by yeast cells. a Total protein amounts of EVs isolated from 0.025 µg/mL caspofungin-treated WT yeast were significantly greater than for untreated WT (normalized to OD, *** denotes P ≤ 0.001, error bar = ±SEM, n = 3 independent experiments). b A graphical representation depicting the two potential roles of EVs in protecting fungal cells from antifungal agents. It is unclear how EVs are transported across the cell wall. c The physiological relevance of this study is depicted by a proposed speculative model in which EVs from WT cells can rescue stressed fungal cells with defective cell walls

The mechanism of EV biogenesis in fungi is poorly understood. In mammalian models, biogenesis of exosomes, a subclass of EVs, is regulated, at least in part, by the ESCRT machinery10. The ESCRT machinery is highly conserved throughout eukaryotes, and much of the current knowledge pertaining to ESCRT is based on yeast models36. It is well established that ESCRT components are enriched in exosomes secreted by mammalian cells and hence are often used as markers of exosomes37,38,39. However, fungal EVs are depleted of ESCRT proteins as cargo. This study was consistent with previous reports24 that yeast EVs are depleted of ESCRT components, suggesting that yeast EVs are different from mammalian exosomes or that the ESCRT machinery is not secreted via EVs. Importantly, the amount, size, morphology and proteomic profiles of yeast EVs were altered in strains with selective knockouts of ESCRT components. This is consistent with previous reports in which C. albicans ESCRT deletion strains produced fewer EVs than wildtype11.
Thus, ESCRT components are likely to regulate EV production in yeast, but the mechanism is not the same as in mammals. The lack of ESCRT proteins in yeast EVs points to a fundamental difference in how the ESCRT complexes function to generate the small intraluminal vesicles within MVBs. Current models for mammalian cells indicate that when vesicles are formed in an ESCRT-dependent pathway, the ESCRT machinery remains associated with the vesicle. Therefore, the lack of yeast homologues of these proteins in EVs indicates that, in yeast, the mechanism of ESCRT-controlled vesicle budding from the MVB membrane is different from that in mammalian cells. The data presented here do not dispute a role for ESCRT in the biogenesis of yeast EVs, as strains with deletions in ESCRT components had significant differences in EV production. They merely indicate that the conservation of ESCRT-mediated processes may not be as robust as previously thought. The protective effect of EVs against the antifungal agents caspofungin and NaD1 led to two hypotheses to explain the mechanism of protection (Fig. 7b). These were both different from the role fungal EVs have in protecting C. albicans against azole antifungals via secretion of biofilm matrix materials11. The first was that EVs act as decoys, binding to and sequestering the antifungal agents and preventing them from accessing the fungal cell, thereby decreasing their efficacy. The second was that EVs are taken up by the target cells and elicit a response that results in protection against the antifungal, in this case reinforcement of the cell wall.
Based on our observations that labelled EVs are taken up by yeast cells, that the inclusion of a wash step between EV treatment and antifungal treatment does not result in a loss of protection, and that cell wall biosynthetic enzymes are enriched in yeast EVs, we hypothesize that these EVs have a role in cell wall dynamics and exert their protective effect via uptake and remodelling of the cell wall. However, a decoy effect, whereby EVs protect yeast cells by sequestering the antifungals, cannot be ruled out. Further research is warranted to dissect the precise contribution of these mechanisms to the fungal cell stress response under physiological conditions. Rescue of the caspofungin-sensitive phenotype of chs3Δ by treatment with EVs, and especially the most prominent rescue by vps2∆ and vps23∆ EVs, which are enriched in glucan and chitin synthases respectively, further supported a role for these vesicles in cell wall biosynthesis and remodelling. The protective effect against caspofungin extended to wildtype yeast and was confirmed to be dependent on the protein cargo contained in the EVs and not merely on the presence of membrane-bound vesicles. Conversely, the protective effect of the EVs was also dependent on the integrity of the EVs, as disruption of the EV membrane decreased the rescue. Hence, the protective effect of EVs requires the packaging of specific cargo into an intact vesicle for secretion and uptake. This discovery of cargo-dependent functional EVs in yeast highlights the immense potential of extending the field of fungal EV research. Further supporting the role for EVs in cell wall dynamics and the response to cell wall damaging agents was the concentration-dependent increase in EV release upon caspofungin treatment. Depletion of cell wall 1,3-β-glucan by deletion of Fks1 also led to increased EV release.
The cell wall is often discussed as a barrier to EV release, and it is difficult to dissect whether the increase in EV release is the result of increased biogenesis as part of a stress response or merely of depleting the cell wall as a barrier to release. Hence, additional research is required to further investigate EVs and stress responses. Though our study advances the understanding of fungal EV biology and proposes novel roles for EVs as either decoys or cell wall remodelling components, there are a few limitations that need to be addressed by future studies. For instance, the physiological relevance of many EV studies has been unclear (see current issues in EV research4) as the stoichiometry of EV release remains elusive. Adding to the challenge, EVs are secreted constitutively under physiological conditions, and how such a concentration of EVs relates to the bolus dose of EVs often used in vitro in many studies is unclear. Hence, the physiological relevance of the concentration of EVs used in this study needs to be examined. Perhaps co-culture experiments with fungal cells whose EV release can be conditionally regulated would serve as an ideal setup to examine the physiological relevance of EVs. Another limitation relates to the possibility that perturbation of genes in cells may allow for depletion or enrichment of certain EV subtypes (e.g. exosomes). For instance, it is possible that the vps23∆ mutant may secrete fewer endocytic EVs and hence the isolation procedure may enrich for membrane-derived EVs. Though we employed a 15,000 × g centrifugation step to deplete large EVs, it has to be acknowledged that none of the currently available protocols can isolate any one EV subtype to homogeneity5,40. Hence, it is possible that our EV preparation may contain a minor population of large EVs. Clearly, additional research is needed to develop efficient tools and reagents to purify EV subtypes to homogeneity and to characterise EVs based on specific markers.
Nevertheless, in a natural environment, intercellular communication through EVs may assist in increasing the survival of yeast cells under conditions that damage the cell wall, such as NaCl stress (Fig. 7c). Demonstration of the uptake and transfer of functional cargo by EVs in fungal cells extends their role beyond host-pathogen interactions into the realm of quorum sensing and interspecies communication. It is conceivable that a population of drug-resistant fungi may be able to transfer resistance enzymes or enzyme products to non-resistant cells to bolster an infection. Whether it is possible for one fungal species to transfer cargo to another via EVs is a question that will need to be addressed in future studies.

Yeast strains and media

The S. cerevisiae non-essential gene deletion collection was purchased from OpenBiosystems (Thermo Scientific) and is in the S. cerevisiae BY4741 (MATa his3Δ1 leu2Δ0 met15Δ0 ura3Δ0) background. The mutant strains were retrieved from the deletion collection and compared to the wildtype BY4741. Double mutants were made by amplifying the URA3 gene from the pRS426 plasmid using primers containing 40-bp regions corresponding to the 5' and 3' ends of the coding region upstream of the pRS426 binding sequence. The primer pair was Vps23F (ATCTTAACGGCCAAGAAAAGAGAGAGAGTGAAGAGCAACGCTGTGCGGTATTTCACACCG) and Vps23R (ATATTTTTTATGGCACTTCGGCGATGCGAAAGAAAGTGAGAGATTGTACTGAGAGTGCAC). PCR products were purified using a Wizard PCR clean-up kit (Promega) and transformed into yeast cells via electroporation. The mutant colonies were selected on synthetic defined medium without uracil (SD-Ura) (0.67% yeast nitrogen base without amino acids [Sigma], 0.077% uracil dropout (Ura DO) supplement [Clontech]) agar plates. Overnight cultures for all S. cerevisiae experiments were grown in YPD medium (1% yeast extract, 2% peptone, 2% dextrose).
All mutants, including those retrieved from the library, were confirmed by PCR genotyping41,42.

EV isolation

Isolation of EVs was performed as described previously with some modifications24,43,44,45. Overnight cultures of S. cerevisiae were diluted to an OD600 of 0.2 with YPD medium. Cultures were then incubated for 18 h at 30 °C with shaking (130 rpm). For EV isolation, cells and debris were removed by differential centrifugation at 4000 × g for 15 min and 15,000 × g for 15 min. Supernatants were collected and ultracentrifuged at 100,000 × g for 1 h at 4 °C. Pellets were collected and washed once with 1 × phosphate-buffered saline (PBS). The resulting EV pellets were resuspended in 1 × PBS and stored at −80 °C. For isolation of larger vesicles when required, pellets from the 15,000 × g centrifugation were collected and washed once with 1 × PBS. The resulting pellets were resuspended in 1 × PBS and stored at −80 °C. For EV isolation from caspofungin-treated yeast, overnight cultures of S. cerevisiae were diluted to an OD600 of 0.4 with YPD medium, incubated for 2 h at 30 °C with shaking until reaching an OD600 of 0.86, treated with caspofungin and incubated for 14 h at 30 °C with shaking. Cultures were then subjected to EV isolation. For EV isolation from heat-killed yeast, cell pellets from the 4000 × g centrifugation were also collected. Cell pellets were resuspended in YPD medium and heated at 70 °C for 2.5 h with shaking (750 rpm). Heat-killed cells were then resuspended in the same volume of YPD medium as the previous culture, incubated for 18 h at 30 °C with shaking, and subjected to the same EV collection process.

Microscopy was performed as described previously46. EV samples (0.2 μg/μL each) were examined with a JEM-2010 transmission electron microscope (JEOL, 100 kV) or a Tecnai TF30 transmission electron microscope (FEI, 300 kV). Preparations were fixed to 400 mesh carbon-layered copper grids for up to 2 min.
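The differential centrifugation scheme in the EV isolation protocol is an ordered series of spins in which the supernatant is carried forward until the final ultracentrifugation pellet is kept. A minimal structured sketch (speeds and times come from the text; the data structure and names are ours):

```python
from dataclasses import dataclass

@dataclass
class SpinStep:
    g_force: int   # relative centrifugal force (x g)
    minutes: int   # spin duration
    keep: str      # fraction carried forward or retained

# Differential centrifugation scheme from the EV isolation protocol.
protocol = [
    SpinStep(4_000, 15, "supernatant"),     # remove cells
    SpinStep(15_000, 15, "supernatant"),    # remove debris / larger vesicles
    SpinStep(100_000, 60, "pellet (EVs)"),  # ultracentrifugation, 1 h at 4 C
]

total_minutes = sum(s.minutes for s in protocol)
print(total_minutes)
```

Encoding a protocol this way makes it easy to check, for example, that only the final step retains a pellet.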
Surplus material was drained by blotting, followed by negative staining of the samples with 10 μL of uranyl acetate solution (2% w/v; Electron Microscopy Services). Size distributions of EV samples were analysed with a NanoSight NS300. Samples were diluted in water (Milli-Q) and injected using a syringe pump at a flow rate of 50, and 1 min videos were taken. Data were obtained in triplicate and analysed using NTA 3.2 Dev Build 3.2.16 with the auto-analysis settings. The SYPRO Ruby staining method was used to quantify protein amounts. Proteins were separated using SDS-PAGE, and Benchmark ladder (Life Technologies) was used as a protein standard for the quantifications. The SDS-PAGE gels were immersed in fixation solution (40% (v/v) methanol, 10% (v/v) acetic acid) for 30 min with shaking. Gels were kept overnight in SYPRO Ruby fluorescent stain (Molecular Probes) and were then washed with destain solution (7.5% (v/v) acetic acid, 20% (v/v) methanol) for 30 min. Fluorescent signals were visualized using a Typhoon Trio™ scanner (GE Healthcare) or Typhoon™ FLA 7000 scanner (GE Healthcare). Protein amounts were quantified by densitometric analysis with ImageQuant™ software (GE Healthcare).

SDS-PAGE and tryptic digestion

Proteins (30 µg) were separated by SDS-PAGE and visualized with Coomassie staining (Bio-Rad). Gel lanes were excised into sections (10 for each lane) and subjected to in-gel reduction, alkylation and trypsin digestion as described previously with modifications43,44,47. Briefly, proteins in the gel sections were reduced with 10 mM DTT (Bio-Rad) for 30 min at 55 °C, alkylated for 30 min with 25 mM iodoacetamide (Sigma), and then digested overnight at 37 °C with 750 ng of sequencing grade trypsin (Promega). Digestion products were extracted with 50% (v/v) acetonitrile and 0.1% trifluoroacetic acid and were then analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS).
LC-MS/MS

Extracted tryptic peptides from each gel band were concentrated to ~10 μL by centrifugal lyophilisation and analysed by LC-MS/MS using LTQ Orbitrap Elite and Fusion Lumos mass spectrometers (Thermo Scientific), both fitted with nanoflow reversed-phase HPLC (Ultimate 3000 RSLC, Dionex). The nano-HPLC system was equipped with an Acclaim Pepmap nano-trap column (Dionex—C18, 100 Å, 75 μm × 2 cm) and an Acclaim Pepmap RSLC analytical column (Dionex—C18, 100 Å, 75 μm × 15 cm). Typically, for each LC-MS/MS experiment, 1 μL of the peptide mix was loaded onto the enrichment (trap) column at an isocratic flow of 5 μL/min of 3% CH3CN containing 0.1% formic acid for 5 min before the enrichment column was switched in-line with the analytical column. The eluents used for the LC were 0.1% v/v formic acid (solvent A) and 100% CH3CN/0.1% formic acid v/v (solvent B). The gradient used was 3% B to 25% B over 23 min, 25% B to 40% B in 2 min, 40% B to 85% B in 2 min, held at 85% B for 2 min, then equilibrated for 10 min at 3% B prior to the next injection. All spectra were acquired in positive mode, with full scan MS spectra scanning from m/z 300–1650 in FT mode at 240,000 resolution after accumulating to a target value of 1.00e6 with a maximum accumulation time of 200 ms. A lockmass of 445.12003 m/z was used. For MS/MS on the Lumos Orbitrap, the "top speed" acquisition mode (3 s cycle time) on the most intense precursor was used, whereby peptide ions with charge states ≥2 were isolated with an isolation window of 1.6 m/z and fragmented with low-energy CID using a normalized collision energy of 35 and an activation Q of 0.25. For MS/MS on the Elite Orbitrap, the 20 most intense peptide ions with a minimum target value of 2000 and charge states ≥2 were isolated with an isolation window of 1.6 m/z and fragmented by low-energy CID with a normalized collision energy of 30 and an activation Q of 0.25. Dynamic exclusion settings of 2 repeat counts over 30 s and an exclusion duration of 45 s were applied.
Database searching and protein identification Mascot Generic Format (MGF) files were generated using MSConvert with the peak-picking parameter set. X!Tandem VENGEANCE (2015.12.15) was then used to search the MGF files against a target and decoy yeast RefSeq protein database. Search parameters used were: fixed modification (carbamidomethylation of cysteine; +57 Da), variable modifications (oxidation of methionine; +16 Da and N-terminal acetylation; +42 Da), three missed tryptic cleavages, 20 ppm peptide mass tolerance and 0.6 Da fragment ion mass tolerance. Protein identifications were shortlisted to obtain a master list with less than 1% false discovery rate43. Label-free spectral counting The relative protein abundance between the samples was obtained by estimating the ratio of normalized spectral counts (RSc) as described previously44,48: $$\mathrm{RSc\ for\ protein\ A} = \frac{(s_Y + c)(T_X - s_X + c)}{(s_X + c)(T_Y - s_Y + c)}$$ where $s_X$ and $s_Y$ are the numbers of significant MS/MS spectra for protein A in samples X and Y, $T_X$ and $T_Y$ are the total numbers of significant MS/MS spectra in each sample, c is a correction factor set to 1.25, and X and Y denote the two EV samples being compared. When RSc was less than 1, the negative inverse RSc value was used. S. cerevisiae growth and death assays Assays were performed as described previously with modifications41,49. For growth assays, cells were grown in YPD overnight and diluted to an OD600 of 0.01. Diluted cells (90 µL) were added to the wells of a 96-well plate along with 10 µL of the test drugs. Growth of cells was monitored by measuring absorbance at 595 nm in a SpectraMAX M5e plate reader (Molecular Devices), using a 96-well scan. Measurements were taken at t = 0 and t = 18 h during incubation at 30 °C.
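The RSc formula above can be sketched as a small helper function (the argument names sX, sY, TX, TY are ours; the correction factor c = 1.25 and the negative-inverse convention for values below 1 follow the text):

```python
def rsc(sX, sY, TX, TY, c=1.25):
    """Ratio of normalized spectral counts (RSc) for a protein
    between EV samples X and Y, per the formula in the text."""
    value = ((sY + c) * (TX - sX + c)) / ((sX + c) * (TY - sY + c))
    # Report RSc < 1 as its negative inverse so fold changes are symmetric.
    return value if value >= 1 else -1.0 / value
```

For example, a protein with 10 spectra in sample X and 20 in sample Y (out of 1000 total in each) yields an RSc of about 1.9, and swapping the samples yields about −1.9, illustrating the symmetric fold-change convention.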
For death assays, cells were grown in YPD overnight and diluted to an OD600 of 0.5. Diluted cells (90 µL) were added to a 96-well plate along with 10 µL of the test drugs. Plates were incubated for 1 h at 30 °C. Death of cells was measured by spot and plating assays. Time-lapse live imaging Cells diluted to 2 × 10^6 cells/mL (OD600 of 1 is ~3 × 10^7 cells/mL) from YPD overnight cultures were used for live imaging. Cells with or without H2O2 (Merck) treatment were loaded into four-well Nunc Lab-Tek II chambered cover glass. Imaging was performed as described previously with modifications50. Time-lapse DIC microscopy was performed at 30 °C using a Zeiss spinning-disk confocal microscope with a 63× oil-immersion objective. For most of the experiments, samples were imaged every 3 min for 3.5 h. Image processing and data analysis were performed using the ZEN imaging software (Zeiss, Germany). Uptake assays To stain EVs, 100 µg of EVs was incubated with 2 µM of the lipophilic membrane dye PKH67 (Sigma® Life Science cell linker kit) for 3–5 min at room temperature. The labelling reaction was stopped by addition of 1% bovine serum albumin (BSA). Stained EVs were collected on a 100 kDa filter, washed three times with 5 mL 1 × PBS, suspended in 1 mL of 1 × PBS and centrifuged at 120,000 × g for 50 min at 4 °C. The resulting pellet of stained EVs was resuspended in 1 × PBS. Cells were grown in YPD overnight. Cells (3 × 10^6) were added to a 1.5 mL microfuge tube along with stained EVs and incubated for various times at 30 °C with shaking. The resulting cells were washed once with 1 mL 1 × PBS and analysed by flow cytometry (BD FACSCanto II) and confocal microscopy (LSM 780, Zeiss) as described previously37. Image processing was performed with the ZEN imaging software (Zeiss, Germany). Functional uptake assays Cells were grown in YPD overnight and diluted to an OD600 of 0.1. EVs were diluted to a concentration of 1 µg/µL with 1 × PBS.
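The dilution steps above rely on the OD600-to-cell-density conversion stated in the live-imaging protocol (OD600 of 1 ≈ 3 × 10^7 cells/mL); a minimal helper, assuming a linear relation between OD600 and cell density:

```python
OD600_TO_CELLS = 3e7  # cells/mL at OD600 = 1, as stated in the text

def cells_per_ml(od600):
    """Approximate cell density (cells/mL) from OD600, assuming linearity."""
    return od600 * OD600_TO_CELLS

def od600_for(cells):
    """OD600 corresponding to a target cell density (cells/mL)."""
    return cells / OD600_TO_CELLS
```

So the 2 × 10^6 cells/mL used for live imaging corresponds to an OD600 of roughly 0.07.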
10 µL of diluted cells, 10 µL of diluted EVs and 80 µL of medium (YPD medium for caspofungin [Sigma] assays; half-strength PDB medium for NaD1 assays, i.e. 1.2% potato dextrose broth powder [BD Difco]) were added together into 1.5 mL Eppendorf tubes and incubated for 30 min at 30 °C with shaking. Tubes were then incubated for another 30 min at 30 °C without shaking. When needed, cells were washed once with medium before drug treatment. 90 µL of cells was then added into new 1.5 mL Eppendorf tubes along with 10 µL of caspofungin or NaD1 at 10× the final concentration and incubated for 2 h at 30 °C. Cell survival was measured by serial dilution spot assays and plating for colony counts as described below. NaD1 used in the assays was purified from plant sources as described previously51. EV sonication EVs were diluted with YPD medium in 1.5 mL Eppendorf tubes and sonicated using a Vibra-Cell™ (Sonics & Materials) sonicator with a Microtip probe (QSonica). Ten cycles of sonication (5 min at an output control setting of 100, with 1 min intervals) were performed with samples in an ice-water bath. To confirm the outcome of sonication, samples were ultracentrifuged at 100,000 × g for 1 h at 4 °C. Pellets were then resuspended in 1 × PBS and subjected to NTA. For functional uptake assays, 10 µL of cells at an OD600 of 0.1 was added to 90 µL of sonicated YPD solution containing 5 µg of EVs, followed by incubation and caspofungin treatment as described earlier. Spot and plating assays Assays were performed as described previously with modifications41,42,49. For death assays, cells were serially diluted (5×) four times in YPD medium in the wells of a 96-well plate. For functional uptake assays, cells were washed once with YPD medium before the serial dilution. 4 µL of each dilution was spotted onto YPD agar medium. Plates were incubated for 24 h at 30 °C and images of the resulting colony growth were taken.
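The dilution and colony-count arithmetic behind these assays can be sketched with two back-of-envelope helpers (the 0.05 mL default matches the 50 µL plating volume used for quantification, and the 5-fold, four-step series matches the spot assays; function names are ours):

```python
def serial_dilution_factors(fold=5, steps=4):
    """Cumulative dilution factors for a serial dilution,
    e.g. the 5-fold, four-step series used in the spot assays."""
    return [fold ** i for i in range(steps + 1)]

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.05):
    """Viable count (CFU/mL) of the undiluted culture from a plate count.
    dilution_factor: e.g. 2000 for a 1:2000 dilution."""
    return colonies / plated_volume_ml * dilution_factor
```

For example, 100 colonies from a 1:2000 dilution correspond to 4 × 10^6 CFU/mL in the undiluted culture.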
To quantify the results, for death assays, cells were diluted 1:2000 with YPD medium. For functional uptake assays, cells were diluted 1:25. For co-culture assays, cells were diluted 1:250, and 50 µL of diluted cells were plated on YPD agar medium. Plates were incubated for 24 h at 30 °C and left overnight at room temperature. The resulting colonies were imaged and counted. Statistical analysis was performed with the t-test or Chi-square test, and differences were considered significant when the P-value was <0.05. Data are presented as mean ± standard deviation (SD) or standard error of the mean (SEM). The results included in this manuscript were obtained from at least 3 independent experiments and were reproducible. All data analysed during this study are included in this published article and its Supplementary Information files. The lists of identified EV proteins are available as Supplementary Data 1 and 2. Proteomics data of EVs are also available through the Vesiclepedia29,30 database: [http://microvesicles.org/browse_results?org_name=Saccharomyces%20cerevisiae&cont_type=&tissue=&gene_symbol=&ves_type=]. The source data behind the graphs are available as Supplementary Data 3. All other data are available from the corresponding author on reasonable request. Gangoda, L., Boukouris, S., Liem, M., Kalra, H. & Mathivanan, S. Extracellular vesicles including exosomes are mediators of signal transduction: are they protective or pathogenic? Proteomics 15, 260–271 (2015). Chitti, S. V., Fonseka, P. & Mathivanan, S. Emerging role of extracellular vesicles in mediating cancer cachexia. Biochem. Soc. Trans. 46, 1129–1136 (2018). Théry, C. Exosomes: secreted vesicles and intercellular communications. F1000 Biol. Rep. 3, 130 (2011). Kalra, H., Drummen, G. P. & Mathivanan, S. Focus on extracellular vesicles: introducing the next small big thing. Int. J. Mol. Sci. 17, 170 (2016). Thery, C. et al.
Minimal information for studies of extracellular vesicles 2018 (MISEV2018): a position statement of the International Society for Extracellular Vesicles and update of the MISEV2014 guidelines. J. Extra. Vesicles 7, 1535750 (2018). French, K. C., Antonyak, M. A. & Cerione, R. A. Extracellular vesicle docking at the cellular port: Extracellular vesicle binding and uptake. Semin. Cell Dev. Biol. 67, 48–55 (2017). Hurley, J. H. ESCRT complexes and the biogenesis of multivesicular bodies. Curr. Opin. Cell Biol. 20, 4–11 (2008). Colombo, M. et al. Analysis of ESCRT functions in exosome biogenesis, composition and secretion highlights the heterogeneity of extracellular vesicles. J. Cell Sci. 126, 5553–5565 (2013). Babst, M. A protein's final ESCRT. Traffic 6, 2–9 (2005). Anand, S., Samuel, M., Kumar, S. & Mathivanan, S. Ticket to a bubble ride: cargo sorting into exosomes and extracellular vesicles. Biochim. Biophys Acta. Proteins Proteom., https://doi.org/10.1016/j.bbapap.2019.02.005 (2019). Zarnowski, R. et al. Candida albicans biofilm–induced vesicles confer drug resistance through matrix biogenesis. PLoS Biol. 16, e2006872 (2018). Samuel, M., Bleackley, M., Anderson, M. & Mathivanan, S. Extracellular vesicles including exosomes in cross kingdom regulation: a viewpoint from plant-fungal interactions. Front. Plant Sci. 6, 766 (2015). Joffe, L. S., Nimrichter, L., Rodrigues, M. L. & Del Poeta, M. Potential roles of fungal extracellular vesicles during infection. mSphere 1, e00099–00016 (2016). Rodrigues, M. L. et al. Vesicular polysaccharide export in Cryptococcus neoformans is a eukaryotic solution to the problem of fungal trans-cell wall transport. Eukaryot. Cell 6, 48–59 (2007). Rodrigues, M. L. et al. Extracellular vesicles produced by Cryptococcus neoformans contain protein components associated with virulence. Eukaryot. Cell 7, 58–67 (2008). Vallejo, M. C. et al. 
Lipidomic analysis of extracellular vesicles from the pathogenic phase of Paracoccidioides brasiliensis. PLoS ONE 7, e39463 (2012). Vallejo, M. C. et al. Vesicle and vesicle-free extracellular proteome of Paracoccidioides brasiliensis: comparative analysis with other pathogenic fungi. J. Proteome Res. 11, 1676–1685 (2012). da Silva, R. P. et al. Extracellular vesicle-mediated export of fungal RNA. Sci. Rep. 5, 7763 (2015). da Silva, R. P. et al. Extracellular vesicles from Paracoccidioides pathogenic species transport polysaccharide and expose ligands for DC-SIGN receptors. Sci. Rep. 5, 14213 (2015). da Silva, T. A., Roque-Barreira, M. C., Casadevall, A. & Almeida, F. Extracellular vesicles from Paracoccidioides brasiliensis induced M1 polarization in vitro. Sci. Rep. 6, 35867 (2016). Vargas, G. et al. Compositional and immunobiological analyses of extracellular vesicles released by Candida albicans. Cell. Microbiol. 17, 389–407 (2015). Wolf, J. M., Espadas, J., Luque-Garcia, J., Reynolds, T. & Casadevall, A. Lipid biosynthetic genes affect Candida albicans extracellular vesicle morphology, cargo, and immunostimulatory properties. Eukaryot. Cell 14, 745–754 (2015). Bartnicki-Garcia, S. Chitosomes: past, present and future. FEMS Yeast Res. 6, 957–965 (2006). Oliveira, D. L. et al. Characterization of yeast extracellular vesicles: evidence for the participation of different pathways of cellular traffic in vesicle biogenesis. PLoS ONE 5, e11113 (2010). Brown, L., Wolf, J. M., Prados-Rosales, R. & Casadevall, A. Through the wall: extracellular vesicles in Gram-positive bacteria, mycobacteria and fungi. Nat. Rev. Microbiol. 13, 620–630 (2015). Walker, L. et al. The viscoelastic properties of the fungal cell wall allow traffic of AmBisome as intact liposome vesicles. mBio 9, e02383–02317 (2018). Lotvall, J. et al.
Minimal experimental requirements for definition of extracellular vesicles and their functions: a position statement from the International Society for Extracellular Vesicles. J. Extra. Vesicles 3, 26913 (2014). Pathan, M. et al. A novel community driven software for functional enrichment analysis of extracellular vesicles data. J. Extra. Vesicles 6, 1321455 (2017). Kalra, H. et al. Vesiclepedia: a compendium for extracellular vesicles with continuous community annotation. PLoS Biol. 10, e1001450 (2012). Pathan, M. et al. Vesiclepedia 2019: a compendium of RNA, proteins, lipids and metabolites in extracellular vesicles. Nucleic Acids Res. 47, D516–D519 (2019). Ogura, T. & Wilkinson, A. J. AAA+ superfamily ATPases: common structure–diverse function. Genes Cells 6, 575–597 (2001). Xia, D., Tang, W. K. & Ye, Y. Structure and function of the AAA+ ATPase p97/Cdc48p. Gene 583, 64–77 (2016). Stach, L. & Freemont, P. S. The AAA+ ATPase p97, a cellular multitool. Biochem. J. 474, 2953–2976 (2017). Lesage, G. & Bussey, H. Cell wall assembly in Saccharomyces cerevisiae. Microbiol. Mol. Biol. Rev. 70, 317 (2006). Markovich, S., Yekutiel, A., Shalit, I., Shadkchan, Y. & Osherov, N. Genomic approach to identification of mutations affecting caspofungin susceptibility in Saccharomyces cerevisiae. Antimicrob. Agents Chemother. 48, 3871–3876 (2004). Williams, R. L. & Urbe, S. The emerging shape of the ESCRT machinery. Nat. Rev. Mol. Cell Biol. 8, 355–368 (2007). Keerthikumar, S. et al. Proteogenomic analysis reveals exosomes are more oncogenic than ectosomes. Oncotarget 6, 15375–15396 (2015). Kowal, J. et al. Proteomic comparison defines novel markers to characterize heterogeneous populations of extracellular vesicle subtypes. Proc. Natl Acad. Sci. USA 113, E968–E977 (2016). Gangoda, L. et al. Proteomic profiling of exosomes secreted by breast cancer cells with varying metastatic potential. Proteomics 17, 1600370 https://doi.org/10.1002/pmic.201600370 (2017). Simpson, R. J. 
& Mathivanan, S. Extracellular microvesicles: the need for internationally recognised nomenclature and stringent purification criteria. J. Proteom. Bioinform 5, ii–ii (2012). Bleackley, M. R. et al. Agp2p, the plasma membrane transregulator of polyamine uptake, regulates the antifungal activities of the plant defensin NaD1 and other cationic peptides. Antimicrob. Agents Chemother. 58, 2688–2698 (2014). Bleackley, M. R., Young, B. P., Loewen, C. J. & MacGillivray, R. T. High density array screening to identify the genetic requirements for transition metal tolerance in Saccharomyces cerevisiae. Metallomics 3, 195–205 (2011). Gangoda, L. et al. Inhibition of cathepsin proteases attenuates migration and sensitizes aggressive N-Myc amplified human neuroblastoma cells to doxorubicin. Oncotarget 6, 11175–11190 (2015). Kalra, H. et al. Comparative proteomics evaluation of plasma exosome isolation techniques and assessment of the stability of exosomes in normal human blood plasma. Proteomics 13, 3354–3364 (2013). Fonseka, P. et al. Exosomes from N-Myc amplified neuroblastoma cells induce migration and confer chemoresistance to non-N-Myc amplified cells: implications of intra-tumor heterogeneity. J. Extra. Vesicles 8, 1597614 (2019). Samuel, M. et al. Bovine milk-derived exosomes from colostrum are enriched with proteins implicated in immune response and growth. Sci. Rep. 7, 5933 (2017). Mathivanan, S., Ji, H., Tauro, B. J., Chen, Y. S. & Simpson, R. J. Identifying mutated proteins secreted by colon cancer cell lines using mass spectrometry. J. Proteomics 5, 76 (2012). Liem, M., Ang, C. S. & Mathivanan, S. Insulin mediated activation of PI3K/Akt signalling pathway modifies the proteomic cargo of extracellular vesicles. Proteomics, https://doi.org/10.1002/pmic.201600371 (2017). Hayes, B. M. et al. Identification and mechanism of action of the plant defensin NaD1 as a new member of the antifungal drug arsenal against Candida albicans. Antimicrob. Agents Chemother. 
57, 3667–3675 (2013). Atkin-Smith, G. K. et al. A novel mechanism of generating extracellular vesicles during apoptosis via a beads-on-a-string membrane structure. Nat. Commun. 6, 7439 (2015). van der Weerden, N. L., Lay, F. T. & Anderson, M. A. The plant defensin, NaD1, enters the cytoplasm of Fusarium oxysporum hyphae. J. Biol. Chem. 283, 14445–14452 (2008). Suresh Mathivanan is supported by Australian Research Council DECRA (DE150101777), Australian Research Council Discovery Project (DP170102312) and Australian Research Council Future Fellowship (FT180100333). Australian Research Council Discovery Project (DP160100309) supports the work of M.B., M.A. and V.B. Electron and optical microscopy presented in this study was performed in the LIMS Bioimaging Facility. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Confocal microscopy was performed in the LIMS BioImaging facility. These authors contributed equally: Kening Zhao, Mark Bleackley. Department of Biochemistry and Genetics, La Trobe Institute for Molecular Science, La Trobe University, Melbourne, VIC, 3086, Australia Kening Zhao, Mark Bleackley, Lahiru Gangoda, Pamali Fonseka, Michael Liem, Hina Kalra, Haidar Al Saffar, Shivakumar Keerthikumar, Christopher G. Adda, Lanzhou Jiang, Ivan K. 
Poon, Peter Lock, Marilyn Anderson & Suresh Mathivanan Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC, 3086, Australia David Chisanga Cancer Research Division, Peter MacCallum Cancer Centre, Melbourne, VIC, 3000, Australia Shivakumar Keerthikumar Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, VIC, 3010, Australia Bio21 Institute, University of Melbourne, Melbourne, VIC, 3010, Australia Ching-Seng Ang ARC Centre of Excellence in Plant Cell Walls and Adelaide Glycomics, The University of Adelaide, Waite Campus, Urrbrae, SA, 5064, Australia Kuok Yap & Vincent Bulone S.M., M.B., and M.A. conceived and directed the project; K.Z., M.B., P.F., H.A.S., M.L., L.G., I.K.P., H.K. and L.J. performed the experiments; C.G.A. did the TEM analysis; S.K., D.C. and S.M. performed the data analysis; C.S.A. performed the mass spectrometry; K.Y. and V.B. performed the carbohydrate analysis; P.L. and K.Z. performed the confocal analysis; K.Z., M.B., M.A. and S.M. drafted and finalized the manuscript with inputs from other authors; K.Z. and S.M. prepared the figures; all authors read and approved the manuscript. Corresponding authors Correspondence to Marilyn Anderson or Suresh Mathivanan. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Figures Zhao, K., Bleackley, M., Chisanga, D. et al. Extracellular vesicles secreted by Saccharomyces cerevisiae are involved in cell wall remodelling. Commun Biol 2, 305 (2019).
https://doi.org/10.1038/s42003-019-0538-8
Diphtheria outbreak in Yemen: the impact of conflict on a fragile health system Fekri Dureab (ORCID: orcid.org/0000-0002-8414-4129)1, Maysoon Al-Sakkaf2, Osan Ismail2, Naasegnibe Kuunibe1,3, Johannes Krisam4, Olaf Müller1 & Albrecht Jahn1 War in Yemen started three years ago and continues unabated, with a steadily rising number of direct and indirect victims, leaving the majority of Yemen's population in dire need of humanitarian assistance. The conflict adversely affects basic socioeconomic and health conditions across the country. This study analyzed the recent ongoing diphtheria outbreak in Yemen and, in particular, the health system's failure to ensure immunization coverage and respond to this outbreak. Data from the weekly bulletins of the national electronic Disease Early Warning System (eDEWS), daily diphtheria reports and district immunization coverage were analyzed. The number of diphtheria cases and deaths, and immunization coverage (DPT) were reviewed by district, including the degree to which a district was affected by conflict, using a simple scoring system. A logistic regression and bivariate correlation were applied using the annual immunization coverage per district to determine whether there was an association between diphtheria, immunization coverage and conflict. The study results confirm the association between the increasing number of diphtheria cases, immunization coverage and ongoing conflict. A total of 1294 probable cases of diphtheria were reported from 177 districts, with an overall case fatality rate of 5.6%. Approximately 65% of the patients were children under 15 years, and 46% of the cases had never been vaccinated against diphtheria. The odds of an outbreak increased 11-fold if the district was experiencing ongoing conflict (p < 0.05). In the presence of conflict (whether past or ongoing), the odds of an outbreak decreased slightly with higher immunization coverage (OR = 0.98), although this was not statistically significant (p > 0.05).
The conflict is continuously devastating the health system in Yemen, with serious consequences for morbidity and mortality. Therefore, the humanitarian response should focus on strengthening health services, including routine immunization, to avoid further outbreaks of life-threatening infectious diseases such as diphtheria. Diphtheria is a life-threatening bacterial disease caused by Corynebacterium diphtheriae, a non-encapsulated gram-positive bacillus. It is transmitted through close respiratory contact, causes airway obstruction due to nasopharyngeal infection, and may spread to other organs [1,2,3]. Diphtheria is a vaccine-preventable disease, which was largely eliminated in industrialized countries decades ago. In low-income countries, diphtheria control was much improved by global efforts such as the Expanded Program on Immunization (EPI) in the second half of the twentieth century [4]. However, diphtheria re-emerged during the 1990s in a number of countries in Europe, triggered by the breakdown of health services across the former Soviet Union [2, 5]. Diphtheria remains a problem in a number of low-income countries with poor immunization coverage. Several outbreaks have been reported in sub-Saharan Africa (e.g. Nigeria and Madagascar) since 2000 [6]. Bangladesh recently experienced an outbreak in a large Rohingya refugee camp in 2017 [7]. Currently, India, Indonesia and Nepal have the highest numbers of diphtheria cases in Asia [6]. Even in countries with rather good immunization coverage, such as Thailand and Iran, outbreaks of 157 and 513 cases, respectively, have occurred in recent years [6]. Since the last major outbreaks of diphtheria in the 1990s, cases have continued to be reported from Europe as well. In 2014, for example, 22 cases of confirmed diphtheria were reported in the European Union, about half of them in Latvia [8]. Yemen had experienced no serious diphtheria outbreaks until very recently.
From October 2017 to August 2018, 2203 probable diphtheria cases (including 116 deaths) were reported. Unfortunately, only a few diphtheria case alerts were generated by the electronic surveillance system prior to the declaration of an outbreak [9]. Yemen has been engaged in civil war since March 2015, which has severely affected the country's infrastructure, including health services. Less than 50% of existing health facilities are fully functional, and there is a serious shortage of staff, medicine and equipment [10]. The conflict has led to major population movements, increased direct morbidity and mortality, and indirect adverse effects on the population due to dysfunctional services and a lack of food, clean water and sanitation [11]. One consequence has been a significant cholera epidemic since 2016 [12,13,14]. Yemen is in the southwest of the Arabian Peninsula, bordered by Saudi Arabia to the north and Oman to the east, and surrounded by water to the south and west [15]. The country is administratively divided into 22 governorates and 333 districts. It is a low-income country with high poverty and illiteracy rates [16]. The country has experienced many crises since 2011, beginning with the Arab Spring's protests against poverty, unemployment, corruption and political instability. The political situation moved into a new, complicated stage in March 2015 with the beginning of the civil war [17], which has led to the country's fragmentation into multiple semi-autonomous entities running basic services [18, 19]. The health system includes four levels of health facilities: health units, health centers, district or governorate hospitals, and referral hospitals [20]. There are approximately 4207 public health facilities, including 243 hospitals [21]. Approximately 16.4 million people have no access to basic healthcare [22], and only 43% of the functional health facilities offer communicable disease services.
Maternal and newborn services, including immunization, are available in only 35% of functional health facilities [23]. The diphtheria outbreak reflects a huge gap in immunization coverage over the last three years due to the largely collapsed health system in Yemen. A recent WHO report shows that coverage of the first dose of the diphtheria/pertussis/tetanus vaccine (DPT1) shrank gradually over the last three years: approximately 89, 88 and 83% in 2015, 2016 and 2017, respectively [24, 25]. This paper describes the recent diphtheria outbreak and examines the relationship between diphtheria cases, immunization and conflict dynamics in Yemen. We used multiple national-level data sources to describe and analyze the recent diphtheria outbreak in Yemen. First, data from the weekly bulletins of the electronic Disease Early Warning System (eDEWS) (1st epidemiological week of 2017 to the 10th epidemiological week of 2018) were used to identify the trend of the diphtheria outbreak. The second data source was the daily diphtheria surveillance reports at district and governorate levels. The third source was the 2017 annual immunization coverage report from the 333 Yemeni districts. Finally, the 2017 report on the level of conflict was analyzed after districts were categorized according to conflict dynamic: 1 = experiencing ongoing armed conflict (65 districts), 2 = history of past armed conflict (48 districts), and 3 = no conflict (219 districts). The primary outcome, "diphtheria outbreak" (yes/no), was determined for each district by assessing the presence of diphtheria cases within the district. If there was at least one case of diphtheria in a particular district, this was considered an outbreak. Diphtheria case definitions [26] Clinical description An illness characterized by laryngitis, pharyngitis or tonsillitis, and an adherent membrane of the tonsils, pharynx and/or nose.
Laboratory criteria for diagnosis Isolation of Corynebacterium diphtheriae from a clinical specimen, or a fourfold or greater rise in serum antibody (but only if both serum samples were obtained before the administration of diphtheria toxoid or antitoxin). Case classification Suspected: Not applicable. Probable: A case that meets the clinical description. Confirmed: A probable case that is laboratory confirmed or linked epidemiologically to a laboratory-confirmed case. We describe the trend of diphtheria cases from the 1st week of 2017 to the 10th week of 2018 in Yemen based on data obtained from the weekly epidemiological eDEWS Bulletin. To determine the relationship between the outbreak, immunization and conflict dynamics, we regressed diphtheria outbreak on immunization and conflict dynamics for all districts in 2017, according to whether the district had experienced conflict in the past year, was currently experiencing armed conflict, or had never experienced armed conflict. We tested for an interaction between immunization and past armed conflict (imcoconf1) and between immunization and current armed conflict (imcoconf2). Given that the outcome variable (diphtheria outbreak, denoted as dipht) was binary, we used a binary logit regression [27]. Following applications in health research [28], we specified the model as $$ L=\ln \left(\frac{P_i}{1-P_i}\right)=\alpha + X\beta +\epsilon $$ where \( \ln \left(\frac{P_i}{1-P_i}\right) \) is the natural logarithm of the ratio of the probability that a diphtheria outbreak will occur in a district given the explanatory variables of the respective district (P_i = P[dipht = 1|X_i]) to the probability that an outbreak will not occur (1-P_i); α is a constant term, X is a vector of explanatory variables, β is a vector of coefficients and ϵ is the random error term. Table 1 presents the definitions of all variables.
Table 1 Measurement of variables for analysis of diphtheria, conflict and immunization status We estimated eq. (2) and calculated the odds ratios (OR) to compare the relative odds of a diphtheria outbreak under the given conflict dynamics while controlling for the number of immunized children in the district. $$ \ln \left(\frac{P_i}{1-P_i}\right)=\alpha +{\beta}_1 imco+{\beta}_2 confhis1+{\beta}_3 confhis2+{\beta}_4 imcoconf1+{\beta}_5 imcoconf2+\epsilon $$ OR = 1 means the particular conflict dynamic does not affect the odds of a diphtheria outbreak, OR > 1 means a particular conflict dynamic is associated with higher odds of a diphtheria outbreak, and OR < 1 means a particular conflict dynamic is associated with lower odds of a diphtheria outbreak. The variable confhis0 serves as the base category for confhis1 and confhis2 and therefore does not enter the model. A p-value smaller than 0.05 was regarded as statistically significant. Analyses were conducted using the software Stata version 15. A diphtheria outbreak was announced on 29 October 2017 by the Ministry of Public Health and Population and WHO in Yemen. From that date to March 10, 2018, a total of 1294 probable cases were recorded in 177/333 (53%) districts in 20/23 (87%) governorates. Table 2 presents the distribution of reported cases, deaths and corresponding case fatality rates (CFR) by governorate. Most cases occurred in three governorates: Ibb governorate (441 cases, 34%), Hodeida governorate (151 cases, 12%) and Sana'a governorate (133 cases, 10%). A total of 73 deaths were reported across all governorates, resulting in an overall CFR of 5.6%. Table 2 Distribution of diphtheria cases, deaths and corresponding case fatality rates by governorates in Yemen (October 2017 – March 2018) Figure 1 shows the trend in the reported number of probable diphtheria cases and alerts generated by eDEWS, from epidemiological week 39 in 2017 to week 10 in 2018.
The trend shows a gradual increase in diphtheria cases, reaching a peak in epidemiological week 4 in 2018 (102 cases/week, reported by 14 governorates). The trend then shows a gradual decline in the number of cases. Trend of the diphtheria probable cases from epidemiological week 39 in 2017 to week 10 in 2018 in Yemen Table 3 shows the distribution of diphtheria cases and deaths and corresponding CFRs by age, sex and vaccination status. Diphtheria morbidity and mortality did not significantly differ between males and females, and the majority of cases occurred in children and adolescents, who also had the highest CFRs (11% in children under 5 years old, but only approximately 1% in those 35–50 years old). Diphtheria vaccination status was strongly and inversely associated with CFRs. Notably, although 31% of diphtheria cases occurred in people reported to have received three doses of diphtheria vaccine, their CFR was only 2.9%. Table 3 Distribution of diphtheria cases and deaths and corresponding CFRs by age, sex and vaccination status Relationship between conflict and diphtheria outbreak Table 4 shows the results from the bivariate logistic regression analysis. To explore the relationship between the conflict situation in Yemen and the diphtheria outbreak, we regressed the outcome variable (diphtheria outbreak = 1, no outbreak = 0) on the conflict dynamic in the district. The bivariate regression results show that immunization generally does affect the probability of a diphtheria outbreak (OR = 1.02; CI = 1.01–1.03, p < 0.05). The same was observed for the effect of immunization in areas where conflict was ongoing (OR = 1.01; CI = 1.004–1.016, p < 0.05). However, the probability of an outbreak increased significantly in areas where conflict was ongoing (OR = 1.89; CI = 1.20–2.99, p < 0.05). In the multivariate regression results, immunization did affect the risk of an outbreak (OR = 1.04; CI = 1.012–1.058, p < 0.05).
This is to say that in a district without conflict, immunization had a minimal effect on the diphtheria outbreak situation. The odds of an outbreak increased significantly, 11-fold, if the district was experiencing ongoing conflict (OR = 11.21; CI = 1.29–97.69, p < 0.05) and 3-fold if the district had a history of conflict in the past year (although this was not statistically significant); see Table 5.

Table 4 Bivariate logistic regression results for number of immunized children and conflict

Table 5 Multivariate logistic regression results for immunization and conflict

The number of cases detected weekly by eDEWS revealed that a number of diphtheria alerts had already been detected in 2017, starting in epidemiological week 5. An official statement on the outbreak by the Ministry of Public Health and Population (MOPHP) was only issued in epidemiological week 39. This gap can be explained by the intensity of the current war, which delays reporting and response and limits access to basic services [14]. Children under 15 years were the most affected during this diphtheria outbreak (65%) compared to those in older age groups. For comparison, a similar study in Lao People's Democratic Republic (Laos), in 2016, revealed that 69% of diphtheria cases were among children under 15 years [29]. The overall diphtheria CFR in Yemen has been 5.6%, and was highest among under-five children (11%). Likewise, during the recent diphtheria outbreak among the Rohingya refugees in Bangladesh, 13% of affected cases were children under five [30]. In our study, 46% of diphtheria cases and 69% of deaths were among the unvaccinated group. This corresponds to findings from a case-control study in Laos, where approximately 34% of the people with diphtheria had not received any DPT doses [29, 31].
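For readers who want to retrace the arithmetic behind odds ratios like those reported above, an unadjusted OR and its 95% confidence interval can be computed from a 2×2 district table. The following is a minimal sketch with entirely hypothetical counts (not the study's data), using the standard log-scale Wald interval:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = outbreak & conflict,    b = no outbreak & conflict,
    c = outbreak & no conflict, d = no outbreak & no conflict."""
    return (a * d) / (b * c)

# Hypothetical district counts, chosen only for illustration.
a, b, c, d = 60, 40, 50, 100
or_ = odds_ratio(a, b, c, d)

# 95% CI on the log scale: ln(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 3.0 1.78 5.07
```

An interval excluding 1, as here, corresponds to the p < 0.05 criterion used in the study; the multivariate ORs in Table 5 additionally adjust for the other covariates in eq. (2).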
This study revealed the relationship between conflict and the diphtheria outbreak in Yemen, showing that the risk of a diphtheria outbreak increases significantly, by 11-fold, if the district is currently experiencing armed conflict. This supports findings from another study on this topic in Yemen [14]. There is no doubt that conflict affects the delivery of health services, since about half of the health facilities in Yemen are currently not functioning [32]. As one consequence, immunization coverage is severely affected by the ongoing conflict, which largely explains the current infectious disease outbreaks in the country. Despite much evidence on the role of immunization coverage in preventing infection [33], this study shows that in a district that does not experience conflict, immunization coverage has a minimal effect on diphtheria outbreaks. One potential explanation of this unexpected finding is the inaccuracy of coverage rates per district, since the national EPI uses total population numbers from the old 2004 census as denominators, without considering the population increase due to movements of internally displaced persons (IDPs) during the war. Moreover, the quality of information has likely been affected by the breakdown of national reporting mechanisms. This unexpected finding might also have occurred due to unmeasured, unknown confounders that were not included in the logistic regression model. Finally, the quality of the vaccines in use could be affected by breakdowns in the cold chain. Continuous population movement is one of the conflict-related factors that contributes to the rapid spread of infectious diseases. Unfortunately, in a domestic armed conflict, individuals are forced to leave their homes and become IDPs within the country or take refuge outside the country [34].
IDPs and host communities become more susceptible to many infectious diseases due to multiple factors such as unplanned overcrowding, hygiene problems, lack of health services, and the introduction of new infectious agents. IDPs in Yemen may thus be an important contributing factor for the diphtheria outbreak. For example, this may explain the serious situation in the Ibb governorate, which hosts a very large number of IDPs [35]. In addition to the impact of the current war, which has destroyed the country's infrastructure and deprived many people of the most basic services, and the problems of IDPs and low immunization coverage, there are many additional factors that may have contributed to the emergence of epidemic diseases in Yemen over the past years. These include already existing poverty and malnutrition [36, 37]. Undernutrition has long been a huge public health problem in Yemen and contributes to the high mortality associated with infectious diseases, especially among children [38]. Approximately 2.2 million children in Yemen are acutely malnourished, and 462,000 of them suffer from severe acute malnutrition [39]. This study has some limitations. There were and still are challenges in obtaining reliable data from Yemen, for example with regard to the distribution of IDPs by district. Another major limitation is related to our secondary analysis using existing poor-quality data, e.g., old population counts for calculating the EPI coverage. Finally, the Yemeni health authorities have depended on patients or relatives to recall information regarding the vaccination status of cases in the daily diphtheria reports, which carries a high probability of recall bias. We conclude that the conflict is continuously devastating the health system in Yemen, with serious consequences on morbidity and mortality.
While emergency immunization campaigns are crucial interventions in the current situation to control and prevent infectious disease outbreaks, in the long term Yemen needs peace and the re-establishment of functioning health services within a frame of universal health coverage.

DPT1: Diphtheria/pertussis/tetanus 1
eDEWS: Electronic Disease Early Warning System
EPI: Expanded Program on Immunization
IDPs: Internally displaced persons
MOPHP: Ministry of Public Health and Population
OR: Odds ratio

WHO. Diphtheria. Geneva: World Health Organization; 2018. Available from: http://www.who.int/immunization/monitoring_surveillance/burden/diphtheria/en/. Accessed 18 Mar 2018.
Vitek C, Wenger J. Diphtheria. Bull World Health Organ. 1998;76(Suppl 2):129–30.
Lo BM. Diphtheria: Medscape; 2017. Available from: https://emedicine.medscape.com/article/782051-overview. Accessed 18 Mar 2018.
Vitek CR. Diphtheria. Curr Top Microbiol Immunol. 2006;304:71–94.
Vitek CR, Wharton M. Diphtheria in the former Soviet Union: reemergence of a pandemic disease. Emerg Infect Dis. 1998;4(4):539–50.
Clarke K. Review of the epidemiology of diphtheria – 2000–2016. US Centers for Disease Control and Prevention; 2017.
WHO. Diphtheria – Cox's Bazar in Bangladesh. Geneva: World Health Organization; 2017. Available from: http://www.who.int/csr/don/13-december-2017-diphtheria-bangladesh/en/. Accessed 18 Mar 2018.
European Centre for Disease Prevention and Control. Annual Epidemiological Report – Diphtheria. [Internet] Stockholm: ECDC; 2016. Available from: https://ecdc.europa.eu/en/publications-data/diphtheria-annual-epidemiological-report-2016-2014-data#copy-to-clipboard. Accessed 18 Mar 2018.
WHO, MOPH&P. Weekly Epidemiological Bulletin. Epi week 32, Volume 06. Yemen: Ministry of Health; Aug 2018.
OCHA-Yemen. Humanitarian Needs Overview. Yemen: UNOCHA; 2018.
WHO.
Potential impact of conflict on health in Iraq. Geneva: World Health Organization; 2003. Available from: http://www.who.int/features/2003/iraq/briefings/iraq_briefing_note/en/. Accessed 24 Mar 2018.
Camacho A, Bouhenia M, Alyusfi R, Alkohlani A, Naji MAM, de Radigues X, et al. Cholera epidemic in Yemen, 2016-18: an analysis of surveillance data. Lancet Glob Health. 2018;6(6):e680–e90.
Gormley M. Untangling the causes of the 2016-18 cholera epidemic in Yemen. Lancet Glob Health. 2018;6(6):e600–e1.
Dureab F, Shibib K, Al-Yousufi R, Jahn A. Yemen: cholera outbreak and the ongoing armed conflict. J Infect Dev Countr. 2018;12(5):397–403.
Hadden RL. The geology of Yemen: an annotated bibliography of Yemen's geology, geography and earth science. Alexandria, Virginia: US Army Corps of Engineers, Army Geospatial Center; 2012. p. 385.
UNDP. Human Development Report 2016: Human Development for Everyone. New York, USA; 2016. ISSN: 0969–4501.
CIA. Middle East: Yemen. Internet: Central Intelligence Agency; 2016. Available from: https://www.cia.gov/library/publications/the-world-factbook/geos/ym.html. Accessed 29 Mar 2018.
Hill G. Yemen's urban-rural divide and the ultra-localisation of the civil war. London: Middle East Centre, London School of Economics and Political Science; 2017.
ECHO. Humanitarian Aid and Civil Protection Factsheet. Yemen: European Commission; Jan 2017.
National Health Accounts Team, Republic of Yemen, Partners for Health Reformplus. Yemen National Health Accounts: estimate for 2003. Yemen; 2006.
MOPH&P. Annual Statistical Health Report. Yemen: Ministry of Public Health and Population; 2014.
Giles Clarke. Humanitarian Needs Overview 2018. Yemen: United Nations Office for the Coordination of Humanitarian Affairs; Dec 2017.
WHO, MOPHP. Service availability and health facilities functionality in 16 governorates (health services and resources availability mapping system).
World Health Organization Yemen Country Office and Ministry of Public Health and Population; 2016.
WHO. Reported official target population, number of doses administered and official coverage. Geneva: World Health Organization; 2018. Available from: www.who.int/immunization/monitoring_surveillance/data/en/. Accessed 18 Mar 2018.
EPI-Yemen. Annual accumulative report of the immunization coverage 2017. National Expanded Program on Immunization; 2018.
WHO. WHO-recommended surveillance standard of diphtheria; 2014. Available from: http://www.who.int/immunization/monitoring_surveillance/burden/vpd/surveillance_type/passive/diphtheria_standards/en/. Accessed 20 May 2018.
Greene WH. Econometric Analysis. 5th ed. Upper Saddle River, New Jersey: Pearson Education Inc; 2003.
Kuunibe N, Domanban PB. Demand for complementary and alternative medicine in Ghana. Int J Humanit Soc Sci. 2012;2:288–94.
Sein C, Tiwari T, Macneil A, Wannemuehler K, Soulaphy C, Souliphone P, et al. Diphtheria outbreak in Lao People's Democratic Republic, 2012-2013. Vaccine. 2016;34(36):4321–6.
Rahman MR, Islam K. Massive diphtheria outbreak among Rohingya refugees: lessons learnt. J Travel Med. 2018.
Besa NC, Coldiron ME, Bakri A, Raji A, Nsuami MJ, Rousseau C, et al. Diphtheria outbreak with high mortality in northeastern Nigeria. Epidemiol Infect. 2014;142(4):797–802.
El Bcheraoui C, Jumaan AO, Collison ML, Daoud F, Mokdad AH. Health in Yemen: losing ground in war time. Glob Health. 2018;14(1):42.
Hinman AR, Orenstein WA, Schuchat A. Vaccine-preventable diseases, immunizations, and the epidemic intelligence service. Am J Epidemiol. 2011;174(11 Suppl):S16–22.
Ramirez JB, Franco H. The effect of conflict and displacement on the health of internally displaced people: the Colombian crisis. Medical Journal of the University of Ottawa. 2016;6(2: Global Health):26–9.
Dureab F, Muller O, Jahn A. Resurgence of diphtheria in Yemen due to population movement. J Travel Med. 2018;25(1).
EHINZ.
Household crowding. New Zealand: University of New Zealand; 2018. Available from: http://www.ehinz.ac.nz/indicators/indoor-environment/household-crowding/. Accessed 20 Dec 2018.
Virtanen M, Terho K, Oksanen T, Kurvinen T, Pentti J, Routamaa M, et al. Patients with infectious diseases, overcrowding, and health in hospital staff. Arch Intern Med. 2011;171(14):1296–8.
Franca TGD, Ishikawa LLW, Zorzella-Pezavento SFG, Chiuso-Minicucci F, da Cunha MLRS, Sartori A. Impact of malnutrition on immunity and infection. J Venom Anim Toxins. 2009;15(3):374–90.
UNICEF. Malnutrition amongst children in Yemen at an all-time high, warns UNICEF. New York; 2016. Available from: https://www.unicefusa.org/press/releases/malnutrition-amongst-children-yemen-all-time-high-warns-unicef/31545. Accessed 20 Dec 2018.

We express our gratitude to the staff of the Ministry of Health and WHO in Yemen for their continuous efforts supporting the surveillance system and providing data. We thank Dr. Eshraq Al-Falahi from WHO Yemen, and Dr. Asma Dureab from the SAWT foundation in Yemen for their valuable support and feedback. We acknowledge the financial support of the Deutsche Forschungsgemeinschaft within the Open Access Publishing funding program and the Baden-Württemberg Ministry of Science, Research and the Arts and Ruprecht-Karls-Universität Heidelberg. There was no fund contribution allocated for this study. The data was obtained from the published daily diphtheria surveillance reports (Oct 2017 – March 2018); the daily reports were sent by the Ministry of Health via email. The 2017 annual immunization coverage reports of 333 Yemeni districts were obtained from the EPI program in the Ministry of Health.
Heidelberg Institute of Global Health, University Hospital Heidelberg, Heidelberg, Germany: Fekri Dureab, Naasegnibe Kuunibe, Olaf Müller & Albrecht Jahn
World Health Organization, WHO, Yemen Country Office, Sana'a, Yemen: Maysoon Al-Sakkaf & Osan Ismail
University for Development Studies, Tamale, Ghana: Naasegnibe Kuunibe
Institute of Medical Biometry and Informatics, Heidelberg University, Heidelberg, Germany: Johannes Krisam

FD: principal investigator, performed the data analysis and wrote the manuscript. OI: contributed to writing the second draft of the manuscript. MA: participated in data collection and analysis. NK: analyzed and interpreted the results of the logistic regression. JK: contributed to data analysis and revision of the manuscript based on the reviewers' comments. OM: major contributor to writing the manuscript, reviewed and checked the analysis. AJ: the main supervisor; contributed to the manuscript design and review. All authors read and approved the final manuscript.

Correspondence to Fekri Dureab.

Dureab, F., Al-Sakkaf, M., Ismail, O. et al. Diphtheria outbreak in Yemen: the impact of conflict on a fragile health system. Confl Health 13, 19 (2019). https://doi.org/10.1186/s13031-019-0204-2

Diphtheria outbreak
Cao, Jiling; Reilly, Ivan L. $\alpha$-continuous and $\alpha$-irresolute multifunctions. (English). Mathematica Bohemica, vol. 121 (1996), issue 4, pp. 415-424

MSC: 54C60, 54E55 | MR 1428143 | Zbl 0879.54020 | DOI: 10.21136/MB.1996.126038

upper (lower) $\alpha$-continuous; upper (lower) $\alpha$-irresolute; strongly $\alpha$-closed graph; almost compact; almost paracompact

Recently Popa and Noiri [10] established some new characterizations and basic properties of $\alpha$-continuous multifunctions. In this paper, we improve some of their results and examine further properties of $\alpha$-continuous and $\alpha$-irresolute multifunctions. We also make corrections to some theorems of Neubrunn [7].

[1] C. E. Aull: Paracompact subsets. General Topology and its Relations to Modern Analysis and Algebra II. Proc. of the Symposium Prague, 1966, Academia, Praha, 1967, pp. 45-51. MR 0234420
[2] C. Berge: Topological spaces. Oliver and Boyd Ltd., 1963. Zbl 0114.38602
[3] E. Klein, A. Thompson: Theory of correspondences. A Wiley-Interscience Publication. John Wiley and Sons, 1984. MR 0752692
[4] S. N. Maheshwari, S. S. Thakur: On $\alpha$-irresolute functions. Tamkang J. Math. 11 (1980), 209-214. MR 0696921
[5] S. N. Maheshwari, S. S. Thakur: On $\alpha$-compact spaces. Bull. Inst. Math. Acad. Sinica 13 (1985), 341-347. MR 0866569
[6] A. S. Mashhour, I. A. Hasanein, S. N. El-Deeb: $\alpha$-Continuous and $\alpha$-open mappings. Acta Math. Hung. 41 (1983), 213-218. DOI 10.1007/BF01961309 | MR 0703734
[7] T. Neubrunn: Strongly quasi-continuous multivalued mappings. General Topology and its Relations to Modern Analysis and Algebra VI. Proc. of the Symposium, Prague, 1986, Heldermann Verlag Berlin, 1988, pp. 351-359. MR 0952621
[8] O. Njåstad: On some classes of nearly open sets. Pacific J. Math. 15 (1965), 961-970. DOI 10.2140/pjm.1965.15.961 | MR 0195040
[9] T. Noiri: On $\alpha$-continuous functions. Časopis Pěst. Mat. 109 (1984), 118-126. MR 0744869 | Zbl 0544.54009
[10] V.
Popa, T. Noiri: On upper and lower $\alpha$-continuous multifunctions. Math. Slovaca 43 (1993), 261-265. MR 1248981
[11] I. L. Reilly, M. K. Vamanamurthy: Connectedness and strong semi-continuity. Časopis Pěst. Mat. 109 (1984), 261-265. MR 0755590
[12] I. L. Reilly, M. K. Vamanamurthy: On $\alpha$-continuity in topological spaces. Acta Math. Hung. 45 (1985), 27-32. DOI 10.1007/BF01955019 | MR 0779514
[13] I. L. Reilly, M. K. Vamanamurthy: On $\alpha$-sets in topological spaces. Tamkang J. Math. 16 (1985), 7-11. MR 0805724
[14] M. K. Singal, R. Asha: On almost M-compact spaces. Ann. Soc. Sci. Bruxelles 82 (1968), 233-242. MR 0236879 | Zbl 0183.27301
[15] M. K. Singal, S. P. Arya: On M-paracompact spaces. Math. Ann. 181 (1969), 119-133. DOI 10.1007/BF01350631 | MR 0246256
[16] G. T. Whyburn: Retracting multifunctions. Proc. Nat. Acad. Sci. U.S.A. 59 (1968), 343-348. DOI 10.1073/pnas.59.2.343 | MR 0227959
Taylor Series

We can define a polynomial which approximates a smooth function in the vicinity of a point with the following idea: match as many derivatives as possible. The utility of this simple idea emerges from the convenient simplicity of polynomials and the fact that a wide class of functions look pretty much like polynomials when you zoom in around a given point.

First, a bit of review on the exponential function x\mapsto \exp(x): we define \exp to be the function which maps 0 to 1 and which is everywhere equal to its own derivative. It follows (nontrivially) from this definition that \exp(x) = \exp(1)^x, so we may define \mathrm{e} = \exp(1) and write the exponential function as x\mapsto \mathrm{e}^x. The value of \mathrm{e} is approximately 2.718.

Find the quadratic polynomial P_2 whose zeroth, first, and second derivatives at the origin match those of the exponential function.

Solution. Since P_2 is quadratic, we must have \begin{align*}P_2(x) = a_0 + a_1x + a_2x^2\end{align*} for some a_0, a_1, and a_2. To match the values at the origin, we check that P_2(0) = a_0 while f(0) = 1. So we must have a_0 = 1. Similarly, P_2'(0) = a_1, so if we want P_2'(0) = f'(0) = 1, we have to choose a_1 = 1 as well. For a_2, we calculate P_2''(x) = (a_1 + 2a_2x)' = 2a_2, so to get P_2''(0) = f''(0) = 1, we have to let a_2 = \tfrac{1}{2}. So \begin{align*}P_2(x) = 1 + x + \tfrac{1}{2}x^2\end{align*} is the best we can do. Looking at the figure, we see that P_2 does indeed do a better job of 'hugging' the graph of f near x=0 than the best linear approximation (L(x) = 1 + x) does.
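We can also check numerically that the quadratic approximation hugs the exponential function more tightly than the linear one. The sketch below (the names L and P2 are ours) compares the two errors at a point near the origin:

```python
import math

def L(x):
    # best linear approximation of exp at 0
    return 1 + x

def P2(x):
    # best quadratic approximation of exp at 0
    return 1 + x + x**2 / 2

x = 0.1
err_L = abs(L(x) - math.exp(x))
err_P2 = abs(P2(x) - math.exp(x))
print(err_L, err_P2)  # the quadratic error is roughly 30 times smaller here
```

The linear error at x = 0.1 is about 5.2e-3, while the quadratic error is about 1.7e-4, consistent with the errors scaling like x^2/2 and x^3/6 respectively.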
The best constant, linear, and quadratic approximations of the exponential function near the origin

We can extend this idea to higher order polynomials, and we can even include terms for all powers of x, thereby obtaining an infinite series:

Definition (Taylor Series) The Taylor series, centered at c, of an infinitely differentiable function f is defined to be \begin{align*}f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \frac{f'''(c)}{3!}(x-c)^3 + \cdots\end{align*}

Find the Taylor series centered at the origin for the exponential function.

Solution. We continue the pattern we discovered for the quadratic approximation of the exponential function at the origin: the $n$th derivative of a_0 + a_1x + \cdots + a_n x^n + \cdots at the origin is n!a_n, while the $n$th derivative of the exponential function is 1 at the origin. Therefore, a_n = 1/n!, and we obtain the Taylor series \begin{align*}1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\end{align*} It turns out that this series does in fact converge to \mathrm{e}^x, for all x \in \mathbb{R}.

Taylor series properties

It turns out that if the Taylor series for a function converges, then it does so in an interval centered around c. Furthermore, inside the interval of convergence, it is valid to perform term-by-term operations with the Taylor series as though it were a polynomial:

We can multiply or add Taylor series term-by-term.
We can integrate or differentiate a Taylor series term-by-term.
We can substitute one Taylor series into another to obtain a Taylor series for the composition.

All the operations described above may be applied wherever all the series in question are convergent. In other words, if f and g have Taylor series P and Q converging to f and g in some open interval, then the Taylor series for fg, f+g, f', and \int f converge in that interval and are given by PQ, P+Q, P', and \int P, respectively. If P has an infinite radius of convergence, then the Taylor series for f\circ g is given by P\circ Q.
The following example shows how convenient this theorem can be for finding Taylor series. Find the Taylor series for f(x) = \cos x + x \mathrm{e}^{x^2} centered at c = 0. Solution. Taking many derivatives is going to be no fun, especially with that second term. What we can do, however, is just substitute x^2 into the Taylor series for the exponential function, multiply that by x, and add the Taylor series for cosine: \begin{align*}&\left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right) + x\left(1 + x^2 + \frac{(x^2)^2}{2!} + \frac{(x^2)^3}{3!} + \cdots\right) \\\ &= 1 + x - \frac{x^2}{2!} + x^3 + \frac{x^4}{4!} + \frac{x^5}{2!} + \cdots.\end{align*} In summation notation, we could write this series as \sum_{n=0}^\infty a_n x^n where a_n is equal to (-1)^{n/2}/n! if n is even and 1/((n-1)/2)! if n is odd. Find the Taylor series for 1/(1-x) centered at the origin, and show that it converges to 1/(1-x) for all -1 < x < 1. Use your result to find x + 2x^2 + 3x^3 + 4x^4 + \cdots. Hint: think about differentiation. Solution. Calculating derivatives of 1/(1-x), we find that the Taylor series centered at the origin is 1 + x + x^2 + \cdots. Furthermore, we know that \begin{align*}\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots,\end{align*} for -1 < x < 1, by the formula for infinite geometric series. We can use this result to find \sum_{k = 1}^\infty k x^k by differentiating both sides and multiplying both sides by x: \begin{align*}\frac{1}{(1-x)^2} = 1 + 2x + 3x^2 + 4x^3 + \cdots\end{align*} \begin{align*}\frac{x}{(1-x)^2} = x + 2x^2 + 3x^3 + 4x^4 + \cdots\end{align*} Show that \lim_{n\to\infty}(1+x/n)^n is equal to \mathrm{e}^x by showing that \lim_{n\to\infty}\log (1+x/n)^n = x. Solution. 
Integrating the equation \begin{align*}\frac{1}{1+x} = 1 - x + x^2 - x^3 + x^4 - \cdots\end{align*} term by term, we find that \begin{align*}\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots\end{align*} Substituting gives \begin{align*}n \log (1+x/n) = x - \frac{x^2}{2n} + \frac{x^3}{3n^2} - \cdots.\end{align*} Each of the terms other than the first converges to 0, and we can take limits term-by-term since x/n is inside of the interval of convergence for this series. Therefore, \lim_{n\to\infty}\log(1+x/n)^n = x, and since the exponential function is continuous, this implies that \lim_{n\to\infty}(1+x/n)^n = \mathrm{e}^x.
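We can corroborate this limit numerically. Since the dropped terms are of order 1/n, the error should shrink by roughly a factor of ten each time n grows tenfold; the sketch below checks this for x = 1:

```python
import math

x = 1.0
for n in [10, 100, 1000]:
    # error of the compound-interest approximation to e^x
    print(n, abs((1 + x / n) ** n - math.exp(x)))

# a large n gets within 1e-4 of e
final = (1 + x / 10**6) ** 10**6
print(final)
```

Running this shows the error at n = 10 is about 0.125, at n = 100 about 0.0135, and at n = 1000 about 0.00136, matching the 1/n decay predicted by the series.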
Polymorphic functions in vector calculus

While teaching multi-variable calculus for the first time in a while, I came across a tricky notational point in our textbook (Thomas' calculus - I'm not sure how widespread this notation is). When $\mathbf{r}(t)$ is a vector-valued function, our book writes the arc length parametrization as $\mathbf{r}(s)$, with the same name for the function, changing only the variable. From the right perspective, this makes a lot of sense: one thinks of $\mathbf{r}$ as a polymorphic function that interprets its inputs in the uniquely sensible way, so when passed a value of $t$ it returns $\mathbf{r}(t)$, and when passed an arc length returns $\mathbf{r}(s)$. On the other hand, this isn't the perspective my students (mostly first-year college students) have on functions. It violates the way functions normally behave---if $\mathbf{r}(t)=\langle t,t^2\rangle$ then $\mathbf{r}(s)$ doesn't equal $\langle s,s^2\rangle$. It creates some ambiguities---what does $\mathbf{r}(2)$ mean? Assuming I'm stuck with this notation (it's the one our textbook uses, and I share a final with other people using the same book), what are good practices to work with and explain this to minimize how confusing it is? (And maybe help my students get the most out of this new perspective on functions.)

undergraduate-education vector-calculus

Henry Towsner

$\begingroup$ students are not so logical by and large. For example, try writing $f = f(x)$ and $g = g(y)$. I don't think many of them will complain you conflated the function with its value. Instead, they will usually understand what is meant by such shorthand. The same with $\vec{r}$. One sneaky way around it, just say $\vec{r}(s)$ is really just an abbreviation for $\vec{r}(s(t))$. Or, just be honest, it is an abuse of notation. That said, I really think most students do not notice if you don't draw attention to it. $\endgroup$ – James S.
Cook Jan 31 '18 at 3:50

$\begingroup$ @TheChef I like the idea of telling them it's an abbreviation. But I don't agree that students won't be confused. My students don't complain about conflating f with f(x), but they definitely find it confusing. They don't complain because they don't understand either concept well enough to articulate why it's confusing, but they definitely don't understand that shorthand. $\endgroup$ – Henry Towsner Jan 31 '18 at 5:17

$\begingroup$ @TheChef I doubt any mathematician really understands what is meant by that shorthand. They all think they understand it, but I haven't seen anyone who is able to formalise it (in the sense of implementing a proof checker on a computer that can consistently figure out what is going on). $\endgroup$ – Michael Bächtold Jan 31 '18 at 9:46

$\begingroup$ Maybe address the ambiguity, give examples, and from then on use $\mathbf{r}_t(t)$ and $\mathbf{r}_s(s)$ on the blackboard and in exercises. And alert them to spots in the text where the ambiguity makes a difference. $\endgroup$ – Joseph O'Rourke Jan 31 '18 at 13:06

$\begingroup$ @MichaelBächtold it's an abuse of notation. That means two different ideas are being described by the same symbol in this case. I would also be unable to code it, yet, I think it has meaning. When we write f = f(x) and g = g(y) it likely means I have two variables $x$ and $y$ and for whatever reason $f(x,y)$ has partial derivative w.r.t. $y$ of zero and likewise $g(x,y)$ has partial derivative w.r.t. $x$ of zero. That said, Henry Towsner is correct, many students don't understand it. $\endgroup$ – James S. Cook Feb 1 '18 at 0:38
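The ambiguity in the question can be made concrete with a small computation. In the sketch below (our own illustration, with a helper `t_of_s` we made up for the purpose), the two readings of $\mathbf{r}(2)$ for $\mathbf{r}(t)=\langle t,t^2\rangle$ give genuinely different points:

```python
import math

def r(t):
    # position as a function of the time parameter t
    return (t, t * t)

def speed(t):
    # |r'(t)| = sqrt(1 + 4t^2) for r(t) = <t, t^2>
    return math.hypot(1.0, 2.0 * t)

def arc_length(t, n=10_000):
    # s(t) = integral of the speed from 0 to t (midpoint rule)
    h = t / n
    return sum(speed((k + 0.5) * h) for k in range(n)) * h

def t_of_s(s):
    # invert the monotone function s(t) by bisection
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if arc_length(mid) < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_time = r(2.0)           # "r(2)" read as time t = 2
p_arc = r(t_of_s(2.0))    # "r(2)" read as arc length s = 2
print(p_time, p_arc)      # two different points on the same curve
```

Both points lie on the parabola, but they are not the same point, which is exactly why an unqualified $\mathbf{r}(2)$ is ambiguous under the textbook's convention.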
This example is discussed in some detail in a paper by Edward F. Redish and Eric Kuo, published at https://link.springer.com/article/10.1007/s11191-015-9749-7 (free preprint version at https://arxiv.org/pdf/1409.6272). The key insight concerns what the authors call Corinne's Shibboleth: One of your colleagues is measuring the temperature of a plate of metal placed above an outlet pipe that emits cool air. The result can be well described in Cartesian coordinates by the function $$T(x,y) = k(x^2 + y^2)$$ where $k$ is a constant. If you were asked to give the following function, what would you write? $$T(r,\theta) = \textrm{ ?}$$ The context of the problem encourages you to think in terms of a particular physical system. Physicists tend to think of $T$ as a physical function – one that represents the temperature (in whatever units) at a particular point in space (in whatever coordinates). Mathematicians tend to consider $T$ as a mathematical function – one that represents a particular functional dependence relating a value to a pair of given numbers. As a result, physicists tend to answer that $T(r,\theta) = kr^2$ because they interpret $x^2 + y^2$ physically as the square of the distance from the origin. If $r$ and $\theta$ are the polar coordinates corresponding to the rectangular coordinates $x$ and $y$, the physicists' answer yields the same value for the temperature at the same physical point in both representations. In other words, physicists assign meaning to the variables $x$, $y$, $r$, and $\theta$ – the geometry of the physical situation relating the variables to one another. Mathematicians, on the other hand, may regard $x$, $y$, $r$, and $\theta$ as dummy variables denoting two arbitrary independent variables. The variables $(r, \theta)$ or $(x,y)$ don't have any meaning constraining their relationship. Mathematicians focus on the mathematical grammar of the expression rather than any possible physical meaning. 
The function as defined instructs one to square the two independent variables, add them, and multiply the result by $k$. The result should therefore be $T(r,\theta) = k(r^2 + \theta^2)$. Typically, a physicist will be upset at the mathematician's result. You might hear, "You can't add $r^2$ and $\theta^2$! They have different units!" The mathematician is likely to be upset at the physicist's result. You might hear, "You can't change the functional dependence without changing the name of the symbol! You have to write something like $$T(x,y) = S(r,\theta) = kr^2.$$ To which the physicist might respond, "You can't write that the temperature equals the entropy! That will be too confusing." (Physicists often use $S$ to represent entropy.) (Parenthetically, coming up soon in my multivariable Calculus course -- and presumably in the OPs course as well -- are functions expressed in polar, cylindrical, and spherical coordinates, so the particulars of this example may be salient in that context as well.) The relationship between this example and the one in the OP is that the convention of using the notations $\mathbf{r}(t)$ and $\mathbf{r}(s)$ to refer to two different parametrizations of the same path seems (to me) to be in accordance with the "Physicist's interpretation": the variables $t$, $s$ and $\mathbf{r}$ are understood to have physical meaning (respectively: time, arc length, and position), so that $\mathbf{r}(t)$ means "the position corresponding to time $t$" while $\mathbf{r}(s)$ means "the position corresponding to arc length $s$". This is perfectly coherent and reasonable if the variables are understood as "physical variables", but it is at odds with the mathematician's understanding of a function as a mapping that assigns a value to an arbitrary input represented by a dummy variable. What to do about it? Here I think one strategy that can be helpful is to use less functional notation, rather than more. 
Consider the example of an object moving in a circular path of radius $5$, i.e. $\mathbf{r}(t)=\left<5\cos(t),5\sin(t)\right>$. I would propose actually eliding the variable from the left-hand side entirely, and writing it this way:

Consider a particle whose position at time $t$ is given by $$\mathbf{r} = \left<5\cos(t),5\sin(t)\right>$$ Then the velocity is $$\mathbf{r}' = \left<-5\sin(t), 5\cos(t)\right>$$ so the speed of the object is $$|\mathbf{r}'| = \sqrt{25\cos^2(t) + 25\sin^2(t)} = 5 $$ and therefore the distance traveled between $t=0$ and $t=T$ is $$ \int_0^T 5 \, dt = 5T$$ This gives us the relationship $s = 5t$, where $s$ is the distance traveled by time $t$. Inverting this relationship we have $t = \frac{s}{5}$, which allows us to express the position of the object when it has traveled a distance $s$ as $$\mathbf{r} = \left<5\cos\left(\frac{s}{5}\right),5\sin\left(\frac{s}{5}\right)\right>$$

Notice that in this exposition functional notation has been almost completely avoided. An expression like $\mathbf{r}(2)$ (which, as the OP notes, is ambiguous) would be handled by using a descriptive phrase: either "$\mathbf{r}$ when $t=2$" or "$\mathbf{r}$ when $s=2$", depending on which is intended. If you think about it, this is really no different from the common single-variable usage of writing something like $y = 5x^3 +2x$, which also avoids functional notation. And just as we do not hesitate to write $\frac{dy}{dx}$, I think it is perfectly reasonable to write $\frac{d\mathbf{r}}{dt}$ and $\frac{d\mathbf{r}}{ds}$ to refer to the respective derivatives.

mweiss

$\begingroup$ I very much like your answer and would have suggested the same for teaching. Let me just add that there is a well known notation for "$\mathbf{r}$ when $s=2$". It's $\mathbf{r}|_{s=2}$. But beware that many colleagues might object to it when it is not applied to a derivative. Also I'm not a 100% sure about its semantics.
For example $d(x|_{x=2})$ cannot be the same as $(dx)|_{x=2}$, so it's not literally the same as logicians' substitution. $\endgroup$ – Michael Bächtold Feb 1 '18 at 8:12 $\begingroup$ That example is fantastic, and clarifying. $\endgroup$ – Henry Towsner Feb 1 '18 at 20:07 $\begingroup$ Note that it's not just reasonable to write $d\mathbf{r}/dt$ and $d\mathbf{r}/ds$ but preferable, since otherwise how is one to know which one $\mathbf{r}'$ is supposed to be? (If only two parametrizations are used, one with independent variable $t$ given first, and another with independent variable $s$ parametrized by arclength, then it's probably a safe bet that $d\mathbf{r}/dt$ is meant, and you should tell the students so, but I still always write $d\mathbf{r}/dt$ in class.) $\endgroup$ – Toby Bartels Aug 20 '18 at 8:07 I don't have much to add to mweiss' nice answer in terms of teaching suggestions. But I would like to add my own point of view on what the abuse of notation $\mathbf{r}=\mathbf{r}(s)=\mathbf{r}(t)$ means and where it comes from historically. The first person to do this abuse of notation (implicitly) seems to have been Jacobi around 1830 (I suggest you read that link after reading the rest of what I'll say). Note that 1830 is more than 100 years after Bernoulli and Euler introduced the notation $y=f(x)$ and more than 140 years after Leibniz started to talk of functions. In the period between Leibniz and Jacobi we find luminaries like Euler, Lagrange, Laplace, Fourier, Bolzano, Cauchy and Gauss who, as far as I can tell, never did this. So it's not the case, as some people believe, that mathematicians and physicists have been writing $y=y(x)$ ever since the invention of functions. Even after Jacobi there are many people, like Riemann, Peano or Planck who apparently didn't do it. So it's also not the case that physicists invented $y=y(x)$ or somehow need it more urgently than mathematicians. So why did Jacobi start doing it?
In the above link you can read what he himself had to say about it (I highly recommend it to anyone teaching multivariable calculus), but his own words actually don't explain it completely. To understand it better we first need to understand how the word function was used prior to about 1900. Most mathematicians seem unaware of the dramatic change of meaning this word underwent during the period 1900-1920. And it's a non-trivial task for a modern mathematician with a set theoretic perspective to make sense of what it meant prior to 1900. But let's try. If you open any calculus textbook written after Leibniz and before at least 1910 (a ~200 year period) you'll find that the word "function" is always defined as function of something. Here is a typical example from the end of that period taken from Peano's Calcolo differenziale e principii di calcolo integrale 1884, p.3: Among the variables there are those to which we can assign arbitrarily and successively different values, called independent variables, and others whose values depend on the values given to the first ones. These are called dependent variables or functions of the first ones. We shall first treat functions of a single independent variable, and we shall say that: a function $y$ of $x$ is given in an interval $(a, b)$, if to any value of $x$ in between $a$ and $b$ corresponds a unique and determinate value for $y$ - whatever the means of determining it. So for example $x^{2}$ is a definite function of $x$ for any value of $x$, and is hence given in any interval; $\sqrt{x}$ understood as the arithmetic root of $x$, is given for all positive values of $x$ ; while $\frac{1}{1} + \frac{1}{2} +\frac {1}{3} +\cdots + \frac{1}{x}$ is a function of $x$ defined only for integer and positive values of the variable, etc. So a function, like $x^2$ or $y$, is a variable quantity, just like $x$, only that it satisfies some additional property. 
If you feel uncomfortable with "$y$ is a function of $x$" because you are so used to modern functions, maybe paraphrasing it as "$y$ depends on $x$" helps a bit. (Certainly if $f:\mathbb{R}\to \mathbb{R}$, no modern mathematician would say that $f$ depends on $x\in \mathbb{R}$.) Hence whenever they called something a function, they had to add of what that thing is a function. But often it was clear from the context or the notation, so they would soon drop the "of $x$" and simply talk of functions. So for example on p.13 we find Peano writing One says that a function $f(x)$ becomes maximal relative to an interval $(a,b)$ when $x=x_0$... To be correct he should have said something like: "One says that a function of $x$, $f(x)$, becomes ...". But it was clear from the notation. (Observe that nothing prevents $f(x)$ from being a function of something else too. For example when $x$ is itself a function of $t$, then $f(x)$ would also be a function of $t$.) We inherited the phrase "$f(x)$ is a function" —which we find in every modern calculus textbook (and btw. you used it in the question)— from this pre 1900 period, even though according to our modern convention we should correctly say "$f$ is a function". To emphasise again the difference between "function of ..." and our modern "functions": when $y=f(x)$ they called $y$ and $f(x)$ a function, while we call $f$ a function. If they called $f(x)$ the function, what did they call $f$? After all the notation $f(x)$ existed since Bernoulli ~1718. Didn't they also call $f$ a function? No, not officially. The first ones (Bernoulli, Euler, Lagrange) called $f$ the characteristic of the function $f(x)$. I interpret this as saying: it's just a character used to distinguish one function of $x$, say $f(x)$, from another, say $g(x)$. But mainly people before 1900 didn't call $f$ anything at all! Peano for example doesn't in his calculus book.
In fact, no one treated $f$ as a mathematical object on its own, before Dedekind, Peano, Cantor and Frege (to a certain extent independently) changed that around 1890. (There are some surprising historical bits in that thread. For example: Dedekind, Peano and Cantor all gave $f$ a new name, calling it resp. "map", "prefix sign for a function" and "allocation". They all preserved the original use of "function" for $f(x)$! Moreover Dedekind formulated his notion of map nine years before realising that he could identify it with the $f$ appearing in their functions $f(x)$. Only Frege (unfortunately) suggested calling $f$ a function, and somehow it became standard. We would probably have less difficulty communicating today, if $f$ had not gotten the name function.) Returning to $y=y(x)$ and why Jacobi started that. Besides the good reasons about partial derivatives he gives in De Determinantibus 1840, I suspect that he simply wanted a notation to indicate of what a certain variable is to be considered a function. I used to believe that in the equation $y=y(x)$ the $y$ on the left and on the right were objects of different types which by abuse were given the same name (the right one being of type $\mathbb{R}\to\mathbb{R}$, the left one of type $\mathbb{R}$.) But I now think that this is not what Jacobi intended or how physicists would like to use it. Instead, we should literally think of the $y$ on the right and left as the same object (of type "variable quantity"), and what is being abused is the notation for function application $f(x)$. In other words: had Jacobi chosen another notation like square brackets $y[x]$ to annotate that $y$ is considered as function of $x$, instead of using the already existing notation $f(x)$ for "application of a map to a variable", we might have less trouble with it now. I suspect one could formalise this idea similarly to how computer scientists formalise "ascription". See for example Pierce, Types and Programming Languages.
Having said that, I don't think that by doing that we would gain much. Writing $\mathbf{r}=\mathbf{r}[t]$ seems mainly an aid for the reader, and we might as well do without it, as mweiss suggested. The harder problem seems to be to correctly formalise the notion of "variable quantity" and surrounding things like the notation $dy/dx$. Michael Bächtold $\begingroup$ This is an interesting history that I wasn't aware of. I agree that mathematicians are too glib about taking the modern set theoretic definition of a function for granted, but this is a facet of that I haven't seen before. $\endgroup$ – Henry Towsner Feb 1 '18 at 20:07 $\begingroup$ This is a wonderful answer. I probably am not careful enough in my teaching. I'm torn between being correct and making connections. I know if I am a bit fuzzy it allows me to see more students as correct. Anyway, one thing I think we can say about math before 1900 is that the focus was on equations rather than functions. Your point that the word function has been revised in our modern usage vs. the 19th century use is very helpful. $\endgroup$ – James S. Cook Mar 27 '19 at 2:45 When I was a student learning the difference, I found the unit-speed interpretation to be more helpful. Try using an animation in Matlab, Python, or Mathematica to illustrate this for simple curves by simultaneously tracing out $r(t) = (\cos{2t}, \sin{2t})$ and $r(s)$ or $\tilde{r}(t) = (t, t^2)$ and $\tilde{r}(s)$. Seeing the curve being traced out in its "normal" parametrization and its arclength parametrization will help show that the parameters are describing the same curve but at different speeds (the two parameters for the curve corresponding to $\tilde{r}$ show this especially well since the standard parametrization is not constant speed).
I think students will understand that implicitly, the arclength parametrization is just hiding the composition with a function involving the inverse of the arclength integral and doing so allows you to travel at unit speed. In other words, $r(s)$ really just means $r(t(s))$. The technicalities may be better suited for a differential geometry course (see the first chapter of Elementary Differential Geometry by Andrew Pressley), as concepts like regularity of a curve and the Inverse Function Theorem are required. geometry_geek $\begingroup$ If the $r$ is to have the same meaning in $r(s)$ as it has in $r(t)$, except that $r(s)$ is really an abbreviation, then $r(s)$ has to be an abbreviation of $r(t(s))$, not of $r(s(t))$. And if the $t$ is also to have the same meaning in $r(t)$ as in $r(t(s))$, then $r(t)$ also has to be an abbreviation for $r(t(s))$. There is some sense in that, to say that the fundamental parametrization is always by arclength. (Although note that this parametrization still depends on orientation, and possibly a choice of base point, at least for infinite curves.) $\endgroup$ – Toby Bartels Aug 20 '18 at 8:16 $\begingroup$ Thanks for the correction! $\endgroup$ – geometry_geek Mar 24 '19 at 18:38
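The answers above suggest illustrating $r(t) = (\cos 2t, \sin 2t)$ next to its arc-length parametrization, and note that $r(s)$ is really the composition $r(t(s))$. A minimal (non-animated) Python sketch of that idea, using NumPy: the helper `arclength_reparam` (my name, not from the thread) numerically builds $s(t)$, inverts it, and returns a callable for $r(s)$, making the hidden composition explicit.

```python
import numpy as np

def arclength_reparam(curve, t0, t1, n=20001):
    """Numerically reparametrize a plane curve by arc length.

    curve: function t -> (x, y) accepting numpy arrays or scalars.
    Returns (total_length, r_of_s), where r_of_s(s) is the point
    reached after traveling a distance s from t0.
    """
    t = np.linspace(t0, t1, n)
    x, y = curve(t)
    # speed |r'(t)| from finite differences
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    speed = np.hypot(dx, dy)
    # cumulative arc length s(t) via the trapezoid rule
    s = np.concatenate([[0.0],
                        np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))])

    def r_of_s(si):
        ti = np.interp(si, s, t)  # invert s(t) to get t(s)
        return curve(ti)          # r(s) := r(t(s))

    return s[-1], r_of_s

# The circle r(t) = <cos 2t, sin 2t> traced for t in [0, pi]
# has constant speed 2, so its total length is 2*pi:
length, r_s = arclength_reparam(lambda t: (np.cos(2 * t), np.sin(2 * t)),
                                0.0, np.pi)
```

Evaluating `r_s` at a given arc length then answers "$\mathbf{r}$ when $s = \dots$" directly, without overloading the symbol $r$ for two different maps.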
Therapeutic effects of different doses of prebiotic (isolated from Saccharomyces cerevisiae) in comparison to n-3 supplement on glycemic control, lipid profiles and immunological response in diabetic rats Janina de Sales Guilarducci1, Breno Augusto Ribeiro Marcelino1, Isaac Filipe Moreira Konig1, Tamira Maria Orlando1, Mary Suzan Varaschin1,2 & Luciano José Pereira ORCID: orcid.org/0000-0002-0502-25541 Diabetology & Metabolic Syndrome volume 12, Article number: 69 (2020) The regular intake of fiber generates numerous health benefits. However, the efficacy depends on the duration of consumption and the ingested dose. Studies investigating the optimal dose are of interest to enable the inclusion of fiber in the routine treatment of diabetic patients. We aimed to evaluate the effects of different doses of β-glucan (BG—isolated from Saccharomyces cerevisiae), in comparison to n-3 supplement, on the inflammatory and metabolic parameters of Wistar rats induced to diabetes by streptozotocin. Forty animals were randomly divided into six groups receiving 0 mg/kg, 10 mg/kg, 20 mg/kg, or 40 mg/kg BG daily for 4 weeks or fish oil derivative [1000 mg/kg of omega-3 fatty acids (n-3)] for the same period. One additional group was composed of healthy controls. Serum metabolic and immunological parameters were evaluated by colorimetric and ELISA assays, respectively. Histopathological analysis of the liver, small intestine and pancreas was also conducted. Significant changes due to BG intake were set into regression models with second-degree fit in order to estimate the optimal BG dose to achieve health benefits. The animals that ingested BG had lower food and water intake (p < 0.05) than the negative control group (0 mg/kg). However, consumption was still elevated in comparison to healthy controls.
Blood glucose and serum levels of total cholesterol, LDL-c, and TG were reduced (p < 0.05) in comparison to diabetic animals without treatment (better than or similar to the n-3 group depending on dose), but did not reach normal levels (in comparison to healthy controls). HDL-c was not different (p > 0.05) among all groups. These reductions were already seen with the lowest dose of 10 mg/kg. On average, the serum levels of the hepatic enzymes ALT and AST were 40% and 60% lower in the BG groups in comparison to diabetic animals without treatment (better results than the n-3 group). The group receiving 40 mg/kg reached values similar to healthy controls for ALT, whereas the same result occurred from the dose of 10 mg/kg for AST. The ideal dose, estimated from the mean of all metabolic parameters, was approximately 30 mg/kg/day. Regarding the immunological profile, TNF-α significantly decreased in the BG groups compared to controls (p < 0.05), reaching better values than the n-3 group and similar to healthy controls. No significant differences were found between the groups in IL-1β or IL-10 (p > 0.05). No histological changes were found in the pancreas, liver, or intestine due to treatment among diabetic animals. BG significantly reduced blood glucose as well as serum total cholesterol, LDL-c and TG. There was a hepatoprotective effect due to the reduction in ALT and AST and a reduction in TNF-α, indicating a modulation of the immune response. In general, BG effects were better than the n-3 supplement (or at least comparable) depending on the dose. Diabetes mellitus (DM) is a chronic disease characterized by the autoimmune or idiopathic destruction of pancreatic cells [1] and/or insulin resistance [2]. Diabetic individuals have several abnormalities in the metabolism of macronutrients [3], resulting in hyperglycemia and predisposition to the development of several comorbidities, such as atherosclerosis, arterial hypertension, stroke, and acute myocardial infarction [4].
These associated complications compromise the quality of life of the affected individuals, harming their emotional, physical, and social well-being [5]. Additionally, this disease puts a great financial burden on the health systems worldwide [6]. Conventional treatment of DM involves changes in lifestyle with an emphasis on food and nutrition education and regular physical activity [7], sometimes in addition to oral medication [8] and/or insulin therapy [9]. Variations in blood glucose are a major challenge for patients with DM, especially type 1 DM [10]. Functional and nutraceutical foods have been investigated as adjuvants in the control of this disease [11]. In this context, prebiotics comprise substrates that are selectively utilized by host microorganisms conferring a health benefit. This broader definition includes even non-carbohydrate substances [12]. Fermentable soluble fibers modify the intestinal microflora, promoting an increase in Lactobacillus and Bifidobacterium and a decrease in Bacteroides and Clostridium [13]. Besides, omega-3 polyunsaturated fatty acid (PUFA) supplementation decreases Faecalibacterium and increases Bacteroidetes and butyrate-producing bacteria belonging to the Lachnospiraceae family [14, 15]. Evidence from systematic reviews evaluating randomized clinical trials indicates that probiotics and/or prebiotics (and synbiotics, combining both) present antidiabetic effects by interfering with the composition of the gut microbial environment, reducing intestinal endotoxin concentrations and decreasing energy absorption [16]. The consumption of up to 3 g/day of marine-based n-3 PUFAs is generally regarded as safe (GRAS) by the US Food and Drug Administration (FDA) [15]. Yeast BG supplementation derived from S. cerevisiae has been approved for use in food supplements by the FDA and received GRAS status in 2008 (GRAS notice number [GRN]: 000239) at a maximum dose of 200 mg per serving [17], with the daily dose ranging from 100 to 500 mg [18].
Soluble fiber, including β-glucan (BG), has received attention due to its hypoglycemic [19,20,21] and hypocholesterolemic effects [22, 23], with consequent reduction in insulin resistance [24], hepatoprotection [19], and immunostimulation [25]. These effects can help decrease DM comorbidities [26] by forming a protective intestinal barrier, delaying the absorption of lipids and free cholesterol [27]. This barrier acts as one of the main defense mechanisms of the body and produces immunoregulatory signals [28]. The regular intake of fiber generates numerous health benefits [29]. However, the greater efficacy of BG depends on the time of consumption and the ingested dose [30]. Thus, studies that investigate the optimal intake dose are of public health interest to produce data that will enable cost savings, promoting the inclusion of fiber in the routine treatment of DM patients [31] and reducing the risk of toxicity in comparison to other treatments [32]. Most studies investigate the effects of cereal fibers. The present study aimed to evaluate the effects of the ingestion of different doses of BG (isolated from Saccharomyces cerevisiae), in comparison to n-3 supplement, on the metabolic and inflammatory profile of rats with streptozotocin-induced DM. The present study was approved by the Ethics Committee on Animal Use (CEUA) under protocol number 082/17. A total of 40 male rats of the Wistar breed (Rattus norvegicus albinus) were used. The animals were 11 weeks old, with a weight of 278.4 grams (± 19 g). These animals were subjected to quarantine (38 days) and acclimated for 7 days. Then the animals were randomly distributed into five groups (N = 7/group) and kept in collective boxes. One additional group was composed of healthy controls (n = 5). The animals were treated in a climate-controlled room with a constant temperature of 23 ± 2°C and a light-dark cycle of 12/12 h. Commercial food and water were provided ad libitum throughout the experiment. 
Experimental induction of diabetes DM was induced by the intraperitoneal administration of 70 mg/kg of streptozotocin (STZ) (Sigma, St. Louis, MO, USA) dissolved in ice-cold citrate buffer (pH 4.5) (4 °C) [33]. Induction was done at the end of the afternoon, and after 48 h, the animals were fasted for 8 h. Blood glucose was measured by incision of the tail tip, with prior topical anesthesia (1 mg/kg lidocaine ointment), using the Accu-Chek Active device (© 2016 Roche Diabetes Care, lot 06061982, Germany). Animals with blood glucose above 250 mg/dL were considered diabetic [34]. Oral administration of β-glucan and fish oil The doses of BG were given daily by gavage, diluted in 0.3 mL of saline solution (Table 1). The BG used was obtained from the extract of the cell wall of Saccharomyces cerevisiae [Macrogard (Açucareira Quatá S/A—Divisão Biorigin, Lençois Paulista, SP, Brazil); composition: β-glucan—minimum 60.0%; raw protein—maximum 8.0%; pH (2% solution) 4.0–7.0; ash—maximum 10.0 g/100 g]. Table 1 Doses of BG administered by gavage in animals with streptozotocin-induced DM (70 mg/kg) Commercially acquired fish oil capsules were broken and poured into an amber glass daily. The dose was also administered by oral gavage as listed in Table 1. According to the data reported by the manufacturer, the fish-derived oil used had 0.58 g eicosapentaenoic acid (EPA) and 0.37 g docosahexaenoic acid (DHA) for each 3 grams of the product. Collecting material for analysis At the end of the 28-day experimental period, the animals were fasted for 8 h and euthanized by exsanguination through cardiac puncture after anesthesia with sodium thiopental (50 mg/kg, intraperitoneal). The blood samples were stored in sterile siliconized tubes (Vacuette®, Centerlab, Belo Horizonte, MG, Brazil), vacuum-sealed with clot activator (micronized silica particles), and plasma collection was performed using 4% EDTA (anticoagulant).
Next, the tubes were centrifuged at 4000 rpm for 20 min. The liquid contents were poured into 2 mL Eppendorf tubes and stored in an ultrafreezer at − 80 °C until the time of analysis. Liver, small intestine, and pancreas were collected and kept in 10% formalin solution for 48 h (Fig. 1). Histopathological analysis of the liver, small intestine, and pancreas After 48 h in 10% buffered formalin, the samples were conditioned in 70% ethanol. The fragments were processed in the Histotechnic (DM-70/12D OMA, São Paulo, SP, Brazil) and embedded in paraffin for cutting in a rotary microtome. The sections were cut at 4 µm thickness. They were stained with hematoxylin and eosin for analysis of morphological features under light microscopy. Histological slides were evaluated by an experienced veterinary pathologist blind to the treatments. The pancreases were analyzed histologically for the number of cells present in the islets of Langerhans, and these were classified according to the following score: normal (−) (number of islet cells greater than 30); (+) mild lesion (number of islet cells 20–30); (++) moderate lesion (number of islet cells 10–20); (+++) severe lesion (number of islet cells less than 10) [35]. The liver and small intestine (duodenum, jejunum, and ileum) were analyzed histologically to identify the occurrence of lesions or any other type of microscopic alteration. Histological images were captured using a camera coupled to a light microscope (CX31 binocular microscope, Olympus Optical do Brasil Ltda, São Paulo, SP, Brazil). Metabolic and immunological analyses The serum concentrations of triglycerides (TG), total cholesterol (TC), and HDLc were determined using a colorimetric assay according to the manufacturer's instructions (Lab Test®, Lagoa Santa, Minas Gerais, Brazil). The level of LDLc was calculated using the Friedewald equation [36], where LDLc = TC − HDLc − TG/5. For these analyses, serum and reagent were pipetted together and then incubated in a water bath at 37 °C for 10 min.
The reading was done at a wavelength of 505 nm for TG and TC and at 500 nm for HDLc in an Epoch Biotek® spectrophotometer (Biotek®, Winooski, USA). Calculations were done with the formulas described below: $$\text{TG and TC (mg/dL)} = \frac{\text{Test Absorbance}}{\text{Standard Absorbance}} \times 200$$ $$\text{HDL (mg/dL)} = \frac{\text{Test Absorbance}}{\text{Standard Absorbance}} \times 40$$ The liver enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were obtained by plasma analysis in a colorimetric assay (Bioliquid®, Pinhais, Paraná, Brazil). The reagent (1 mL) was incubated for 3 min at 37 °C. Next, the plasma samples were added, and the reading was taken at 340 nm after 1 min. The concentration was determined according to the following formula: $$\text{Sample/min (U/L)} = \frac{\Delta\,\text{Absorbance}}{\text{Total Absorbance}} \times 1746$$ The serum concentrations of interleukin (IL)-1β, IL-10, and tumor necrosis factor alpha (TNF-α) were determined by enzyme-linked immunosorbent assay with commercial kits (Invitrogen®, Thermo Fisher Scientific, Vienna, Austria). Serological samples were diluted (1:5) and pipetted with the reagent. These were incubated for 120 min at room temperature (21–25 °C) in a 3-dimensional homogenizer KJMR-V® (Global Equipment, Global Trade Technology, São Paulo, Brazil). The readings were taken at 450 nm in the Epoch Biotek® spectrophotometer (Winooski, VT, USA). The data were compared by analysis of variance (ANOVA) followed by the Student–Newman–Keuls post hoc test (p < 0.05) in the statistical software Prism 5.0 (GraphPad Prism, CA, USA). Data are expressed as the mean ± standard deviation. For the regression model, the second-degree fit was performed using Excel software (Microsoft Excel, 2013).
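The kit calculations above are simple absorbance ratios. As a sketch, they can be written as small helper functions (function names are mine, not from the kits), together with the Friedewald equation used for LDL-c in the Methods:

```python
def tg_tc(test_abs, standard_abs):
    # TG or TC (mg/dL) = (test absorbance / standard absorbance) * 200
    return test_abs / standard_abs * 200.0

def hdl(test_abs, standard_abs):
    # HDL-c (mg/dL) = (test absorbance / standard absorbance) * 40
    return test_abs / standard_abs * 40.0

def transaminase(delta_abs, total_abs):
    # ALT/AST (U/L) = (delta absorbance / total absorbance) * 1746
    return delta_abs / total_abs * 1746.0

def ldl_friedewald(tc, hdl_c, tg):
    # Friedewald equation: LDL-c = TC - HDL-c - TG/5 (all in mg/dL)
    return tc - hdl_c - tg / 5.0
```

For example, a total cholesterol of 200 mg/dL, HDL-c of 50 mg/dL, and TG of 150 mg/dL give an estimated LDL-c of 120 mg/dL.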
Experimental design over time The water and food intake (Fig. 2) were higher in the vehicle (negative control group, 0 mg BG) and 10 mg/kg BG groups than the other treatments (p < 0.05), demonstrating the classic symptoms of polydipsia and polyphagia, especially in the 0 mg group. However, consumption was still elevated in comparison to healthy controls (p < 0.05). There was a significant reduction in blood glucose (approximately 27%) at all nonzero BG doses (p < 0.05) compared with the 0 mg and omega-3 groups (Fig. 3). There was a significant reduction in total cholesterol (Fig. 3) at BG doses of 10 mg/kg and above (approximately 23%), as well as in TG (Fig. 3) (32% reduction) and LDL-c (Fig. 3) (approximately 30%) (p < 0.05). On the other hand, no significant differences were seen in HDL-c (Fig. 3). In general, all BG doses presented similar results compared to omega-3 (p > 0.05), except for blood glucose levels, in which BG presented better results (Fig. 3). Although several parameters significantly improved, the values reached were still higher than in the healthy group. HDL-c was not different (p > 0.05) among all groups, including the healthy controls. Daily water and food intake of diabetic Wistar rats induced by intraperitoneal injection of streptozotocin (70 mg/kg) and treated with different doses of β-glucan from Saccharomyces cerevisiae for 28 days. Different lowercase letters indicate significant differences by the Student–Newman–Keuls test at 5% probability (p < 0.05) Metabolic parameters of diabetic Wistar rats induced by intraperitoneal injection of streptozotocin (70 mg/kg) and treated with different doses of β-glucan from Saccharomyces cerevisiae for 28 days. Different lowercase letters indicate significant differences by the Student–Newman–Keuls test at 5% probability (p < 0.05) Liver enzymes ALT and AST (Fig. 3) were both lower (up to 40% and 60%, respectively) in BG groups vs. the 0 mg group.
There was a significant reduction in ALT in the omega-3 group, similar to the 20 mg BG group (p < 0.05). The group receiving 40 mg/kg reached values similar to healthy controls for ALT, whereas the same result occurred from the dose of 10 mg/kg for AST. No significant differences were found between the groups in serum IL-1β, IL-10, or IL-1β/IL-10 ratio (Fig. 4b), but a significant reduction in TNF-α was observed when compared to 0 mg (p < 0.05), reaching healthy control values. Inflammatory parameters of diabetic Wistar rats induced by intraperitoneal injection of streptozotocin (70 mg/kg) and treated with different doses of β-glucan from Saccharomyces cerevisiae for 28 days. Different lowercase letters indicate significant differences by the Student–Newman–Keuls test at 5% probability (p < 0.05) In order to estimate the best dose of BG, the values of each parameter (blood glucose, TC, LDL-c, TG, ALT, and AST) were fit into a second-degree regression model (Fig. 5). The optimal dose was estimated by the mean of all parameters, and the value found was approximately 30 mg/kg/day. No significant changes were found in histological samples of the liver or duodenum, jejunum, or ileum submucosa, regardless of the treatment. According to the established criteria, there was a significant reduction in the number of pancreatic islet cells due to the induction of diabetes (Table 2), but no differences were found among treated and 0 mg groups (p > 0.05). Healthy controls presented only normal scores, differing from all diabetic groups.
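The dose-estimation step described above (a second-degree fit per parameter, then locating its optimum) can be sketched with NumPy's `polyfit`. The dose-response values below are hypothetical, not the study's data; they merely illustrate finding the vertex of the fitted parabola:

```python
import numpy as np

def optimal_dose(doses, responses):
    """Fit response = a*dose^2 + b*dose + c (least squares) and
    return the vertex dose -b/(2a), the fitted optimum."""
    a, b, _c = np.polyfit(doses, responses, 2)
    return -b / (2.0 * a)

# Hypothetical dose-response values (NOT the study's data), shaped so
# the response falls and then turns back up, giving an interior optimum:
doses = np.array([0.0, 10.0, 20.0, 40.0])
glucose = np.array([500.0, 420.0, 380.0, 400.0])
dose_star = optimal_dose(doses, glucose)  # interior minimum of the parabola
```

Averaging such per-parameter optima across all six parameters would mirror the paper's procedure for arriving at its ~30 mg/kg/day estimate.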
Adjustment of the second-order linear model according to the R² coefficient of determination in diabetic Wistar rats induced by intraperitoneal injection of streptozotocin (70 mg/kg) and treated with different doses of β-glucan from Saccharomyces cerevisiae for 28 days Table 2 Histopathological scores of the pancreas of streptozotocin-induced diabetic rats (70 mg/kg) treated with different doses of yeast β-glucan The present study demonstrated the beneficial effects of the consumption of BGs on blood glucose, reducing clinical signs of polyphagia and polydipsia. There were also positive effects on serum total cholesterol, LDL-c, and TG, in addition to a hepatoprotective effect. Water and food intake were significantly lower in the groups receiving BG regardless of the dose. This reduction may be associated with the ability of BG to promote satiety [37]. BGs increase peptide YY secretion [38] and delay gastric emptying by increasing viscosity and water retention in the intestine [39] (reducing intestinal peristalsis, i.e., postprandial contractility is extended), aiding in glucose homeostasis [40] and reducing appetite. The decrease in water and food intake with omega-3 supplementation also occurs due to its ability to modify the expression of neuropeptides related to appetite in the hypothalamic axis [41]. In the present study, the results of BG and n-3 were largely equivalent, with even better results for BG on reducing blood glucose levels depending on the dose. BG is able to reduce the absorption of glucose due to its ability to form a barrier (viscous gel) in the intestine, causing a delay in the absorption of carbohydrates [42]. In addition, the capacity of yeast BG to aid in the decrease in glucose transporters SGLT1 and GLUT2 in the small intestine has been reported [43]. This same mechanism was also demonstrated for oat BG [44].
In obesity and type 2 diabetes, the composition of the gut microbiota is associated with excessive ingestion of high-fat diets (HFD) and low-grade inflammation. Taken together, these factors play important roles in the development of obesity and other chronic diseases [45]. Briefly, HFD generates a microbiota shift, increasing the expression of fat translocase, scavenger receptor CD36 and the scavenger receptor class B type 1 (SR-BI). SR-BI binds and incorporates lipids and lipopolysaccharides (LPS) into chylomicrons. After epithelial translocation, LPS is transferred to lipoproteins (such as HDL) and is directed to adipocytes. At this site, LPS contributes to M2-to-M1 phenotype macrophage polarization and adipocyte hypertrophy [45]. Tryptophan-derived metabolites produced by the gut microbiota also control the expression of the miR-181 family in white adipocytes, which regulates energy expenditure and insulin sensitivity [46]. In the present study diabetes was induced by streptozotocin β-cell destruction, which approximates type 1 diabetes. Even in this situation, we found positive effects of BG ingestion. Microbial fermentation of prebiotics generates short-chain fatty acids (SCFA), such as acetate and butyrate, that have been shown to protect against oxidative and mitochondrial stress [47] and enhance the gut barrier, and increases glucagon-like peptide 1 and 2 (GLP-1 and GLP-2) secretion [48], which delays gastric emptying and induces satiety [49]. Besides, incretins reduce hepatic expression of inflammatory and oxidative stress markers during obesity and diabetes [50]. GLP-1 stimulates insulin secretion and reduces glucagon secretion from pancreatic α cells, reducing liver glucose output [51] and improving peripheral uptake of glucose [52]. Besides, SCFA inhibit lipolysis and reduce inflammation, enhancing energy metabolism regulation [16].
A recent study showed that SCFAs enhanced the viability of islets and β-cells and prevented STZ-induced apoptosis, loss of viability, mitochondrial dysfunction, and the overproduction of reactive oxygen species (ROS) and nitric oxide (NO) [47]. Mechanisms responsible for the efficacy of dietary n-3 PUFAs include reductions in IFN-γ, IL-17, IL-6, and TNF-α, corroborating our findings (reduction in TNF-α) [53]. Indeed, n-3 PUFAs prevent lymphocyte infiltration into regenerating pancreatic islets and elevate the expression of the β-cell markers Pdx1 and paired box 4 (Pax4) [53], which resembles our present finding of improved glucose control in animals receiving the n-3 supplement relative to the control group. Decreased glucose was also found in diabetic rats receiving doses of 6 mg and 12 mg of fungal BG; the authors found significant reductions of 17% and 52% in blood glucose, respectively [54]. A significant reduction in blood glucose of 32% was also reported in a previous study by our group using a dose of 30 mg/kg BG for 28 days in streptozotocin-induced diabetic animals [19]. No significant differences were found in blood glucose levels with the use of omega-3 in the present study, corroborating previous studies in type 1 diabetic patients [55, 56]. Indeed, adverse effects of omega-3 on glucose homeostasis have been reported, attributed to an increased need for insulin [57]: polyunsaturated fatty acids can alter the fluidity of cell membranes [58], decreasing the affinity of insulin for its receptors [57]. The biochemical parameters TC, TG, and LDLc also decreased significantly with the ingestion of BG. These hypoglycemic and hypolipidemic effects are likewise related to the ability of BG to form a gel barrier in the intestine, delaying the absorption of carbohydrates and lipids in enterocytes and consequently reducing cholesterol [59].
With the formation of the gel, there is an increase in fecal bulk viscosity that prolongs gastric emptying [60], increasing the water layer with a consequent decrease in cholesterol uptake in the intestine [61] and greater elimination in the feces [62]. Among the main mechanisms of cholesterol reduction is decreased absorption of bile salts, owing to the ability of BG to adsorb these salts, reducing their reabsorption and return to the liver [63]. The reduction in hepatic cholesterol upregulates synthesis of the LDLc receptor, increasing hepatic uptake of LDL-c, and, through fermentation and SCFA production, negatively modulates the synthesis of 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMG-CoA reductase), the enzyme responsible for cholesterol synthesis [63]. A hypolipidemic effect of omega-3 was also demonstrated in the present study. This effect appears to be related to increased EPA and DHA in the hepatic membrane [64]. In addition, it has been suggested that EPA can act as a second messenger, likewise reducing HMG-CoA reductase [65], the same effect attributed to BG. Shinozaki et al. [66] found that a dose of 1800 mg/day of EPA significantly reduced TC, TG, and LDLc after 6–24 months. Lobato et al. [19] demonstrated a 32% reduction in TG concentration and a 41% reduction in ALT, demonstrating a hepatoprotective effect of BG. In addition, another study by our group showed a significant reduction in glucose, cholesterol, and TG levels in diabetic animals [59]. The reduction in blood glucose is directly related to the decrease in ALT, owing to reduced participation of this enzyme in the gluconeogenesis pathway [67]. In this study, BG did not affect HDLc. A previous meta-analysis was unable to determine whether dietary fiber intake was associated with HDLc metabolism [68]. However, it can be inferred that fiber intake reduced LDLc without reducing HDLc, which is a benefit [26].
This process may have occurred due to the high level of cholesterol in the bloodstream (due to diabetes) and through processes triggered by epinephrine and hydrolytic enzymes, generating proportional HDLc synthesis to carry free cholesterol to the liver for metabolism [69]. It is important that HDLc was not decreased by BG because, given its role in reverse cholesterol transport, it can suppress the accumulation of cholesterol in peripheral tissues [70], also aiding its systemic decrease. These results are consistent with previous research [71]. BG significantly reduced blood TNF-α levels in the present study, corroborating previous results [72], probably because of an improvement in the intestinal barrier. These results were similar to those of omega-3, for which EPA and DHA supplementation increases adiponectin and reduces TNF-α [73]. It is important to highlight the safety of BG for human and animal consumption. Yeast BG at different concentrations showed no adverse inflammatory, hematological, or toxicological effects in mice [74]. Several studies report the safety of oral BG consumption regardless of the source (oat, mushroom, or yeast) or the dose used [75, 76]. Although the present study has limitations, notably the absence of microbiota sequencing and molecular marker quantification, its results suggest potential for translation to humans. A randomised, double-blind, placebo-controlled clinical trial testing BG from Saccharomyces cerevisiae (3 g/day for 12 weeks) was performed in overweight and obese subjects. The results indicated that daily supplementation was useful for improving body weight and waist circumference, without adverse effects [77]. A meta-analysis evaluating the effect of oat BG intake on glycaemic control in diabetic patients (including only randomized controlled trials) indicated that BG ingestion significantly lowered fasting plasma glucose and glycosylated hemoglobin (HbA1c) concentrations [78].
Another meta-analysis also showed that β-glucan extracted from oats was effective in decreasing fasting glucose and fasting insulin in type 2 diabetes and tended to lower HbA1c [79]. Consistent with our results, a further meta-analysis of clinical trials showed that β-glucan has a lowering effect on LDLc, non-HDLc, and apoB [26]. Although oat and yeast β-glucans have some chemical differences, a previous study showed that, as long as the purity of the β-glucan is high, there is no difference among β-glucan sources [43, 80]. The broader definition of prebiotics, which includes non-carbohydrate sources [12], opened space for other substances to join this concept. Several substances, such as polydextrose (PDX), xylo-oligosaccharides (XOS), pectic-oligosaccharides (POS), gluco-oligosaccharides, malto-oligosaccharides, isomalto-oligosaccharides (IMO), soya-oligosaccharides (SOS), fenugreek, gold-based nanomaterials, selenium compounds, and nanoceria, can be considered candidates [13, 81]. In this sense, even omega-3 can be considered a candidate under this definition. Our results highlight the improvement of important metabolic parameters, such as blood glucose levels and the lipoprotein profile, in a dose-dependent manner after consumption of yeast BG. These outcomes add new information regarding potential preventive and therapeutic care for type 1 diabetes, which is often difficult to control in the clinical setting. The future of prebiotic research will probably include more studies focusing on the specificity of prebiotics for intestinal bacteria [13]. Clinical trials investigating the role of yeast BG in both type 1 and type 2 diabetic patients are necessary to establish optimal clinical protocols for general care. Dietary fiber ingestion of 100–500 mg/day has been reported as safe [18]. BGs may be consumed in different formulations, such as breakfast cereals or baked goods [22].
The consumption of Saccharomyces cerevisiae BGs demonstrated promising effects in the treatment of DM, improving metabolic parameters (reducing blood glucose, total cholesterol, LDLc, and TG) in addition to a hepatoprotective effect reflected in reductions in ALT and AST. It also significantly reduced blood levels of TNF-α. The optimal estimated dose for the observed benefits in diabetic rats was around 30 mg/kg/day. In general, BG effects were better than (or at least comparable to) those of the n-3 supplement, depending on the dose.

Atkinson MA, Eisenbarth GS, Michels AW. Type 1 diabetes. The Lancet. 2014;383:69–82. Hardy OT, Czech MP, Corvera S. What causes the insulin resistance underlying obesity? Curr Opin Endocrinol Diabetes Obesity. 2012;19:81–7. Al-Maskari AY, Al-Maskari MY, Al-Sudairy S. Oral manifestations and complications of diabetes mellitus: a review. Sultan Qaboos Univ Med J. 2011;11:179–86. de Ferranti SD, de Boer IH, Fonseca V, Fox CS, Golden SH, Lavie CJ, et al. Type 1 diabetes mellitus and cardiovascular disease. Circulation. 2014;130:1110–30. Semenkovich K, Brown ME, Svrakic DM, Lustman PJ. Depression in type 2 diabetes mellitus: prevalence, impact, and treatment. Drugs. 2015;75:577–87. Cho NH, Shaw JE, Karuranga S, Huang Y, da Rocha Fernandes JD, Ohlrogge AW, et al. IDF Diabetes Atlas: global estimates of diabetes prevalence for 2017 and projections for 2045. Diabetes Res Clin Pract. 2018;138:271–81. Levesque C. Therapeutic lifestyle changes for diabetes mellitus. Nurs Clin North Am. 2017;52:679–92. Tanabe M, Motonaga R, Terawaki Y, Nomiyama T, Yanase T. Prescription of oral hypoglycemic agents for patients with type 2 diabetes mellitus: a retrospective cohort study using a Japanese hospital database. J Diabetes Investigat. 2017;8:227–34. Hatz K, Elisabeth Minder A, Lehmann R, Drescher T, Gerendas B, Schmidt-Erfurth U, et al.
The prevalence of retinopathy in patients with type 1 diabetes treated with education-based intensified insulin therapy and its association with parameters of glucose control. Diabetes Res Clin Pract. 2019;148:234–9. Frid A, Tura A, Pacini G, Ridderstråle M. Effect of oral pre-meal administration of betaglucans on glycaemic control and variability in subjects with type 1 diabetes. Nutrients. 2017;9:1004. Alkhatib A, Tsang C, Tiss A, Bahorun T, Arefanian H, Barake R, et al. Functional foods and lifestyle approaches for diabetes prevention and management. Nutrients. 2017;9:1310. Gibson GR, Hutkins RW, Sanders ME, Prescott SL, Reimer RA, Gibson GR, et al. The International Scientific Association for Probiotics and Prebiotics (ISAPP) consensus statement on the definition and scope of prebiotics. Med Biochem Comm. 2017;14:491–502. Wang S, Xiao Y, Tian F, Zhao J, Zhang H, Zhai Q, et al. Rational use of prebiotics for gut microbiota alterations: specific bacterial phylotypes and related mechanisms. J Funct Foods. 2020;66:103838. Costantini L, Molinari R, Farinon B, Merendino N. Impact of omega-3 fatty acids on the gut microbiota. Int J Mol Sci. 2017;18:2645. Parolini C. Effects of fish n-3 PUFAs on intestinal microbiota and immune system. Marine Drugs. 2019;17:374. Kim YA, Keogh JB, Clifton PM. Probiotics, prebiotics, synbiotics and insulin sensitivity. Nutr Res Rev. 2018;31:35–51. Thompson IJ, Oyston PCF, Williamson DE. Potential of the β-glucans to enhance innate resistance to biological agents. Expert Rev Anti-Infect Therapy. 2010;8:339–52. Vetvicka V, Vannucci L, Sima P, Richter J. Beta glucan: supplement or drug? From laboratory to clinical trials. Molecules. 2019;24:1251. Vieira Lobato R, De Oliveira Silva V, Francelino Andrade E, Ribeiro Orlando D, Gilberto Zangeronimo M, Vicente de Sousa R, et al.
Metabolic effects of Β-Glucans (Saccharomyces cerevisiae) Per Os administration in rats with streptozotocin-induced diabetes. Nutrición Hospitalaria. 2015;32:256–64. de O. Silva V, Lobato R, Andrade E, Orlando D, Borges B, Zangeronimo M, et al. Effects of β-Glucans ingestion on alveolar bone loss, intestinal morphology, systemic inflammatory profile, and pancreatic β-Cell function in rats with periodontitis and diabetes. Nutrients. 2017;9:1016. Andrade E, Lima A, Nunes I, Orlando D, Gondim P, Zangeronimo M, et al. Exercise and beta-glucan consumption (Saccharomyces cerevisiae) improve the metabolic profile and reduce the atherogenic index in type 2 diabetic rats (HFD/STZ). Nutrients. 2016;8:792. Francelino Andrade E, Vieira Lobato R, Vasques Araújo T, Gilberto Zangerônimo M, Vicente Sousa R, José Pereira L. Effect of beta-glucans in the control of blood glucose levels of diabetic patients: a systematic review. Nutr Hosp. 2014;31:170–7. de Araújo TV, Andrade EF, Lobato RV, Orlando DR, Gomes NF, de Sousa RV, et al. Effects of beta-glucans ingestion (Saccharomyces cerevisiae) on metabolism of rats receiving high-fat diet. J Animal Physiol Animal Nutrit. 2017;101:349–58. Tosh SM. Review of human studies investigating the post-prandial blood-glucose lowering ability of oat and barley food products. Eur J Clin Nutr. 2013;67:310–7. Kim SY, Song HJ, Lee YY, Cho K-H, Roh YK. Biomedical issues of dietary fiber beta-glucan. J Korean Med Sci. 2006;21:781–9. Ho HVTT, Sievenpiper JL, Zurbau A, Blanco Mejia S, Jovanovski E, Au-Yeung F, et al. The effect of oat β-glucan on LDL-cholesterol, non-HDL-cholesterol and apoB for CVD risk reduction: a systematic review and meta-analysis of randomised-controlled trials. Br J Nutr. 2016;116:1369–82. Whitehead A, Beck EJ, Tosh S, Wolever TM. Cholesterol-lowering effects of oat β-glucan: a meta-analysis of randomized controlled trials. Am J Clin Nutrit. 2014;100:1413–21. Shan M, Gentile M, Yeiser JR, Walland AC, Bornstein VU, Chen K, et al. 
Mucus enhances gut homeostasis and oral tolerance by delivering immunoregulatory signals. Science. 2013;342:447–53. Verspreet J, Damen B, Broekaert WF, Verbeke K, Delcour JA, Courtin CM. A critical look at prebiotics within the dietary fiber concept. Annu Rev Food Sci Technol. 2016;7:167–90. Cugnet-Anceau C, Nazare J-A, Biorklund M, Le Coquil E, Sassolas A, Sothier M, et al. A controlled study of consumption of β-glucan-enriched soups for 2 months by type 2 diabetic free-living subjects. Br J Nutr. 2010;103:422. Silva FM, Kramer CK, de Almeida JC, Steemburgo T, Gross JL, Azevedo MJ. Fiber intake and glycemic control in patients with type 2 diabetes mellitus: a systematic review with meta-analysis of randomized controlled trials. Nutr Rev. 2013;71:790–801. Bowers GJ, Patchen ML, MacVittie TJ, Hirsch EF, Fink MP. A comparative evaluation of particulate and soluble glucan in an endotoxin model. Int J Immunopharmacol. 1986;8:313–21. de la Garza-Rodea AS, Knaän-Shanzer S, den Hartigh JD, Verhaegen APL, van Bekkum DW. Anomer-equilibrated streptozotocin solution for the induction of experimental diabetes in mice (Mus musculus). J Am Assoc Lab Animal Sci. 2010;49:40–4. Coskun O, Kanter M, Korkmaz A, Oter S. Quercetin, a flavonoid antioxidant, prevents and protects streptozotocin-induced oxidative stress and β-cell damage in rat pancreas. Pharmacol Res. 2005;51:117–23. Sharma AK, Bharti S, Kumar R, Krishnamurthy B, Bhatia J, Kumari S, et al. Syzygium cumini ameliorates insulin resistance and β-cell dysfunction via modulation of PPAR, dyslipidemia, oxidative stress, and TNF-α in type 2 diabetic rats. J Pharmacol Sci. 2012;119:205–13. Friedewald WT, Levy RI, Fredrickson DS. Estimation of the concentration of low-density lipoprotein cholesterol in plasma, without use of the preparative ultracentrifuge. Clin Chem. 1972;18:499–502. Rebello CJ, Burton J, Heiman M, Greenway FL.
Gastrointestinal microbiome modulator improves glucose tolerance in overweight and obese subjects: a randomized controlled pilot trial. J Diabetes Complicat. 2015;29:1272–6. Vitaglione P, Lumaga RB, Stanzione A, Scalfi L, Fogliano V. β-Glucan-enriched bread reduces energy intake and modifies plasma ghrelin and peptide YY concentrations in the short term. Appetite. 2009;53:338–44. Johansen HN, Knudsen KE, Sandström B, Skjøth F. Effects of varying content of soluble dietary fibre from wheat flour and oat milling fractions on gastric emptying in pigs. Br J Nutr. 1996;75:339–51. Müller M, Canfora EE, Blaak EE. Gastrointestinal Transit Time, Glucose Homeostasis and Metabolic Health: modulation by Dietary Fibers. Nutrients. 2018;10:275. Ma S, Ge Y, Gai X, Xue M, Li N, Kang J, et al. Transgenic n-3 PUFAs enrichment leads to weight loss via modulating neuropeptides in hypothalamus. Neurosci Lett. 2016;611:28–32. Bashir KMI, Choi JS. Clinical and physiological perspectives of β-Glucans: the past, present, and future. Int J Mol Sci. 2017;18:1906. Cao Y, Sun Y, Zou S, Li M, Xu X. Orally administered Baker's Yeast β-glucan promotes glucose and lipid homeostasis in the livers of obesity and diabetes model mice. J Agric Food Chem. 2017;65:9665–74. Abbasi NN, Purslow PP, Tosh SM, Bakovic M. Oat β-glucan depresses SGLT1- and GLUT2-mediated glucose transport in intestinal epithelial cells (IEC-6). Nutr Res. 2016;36:541–52. Hersoug LG, Møller P, Loft S. Gut microbiota-derived lipopolysaccharide uptake and trafficking to adipose tissue: implications for inflammation and obesity. Obesity Rev. 2016;17:297–312. Virtue AT, McCright SJ, Wright JM, Jimenez MT, Mowel WK, Kotzin JJ, et al. The gut microbiota regulates white adipose tissue inflammation and obesity via a family of microRNAs. Sci Translat Med. 2019;11:1892. Hu S, Kuwabara R, de Haan BJ, Smink AM, de Vos P. Acetate and butyrate improve β-cell metabolism and mitochondrial respiration under oxidative stress. Int J Mol Sci. 
2020;21:1542. Tolhurst G, Heffron H, Lam YS, Parker HE, Habib AM, Diakogiannaki E, et al. Short-chain fatty acids stimulate glucagon-like peptide-1 secretion via the g-protein-coupled receptor FFAR2. Diabetes. 2012;61:364–71. Hellström PM, Näslund E, Edholm T, Schmidt PT, Kristensen J, Theodorsson E, et al. GLP-1 suppresses gastrointestinal motility and inhibits the migrating motor complex in healthy subjects and patients with irritable bowel syndrome. Neurogastroenterol Motil. 2008;20:649–59. Cani PD, Possemiers S, Van De Wiele T, Guiot Y, Everard A, Rottier O, et al. Changes in gut microbiota control inflammation in obese mice through a mechanism involving GLP-2-driven improvement of gut permeability. Gut. 2009;58:1091–103. Hare KJ, Knop FK, Asmar M, Madsbad S, Deacon CF, Holst JJ, et al. Preserved inhibitory potency of GLP-1 on glucagon secretion in type 2 diabetes mellitus. J Clin Endocrinol Metabol. 2009;94:4679–87. Stahel P, Xiao C, Nahmias A, Lewis GF. Role of the gut in diabetic dyslipidemia. Front Endocrinol. 2020;11:116. Bi X, Li F, Liu S, Jin Y, Zhang X, Yang T, et al. ω-3 polyunsaturated fatty acids ameliorate type 1 diabetes and autoimmunity. J Clin Investigat. 2017;127:1757–71. Miranda-Nantes CCBO, Fonseca EAI, Zaia CTBV, Dekker RFH, Khaper N, Castro IA, et al. Hypoglycemic and Hypocholesterolemic effects of Botryosphaeran from Botryosphaeria rhodina MAMB-05 in diabetes-induced and hyperlipidemia conditions in rats. Mycobiology. 2011;39:187–93. De Caterina R, Madonna R, Bertolotto A, Schmidt EB. n-3 fatty acids in the treatment of diabetic patients: biological rationale and clinical data. Diabetes Care. 2007;30:1012–26. Poreba M, Mostowik M, Siniarski A, Golebiowska-Wiatrak R, Malinowski KP, Haberka M, et al. Treatment with high-dose n-3 PUFAs has no effect on platelet function, coagulation, metabolic status or inflammation in patients with atherosclerosis and type 2 diabetes. Cardiovasc Diabetol. 2017;16:50. 
Stacpoole PW, Alig J, Ammon L, Crockett SE. Dose-response effects of dietary marine oil on carbohydrate and lipid metabolism in normal subjects and patients with hypertriglyceridemia. Metabolism Clin Exp. 1989;38:946–56. Wang X, Chan CB. n-3 polyunsaturated fatty acids and insulin secretion. J Endocrinol. 2015;224:R97–106. Silva de O V, Lobato RV, Andrade EF, de Macedo CG, Napimoga JTC, Napimoga MH, et al. β-Glucans (Saccharomyces cereviseae) reduce glucose levels and attenuate alveolar bone loss in diabetic rats with periodontal disease. Plos ONE. 2015;10:e0134742. Jenkins DJ, Kendall CW, Axelsen M, Augustin LS, Vuksan V. Viscous and nonviscous fibres, nonabsorbable and low glycaemic index carbohydrates, blood lipids and coronary heart disease. Curr Opin Lipidol. 2000;11:49–56. Gee JM, Blackburn NA, Johnson IT. The influence of guar gum on intestinal cholesterol transport in the rat. Br J Nutrit. 1983;50:215–24. El Khoury D, Cuda C, Luhovyy BL, Anderson GH. Beta glucan: health benefits in obesity and metabolic syndrome. J Nutr Metabol. 2012;2012:851362. Chen J, Huang X-F. The effects of diets enriched in beta-glucans on blood lipoprotein concentrations. J Clin Lipidol. 2009;3:154–8. Kim HK, Choi H. Dietary alpha-linolenic acid lowers postprandial lipid levels with increase of eicosapentaenoic and docosahexaenoic acid contents in rat hepatic membrane. Lipids. 2001;36:1331–6. Das UN. Beneficial effect(s) of n-3 fatty acids in cardiovascular diseases: but, why and how? Prostaglandins Leukotrienes Essential Fatty Acids (PLEFA). 2000;63:351–62. Shinozaki K, Kambayashi J, Kawasaki T, Uemura Y, Sakon M, Shiba E, et al. The long-term effect of eicosapentaenoic acid on serum levels of lipoprotein (a) and lipids in patients with vascular disease. J Atherosclerosis Thrombosis. 1996;2:107–9. Wang B, Smyl C, Chen C-Y, Li X-Y, Huang W, Zhang H-M, et al. 
Suppression of postprandial blood glucose fluctuations by a low-carbohydrate, high-protein, and high-omega-3 diet via inhibition of gluconeogenesis. Int J Mol Sci. 2018;19:1823. Yanai H, Katsuyama H, Hamasaki H, Abe S, Tada N, Sako A. Effects of carbohydrate and dietary fiber intake, glycemic index and glycemic load on HDL metabolism in Asian populations. J Clin Med Res. 2014;6:321–6. Wang Y, Xu D. Effects of aerobic exercise on lipids and lipoproteins. Lipids Health Dis. 2017;16:132. Zhou Q, Wu J, Tang J, Wang J-J, Lu C-H, Wang P-X. Beneficial effect of higher dietary fiber intake on plasma HDL-C and TC/HDL-C ratio among Chinese rural-to-urban migrant workers. Int J Environ Res Public Health. 2015;12:4726–38. Drozdowski LA, Reimer RA, Temelli F, Bell RC, Vasanthan T, Thomson ABR. β-Glucan extracts inhibit the in vitro intestinal uptake of long-chain fatty acids and cholesterol and down-regulate genes involved in lipogenesis and lipid transport in rats. J Nutr Biochem. 2010;21:695–701. Cao Y, Sun Y, Zou S, Duan B, Sun M, Xu X. Yeast β-glucan suppresses the chronic inflammation and improves the microenvironment in adipose tissues of ob/ob mice. J Agric Food Chem. 2018;66:621–9. Becic T, Studenik C. Effects of omega-3 supplementation on adipocytokines in prediabetes and type 2 diabetes mellitus: systematic review and meta-analysis of randomized controlled trials. Diabetes Metabol J. 2018;42:101. Delaney B, Carlson T, Zheng GH, Hess R, Knutson N, Frazer S, et al. Repeated dose oral toxicological evaluation of concentrated barley beta-glucan in CD-1 mice including a recovery phase. Food Chem Toxicol. 2003;41:1089–102. Chen SN, Chang CS, Chen S, Soni M. Subchronic toxicity and genotoxicity studies of Antrodia mushroom β-glucan preparation. Regul Toxicol Pharmacol. 2018;92:429–38. Túrmina J, Carraro E, Alves da Cunha M, Dekker R, Barbosa A, dos Santos F, et al. Toxicological assessment of β-(1→6)-glucan (Lasiodiplodan) in mice during a 28-day feeding study by gavage.
Molecules. 2012;17:14298–309. Santas J, Lázaro E, Cuñé J. Effect of a polysaccharide-rich hydrolysate from Saccharomyces cerevisiae (LipiGo®) in body weight loss: randomised, double-blind, placebo-controlled clinical trial in overweight and obese adults. J Sci Food Agric. 2017;97:4250–7. Shen XL, Zhao T, Zhou Y, Shi X, Zou Y, Zhao G. Effect of oat β-glucan intake on glycaemic control and insulin sensitivity of diabetic patients: a meta-analysis of randomized controlled trials. Nutrients. 2016;8:39. He LX, Zhao J, Huang YS, Li Y. The difference between oats and beta-glucan extract intake in the management of HbA1c, fasting glucose and insulin sensitivity: a meta-analysis of randomized controlled trials. Food Funct. 2016;7:1413–28. Korolenko TA, Bgatova NP, Ovsyukova MV, Shintyapina A, Vetvicka V. Hypolipidemic effects of β-glucans, mannans, and fucoidans: mechanism of action and their prospects for clinical application. Molecules. 2020;25:1819. Bubnov R, Babenko L, Lazarenko L, Kryvtsova M, Shcherbakov O, Zholobak N, et al. Can tailored nanoceria act as a prebiotic? Report on improved lipid profile and gut microbiota in obese mice. EPMA J. 2019;10:317–35. The authors acknowledge the Department of Veterinary Medicine of the Universidade Federal de Lavras–UFLA for their assistance in the animal laboratory and analyses. This study was supported by the National Council for Scientific and Technological Development (Conselho Nacional de Desenvolvimento Científico e Tecnológico—CNPq), the Research Support Foundation of the State of Minas Gerais (Fundação de Amparo à Pesquisa do Estado de Minas Gerais—FAPEMIG) and the Coordination for the Improvement of Higher Education Personnel (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—CAPES).
Departamento de Ciências da Saúde – DSA, Universidade Federal de Lavras – UFLA, 3037, Lavras, 37200-000, Brazil: Janina de Sales Guilarducci, Breno Augusto Ribeiro Marcelino, Isaac Filipe Moreira Konig, Tamira Maria Orlando, Mary Suzan Varaschin & Luciano José Pereira

Departamento de Medicina Veterinária – DMV, Universidade Federal de Lavras – UFLA, 3037, Lavras, 37200-000, Brazil: Mary Suzan Varaschin

Author contributions: JSG: animal experiments, data curation, writing (original draft). BM: animal experiments. MSV and IK: laboratory analysis. LJP: conceptualization, writing (review and editing), supervision. All authors read and approved the final manuscript.

Correspondence to Luciano José Pereira.

The study was approved by the Ethics Committee on Animal Use (CEUA) of the Universidade Federal de Lavras—UFLA under protocol number 082/17.

Cite this article: de Sales Guilarducci, J., Marcelino, B.A.R., Konig, I.F.M. et al. Therapeutic effects of different doses of prebiotic (isolated from Saccharomyces cerevisiae) in comparison to n-3 supplement on glycemic control, lipid profiles and immunological response in diabetic rats. Diabetol Metab Syndr 12, 69 (2020). https://doi.org/10.1186/s13098-020-00576-6

Keywords: Dietary fibers, Beta-glucans
Malaria impact of large dams in sub-Saharan Africa: maps, estimates and predictions

Solomon Kibret1, Jonathan Lautze2, Matthew McCartney3, G. Glenn Wilson1 & Luxon Nhamo2

While there is growing recognition of the malaria impacts of large dams in sub-Saharan Africa, the cumulative malaria impact of reservoirs associated with current and future dam developments has not been quantified. The objective of this study was to estimate the current and predict the future impact of large dams on malaria in different eco-epidemiological settings across sub-Saharan Africa. The locations of 1268 existing and 78 planned large dams in sub-Saharan Africa were mapped against the malaria stability index (stable, unstable and no malaria). The Plasmodium falciparum infection rate (PfIR) was determined for populations at different distances (<1, 1–2, 2–5, 5–9 km) from the associated reservoirs using the Malaria Atlas Project (MAP) and WorldPop databases. Results derived from MAP were verified by comparison with the results of detailed epidemiological studies conducted at 11 dams. Of the 1268 existing dams, 723 are located in malarious areas. Currently, about 15 million people live in close proximity (<5 km) to the reservoirs associated with these dams. A total of 1.1 million malaria cases annually are associated with them: 919,000 cases due to the presence of 416 dams in areas of unstable transmission and 204,000 cases due to the presence of 307 dams in areas of stable transmission. Of the 78 planned dams, 60 will be located in malarious areas and these will create an additional 56,000 cases annually. The variation in annual PfIR in communities as a function of distance from reservoirs was statistically significant in areas of unstable transmission but not in areas of stable transmission. In sub-Saharan Africa, dams contribute significantly to malaria risk, particularly in areas of unstable transmission.
Additional malaria control measures are thus required to reduce the impact of dams on malaria. Construction of large dams—water infrastructure with a crest height greater than 15 m, or a storage capacity exceeding 3 million cu m for heights between 5 and 15 m [1]—has been widely recognized as a key factor in promoting economic growth, ensuring food security, alleviating poverty, and increasing resilience in the face of climate variability and change in sub-Saharan Africa (SSA) [2–4]. The World Bank has applied language such as 'infrastructure gap' to the paucity of the continent's dams and water storage capacity [2, 5], and the African Ministers Council on Water declared that Africa is "held hostage" by its hydrology due to the deficit of water infrastructure [6]. Encouraged by the increased volume of international aid for water resource development, SSA has, in recent years, experienced a new era of large dam construction to address pressing challenges related to food security and increasing demands for economic development [7]. Environmental modification such as dam construction has long been recognized to enhance malaria transmission, a disease that globally claims an estimated 627,000 lives each year, 90 % of which are in SSA [8]. In Africa, increased malaria incidence following dam construction has been reported around the Bamendjin Dam in Cameroon [9], the Kamburu Dam in Kenya [10], the Koka reservoir in central Ethiopia [11, 12], the Gilgel Gibe Dam in southwest Ethiopia [13], the Manyuchi Dam in Zimbabwe [14], and the Akosombo Dam in Ghana [15]. While a cursory review may lead to the presumption that dams' impacts on malaria are important and negative, there is evidence that the dynamics of malaria transmission around reservoirs may be more complicated. For example, no increase in malaria was reported around the Manantali Dam in Mali [16] and the Foum Glaita Dam of Mauritania [17]. 
Lack of a malaria impact around these two dams appears mainly due to the replacement of the most efficient vector (Anopheles gambiae sensu stricto and Anopheles funestus) by less anthropophilic species (Anopheles arabiensis and Anopheles pharoensis) that use the shoreline environment as breeding habitat [16, 17]. Despite the growing evidence pointing to a potentially major cumulative effect of dams on the malaria burden of SSA, and the important nuance on variation in dam impacts on malaria in diverse eco-epidemiological settings, neither issue has been systematically investigated. Keiser et al. [18] reviewed the literature and found that 3.1 million people are at risk of malaria due to large dams in SSA, but stopped short of determining the aggregated contribution of dams to malaria burden across the region. Another study [19] examined how the effects of environmental management could offset malaria transmission in different eco-epidemiological settings; these authors nonetheless did not rigorously explore how malaria impacts differ between alternative eco-epidemiological contexts. The present study investigated the impact of current and planned large dams on malaria across SSA. Dams with georeferenced locations were mapped in relation to areas of stable and unstable malaria transmission. The population at risk of malaria in different epidemiological settings was estimated and the difference in malaria infection rate at different distances from the reservoirs analysed. Finally, the contribution of these dams to the malaria burden in the region was determined. This study focused on SSA—geographically, the area of the African continent that lies south of the Sahara Desert. It consists of all African countries except Morocco, Algeria, Tunisia, Libya, and Egypt [20]. This region accounts for the greatest burden of malaria in the world where Plasmodium falciparum, the most severe of the malaria parasite species that infect humans, is predominant. 
Annually, an estimated 174 million cases occur in this region [8]. Malaria transmission is generally stable in western and central Africa, unstable in much of eastern Africa, and unstable to absent in southern Africa [21] (Fig. 1).

Fig. 1 Map showing the spatial distribution of existing and planned dams in Africa with respect to the 2010 malaria stability index (E number of existing dams, P number of planned dams) (adapted from Kibret et al. [22])

The present study used available databases to quantify the impact of dams on malaria. Dam databases were used to collect information on the number and location of dams across SSA. Population and malaria prevalence databases were used to estimate the population at risk around dams in different eco-epidemiological settings of SSA.

Data on existing and planned African dams

To identify and locate existing and planned dams in SSA, georeferenced locations of individual dams (both existing and planned) were obtained from the FAO African dams database [23] and the International Rivers database [24]. Data on water storage capacity, dam height and reservoir surface area were obtained from the ICOLD World Register of Dams [25] and the Global Reservoirs and Dams (GRanD) database [26]. Locations and parameters of additional dams were obtained from a number of journal articles, project reports and dissertations. Overall, georeferenced locations and dam parameters were gathered for a total of 1268 existing dams (out of an estimated total of over 2000 [7]) and 78 planned dams (out of an estimated total of 150 [24]) in SSA. A planned dam was defined as a dam currently under construction or planned for construction in the next 5 years. While the number of existing and planned dams for which locations could be found was below the known total of each, the set of dams mapped for this study is the most extensive yet utilized in an analysis of the malaria impacts of dams in SSA.
Estimating reservoir perimeters

Data on reservoir perimeter are necessary to estimate the population at risk of malaria due to a dam. However, these data are not readily available for most dams. Reservoir perimeter was therefore estimated using a method proposed by Lehner et al. [27] and Keiser et al. [18]. First, it was assumed that reservoirs have a rectangular shape [18]. The length of the reservoir (LR) was calculated for each dam as LR = A/LD, where A is the surface area of the reservoir and LD the length of the dam. A and LD were obtained from the World Register of Dams and the FAO database, respectively. The perimeter of the reservoir was then estimated as 2LR + 2LD. For each dam, the calculation was based on the reservoir's maximum water storage (i.e., the reservoir at full supply level, when the surface area is at a maximum).

The 'rectangular' reservoir shape assumption was validated using known reservoir perimeters from the literature. The actual shapes of 11 reservoirs, derived from the literature review, indicated that dams with reservoir size >1000 sq km had a much longer reservoir length than dam length (median LR/LD = 65.4), while dams with reservoir size <100 sq km had more comparable dam and reservoir lengths (median LR/LD = 4.2). This supports the 'rectangular' reservoir shape approach of the present study, which assumed a much greater length than width. It is also recognized that in many cases the reservoir water level varies substantially throughout the year, changing both the perimeter of the reservoir shoreline and the relative distance of the shoreline to communities. However, data on fluctuations in reservoir water levels are not generally available, so it was not possible to make allowance for any temporal variability in reservoir surface area.

Data on malaria transmission stability

The Gething et al. [21] classification was utilized to characterize the epidemiological settings in which the dams were located.
This is defined as:

Stable transmission in areas with annual P. falciparum infection rates (PfIR) greater than 0.1 cases per 1000 population;

Unstable transmission in areas with annual PfIR between 0 and 0.1 cases per 1000 population;

No malaria in areas having zero annual PfIR.

Data on malaria transmission

The Malaria Atlas Project (MAP) database was used to produce annual predictions of spatial PfIR at high resolution (1 × 1 km grid) [21]. MAP is an initiative founded in 2005 to generate new and innovative methods of mapping malaria risk, and has continuously updated georeferenced PfIR surveys since 2005. The updated version, completed on 1 June 2010, consisted of 22,212 quality-checked and spatiotemporally unique malaria prevalence survey data points. The 2010 dataset was used to determine annual PfIRs for populations at different distances from reservoirs in areas of stable and unstable transmission [21]. All dams classified as 'existing' in this study were commissioned before 2010. Additional data were obtained from literature review and the World Health Organization [28].

A systematic review of the peer-reviewed literature, dissertations and technical reports was carried out, with an emphasis on published research findings from assessments of the impact of large dams on malaria transmission. Articles were searched mostly through PubMed using combinations of keywords such as 'malaria', 'Anopheles vector', 'dams', 'mosquito breeding', 'reservoir shoreline' and 'sub-Saharan Africa'. Relevant references cited by each reviewed study were also examined. Pertinent book chapters and websites (e.g., 27) were also consulted.
Two types of studies were included: (1) those that assessed epidemiological (malaria prevalence or incidence) and/or entomological (malaria mosquito bionomics, density and vectorial capacity) variables before and after the construction of a dam; and (2) those that compared dam/reservoir villages with non-dam/reservoir settings of similar social and eco-epidemiological character. Studies without a control comparison design were excluded from this review, to strengthen causal attribution of changes in malaria transmission in nearby villages to environmental factors. A total of 17 studies showing the effects of 11 large dams on malaria incidence and/or vector breeding in SSA were found. The impact of dams on malaria was analysed in relation to areas of stable and unstable transmission.

Mapping dams and malaria

The distribution of existing and planned dams was overlaid on the malaria stability index map using ArcGIS, and the number of large dams in each malaria stability category (stable, unstable and no malaria) was determined. ArcGIS was used to produce all the maps and for population estimates.

Estimating the population at risk around dams

To estimate the population at risk at different distances from a dam and its associated reservoir, the high-resolution (1 × 1 km) Worldpop Project population distribution database [29] was used. Population at risk was estimated as all persons living within 5 km of the reservoir, upstream of the dam. The impact of a dam on malaria was assumed to be negligible beyond 5 km due to mosquitoes' limited flight range [30].

Malaria incidence around dams

Using the MAP database, annual PfIR was computed for four distance cohorts (<1, 1–2, 2–5 and 5–9 km). These cohorts lie within the same climatic region at each dam site.
The 5–9 km cohort was taken as the control group for each dam, the assumption being that malaria incidence in this zone equated to what would have occurred in the other distance cohorts had the dam not been built. The odds ratio (OR), i.e., the ratio of malaria in each cohort relative to the control, was calculated to compare the PfIR among the cohorts. The annual number of malaria cases for each cohort was calculated by multiplying PfIR by the population present in that cohort. Since population density varies among the cohorts, the difference in PfIR between each at-risk cohort (<1, 1–2 and 2–5 km) and the control cohort (5–9 km) was tested as follows [31]:

$$z = \frac{(\hat{\rho}_{1} - \hat{\rho}_{2}) - 0}{\sqrt{\hat{\rho}(1 - \hat{\rho})\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}$$

where \(\hat{\rho}_{1}\) is the PfIR (in per cent) in the at-risk cohort, \(\hat{\rho}_{2}\) is the PfIR (in per cent) in the control cohort, \(\hat{\rho}\) is the pooled proportion of \(\hat{\rho}_{1}\) and \(\hat{\rho}_{2}\), n1 and n2 are the population sizes of the at-risk and control cohorts, respectively, and z is a value on the Z-distribution.

Determining the increased cases associated with dams

The annual number of malaria cases associated with current and future dams was determined for unstable and stable transmission areas. The number of annual malaria cases attributable to dams was estimated as the difference between the number of annual cases in communities less than 5 km from the reservoir and the number expected if those communities experienced the control rate, allowing for differences in population size: the annual PfIR calculated for the 5–9 km cohort was applied to the population of the <5-km cohorts.
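The cohort comparison and excess-case calculation described in this section can be sketched as follows. This is a minimal illustration assuming the standard pooled-proportion form of the two-proportion z-test; all cohort counts below are hypothetical, not figures from the study:

```python
from math import sqrt

def two_proportion_z(cases_risk, n_risk, cases_ctrl, n_ctrl):
    """z-statistic comparing infection rates in an at-risk cohort
    against the 5-9 km control cohort."""
    p1, p2 = cases_risk / n_risk, cases_ctrl / n_ctrl
    # Pooled proportion under the null hypothesis of equal rates
    p = (cases_risk + cases_ctrl) / (n_risk + n_ctrl)
    return (p1 - p2) / sqrt(p * (1 - p) * (1 / n_risk + 1 / n_ctrl))

def excess_cases(cases_risk, n_risk, cases_ctrl, n_ctrl):
    """Annual cases attributable to the dam: observed cases in the <5 km
    zone minus the cases expected if that population experienced the
    control (5-9 km) infection rate."""
    return cases_risk - (cases_ctrl / n_ctrl) * n_risk

# Hypothetical cohorts: 140 cases among 1000 people within 5 km of a
# reservoir, 90 cases among 1200 people in the 5-9 km control zone.
z = two_proportion_z(140, 1000, 90, 1200)
extra = excess_cases(140, 1000, 90, 1200)  # 140 - 75 expected = 65 cases
```

The sign of z simply reflects the order in which the two cohorts are entered; its magnitude is what is compared against the critical value at the chosen significance level.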
For the planned dams, the rate of malaria case increase between the at-risk (<5 km) and control (5–9 km) cohorts of the existing dams was used to predict the potential increase in malaria cases after dam construction, with differential rates for stable and unstable areas. No adjustment was made for the population growth that often accompanies dam construction.

Validating with evidence from literature

A total of 17 dam-malaria studies focused on 11 dams explored relationships between dams and malaria. For the 11 dams where literature is available, results from the MAP-based analysis were compared with those reported in the literature. Due to limitations in the results reported in the literature, it was not possible to determine malaria incidence for the four distance cohorts used in the MAP-based analyses. However, the data were sufficient to enable the range of malaria prevalence and OR to be calculated for those living close to (<3 km) and further away from (>3 km) dams in stable and unstable areas. For the MAP data, two distance groups were recreated [<3 and >3 km (3–6 km)] to enable comparison with the literature dataset. The ORs of malaria prevalence from MAP and the literature were compared using the Chi square test. Statistical analyses were done using SPSS version 22 (SPSS Inc, Chicago, IL, USA). The level of significance was set at the 95 % confidence level (P < 0.05).

Spatial distribution of dams in Africa

Dams are distributed across stable, unstable and no malaria transmission areas of SSA (Fig. 1). Among the 1268 existing dams, 33 % (n = 416), 24 % (n = 307) and 43 % (n = 545) are located in areas of unstable, stable and no malaria transmission, respectively. Dams in stable transmission areas are largely distributed across western Africa, while dams in unstable and no malaria areas are mainly located in southeast and southern Africa, respectively.
Of the planned dams, 65.4 % (n = 51) are located in areas of unstable transmission, 11.5 % (n = 9) in areas of stable transmission, and 23.1 % (n = 18) in areas without malaria.

Population at risk of malaria around dams

Approximately 20 million people in SSA (2 % of the total population) live within 5 km of the 1268 reservoirs investigated in this study (Table 1). Of these, 14.6 million (1.42 % of the total population of SSA) live in areas at risk of malaria: 6.4 million in areas of stable transmission and 8.2 million in areas of unstable transmission. In addition, approximately 442,000 people currently live within 5 km of the reservoirs associated with the 60 planned dams located in areas at risk of malaria transmission (i.e., either stable or unstable) (Table 2).

Table 1 Summary of the number of dams and people living around existing dams in stable, unstable and no malaria zones across sub-Saharan Africa

Table 2 Summary of the number of dams and people living around planned dams in stable, unstable and no malaria zones across sub-Saharan Africa

Malaria incidence around reservoirs derived from MAP

Malaria incidence in communities living closer to reservoirs was greater than in those living farther away (Table 3). In areas of unstable transmission, annual PfIR was greater in communities living <1, 1–2 and 2–5 km from reservoirs than in those living 5–9 km away. This difference was statistically significant in the <1 km (z = −9.842; P < 0.05) and 1–2 km (z = −6.513; P < 0.05) cohorts. In areas of stable transmission, the annual PfIR in people living <1, 1–2 and 2–5 km from reservoirs also appeared greater than for those living 5–9 km away. However, the differences in PfIR were not statistically significant among the cohorts (χ2 = 6.252; df = 3; P > 0.05).
Table 3 Mean Plasmodium falciparum infection rate (PfIR) in communities in the vicinity of dams in stable and unstable areas of sub-Saharan Africa

Annual number of malaria cases associated with large dams

In areas of unstable transmission, approximately 919,000 malaria cases per year were associated with the presence of the 416 dams. In areas of stable malaria transmission, 204,000 malaria cases per year were associated with the presence of the 307 dams (Table 4). Overall, allowing for differences in both the number of dams and population, the effect of dams on annual malaria cases was 3.5–4.5-fold higher in areas of unstable transmission than in areas of stable transmission. The data also suggest that, in areas of unstable transmission, malaria cases were on average 3.2 times greater in communities living close to existing reservoirs than in those living more than 5 km from them. Overall, the reservoirs investigated account for 0.6 % of the total malaria burden in SSA. However, in stable and unstable areas, reservoirs associated with large dams contribute on average 47 % of the malaria cases in communities living within 5 km of them.

Table 4 Estimates of annual malaria cases (using the MAP database) attributable to proximity to reservoirs (<5 km) in stable and unstable areas of sub-Saharan Africa

Additional malaria cases associated with planned dams

Completion of the 78 planned dams assessed here is expected to exacerbate the local malaria burden, particularly in areas of unstable transmission. Making no allowance for possible population change, the 60 planned dams located in areas with malaria will add approximately 45,000 cases per year in areas of unstable transmission and about 11,000 cases per year in areas of stable transmission (Table 4).

Malaria prevalence around dams derived from the literature

Previous studies generally confirm that large dams have a greater impact on malaria prevalence in areas of unstable transmission (Table 5).
Fourteen studies around dams in areas of unstable malaria transmission indicated that malaria prevalence in villages located <3 km from the dams was 2.3–19.9 times higher than in villages located >3 km from the dams. By comparison, analyses of the MAP database for the same dam communities indicated a 3.6–24.5 times greater malaria prevalence in the communities located close to reservoirs (Table 6). The differences in malaria prevalence (both as derived from the literature and from MAP) between those living close to (<3 km) and farther away from (>3 km) the reservoirs were statistically significant (Chi square test, P < 0.05). In stable areas, four studies indicated that malaria prevalence was 1.2–1.4 times higher in communities living within 3 km than in those living between 6 and 9 km from the reservoirs. By comparison, analyses of the MAP database indicated a 1.1–1.8-fold increase in malaria prevalence in the communities located close to the reservoirs (Table 6). In neither case, however, was the difference statistically significant in the stable areas. Overall, the estimated malaria prevalence using the MAP database was broadly consistent with that reported in the literature, increasing confidence that the results derived from the MAP analyses are reasonably valid.

Table 5 Documented malaria prevalence (from the literature vs the MAP database) around some African dams

Table 6 Summary of comparison of malaria prevalence around some African dams using data from the literature and the MAP database

The cumulative burden imposed by large dams is major

This study confirmed that dams generally intensify malaria transmission in SSA. The main findings are that the existing large dams investigated in this study increase the risk of malaria for close to 15 million people and contribute more than 1 million cases annually to the malaria burden of SSA.
The planned dams investigated in this study will increase the risk for an additional 400,000 people and add more than 50,000 cases annually, based on current population densities. This number may increase significantly after the commissioning of these dams, as past experience [38] indicates that people tend to migrate towards the shores of reservoirs for livelihood purposes (mainly agriculture). The contribution of these dams to the malaria burden of the region is thus substantial.

Estimates of dam-associated malaria impacts are conservative

The present study included a large proportion of existing dams in the analyses. Nonetheless, there are believed to be at least 800 additional large dams in SSA for which georeferenced data were not available, and these were not included. Assuming that these 800 dams have approximately the same distribution across areas of stable, unstable and no malaria as the 1268 dams that were mapped, a realistic estimate of the total number of cases attributable to existing large dams in SSA annually would be of the order of 1.8 million. This estimate is believed to be conservative because a large proportion of the mapped large dams (n = 502) are located in South Africa, where there is no malaria. This means that a correspondingly greater proportion of the unmapped existing dams lie outside South Africa, most likely in areas of either stable or unstable transmission. Similarly, a number of planned large dams were not included in the present study due to data limitations. The fact that approximately two-thirds of the planned dams are located in areas of unstable transmission is concerning, given their likely cumulative impacts in such areas. Incorporation of these dams into the analyses would undoubtedly render the figures provided above low estimates, as additional dams will further increase the malaria risk in the region.
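The extrapolation above follows directly from the figures reported in the Results, under the stated assumption that the unmapped dams share the mapped dams' distribution across transmission zones, so that attributable cases scale with the dam count. A sketch of that arithmetic:

```python
mapped_dams = 1268
unmapped_dams = 800                # estimated large dams without georeferenced data
cases_mapped = 919_000 + 204_000   # annual attributable cases: unstable + stable areas

# Scale the attributable cases by the ratio of total to mapped dams,
# assuming the unmapped dams are spread across stable/unstable/no-malaria
# zones in the same proportions as the mapped ones.
total_cases = cases_mapped * (mapped_dams + unmapped_dams) / mapped_dams
```

This yields on the order of 1.8 million cases per year, matching the estimate in the text.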
Magnitude of dam-associated malaria in SSA has been underestimated

The present study showed that the population at risk of malaria around dams is at least four times greater than previously estimated [18]. The large difference arises mainly because the present study used a more robust dataset and a larger number of dams. Comparison of the cumulative burden of malaria in SSA due to large dams with other studies is not possible, as no previous attempt has been made to quantify dam-associated malaria cases. The present study is the first of its kind to quantify the impact of a high proportion of large dams on malaria in SSA.

The impact of large dams is far more severe in areas of unstable transmission

The study confirmed previous assertions that the impacts of dams are much greater in areas of unstable transmission [18, 22]. A possible explanation is that malaria in stable areas is broadly continuous and reservoirs simply add to a wide array of existing breeding habitat available throughout the year. In contrast, in areas of unstable transmission, where malaria is seasonal, the availability of mosquito breeding habitat in the dry season is one of the limiting factors for transmission. Rainfall has been identified as a major determinant limiting malaria transmission to the wet season in semi-arid areas [39]. In such settings, reservoirs may effectively create conditions suitable for mosquito development, increasing humidity throughout the year and thereby increasing vector abundance at times of year when vectors would not normally be found. Furthermore, the present study indicated that the impact of dams on malaria in unstable areas could either intensify transmission or change its nature from seasonal to perennial. Further study is needed to better understand the ecological and entomological factors that lead to enhanced transmission in unstable areas.
How important are large dams' impacts on malaria among other anthropogenic changes?

This study indicated that there are at least 1.1 million malaria cases associated with current dams, and most likely at least an additional 56,000 cases associated with future dams, each year in SSA. Throughout SSA, other environmental modifications such as irrigation [40], deforestation [41], small dams [42] and urbanization [43] have been identified as major anthropogenic determinants of malaria, but the relative importance of these factors is unknown. More rigorous identification of the relative contribution of anthropogenic determinants to the total malaria burden could strengthen the allocation of resources to fight the disease.

Dam-associated malaria could challenge SSA's current struggle towards malaria elimination

Whilst Africa is currently recording success stories in reducing malaria and is even considering malaria elimination [44], extensive dam construction could confound malaria control efforts. Indeed, despite growing evidence of the impact of dams on malaria, there is scant evidence of their negative impacts being fully offset. The only documented example of systematic malaria control around a large dam comes from the Tennessee Valley Authority in the USA over 60 years ago [45]. Ultimately, the development and implementation of various conventional and unconventional approaches to mitigate malaria around large dams may be required in SSA. In particular, four unconventional approaches may be worth exploring (Lautze et al., unpublished): (1) dam placement: decision-making related to the placement of dams in a river basin; (2) dam design and reservoir sizing: the degree of operational flexibility and the nature and size of the reservoir; (3) reservoir operation and habitat modification: operating the dam in ways that allow measures to suppress larval development; and (4) other environmental controls, such as introducing larvivorous fish.
Furthermore, public health measures taken as part of dam planning (e.g., distribution of bed nets, blanket-treating all new migrants, construction of mosquito-proof houses, and improving local health facilities) should be coordinated with existing malaria programmes.

Climate change may exacerbate the impact of dams on malaria in SSA

Large dams have been promoted as a mechanism to adapt to the likely increase in hydrological variability arising from climate change in Africa [46]. However, changes in climate characteristics, in particular local temperature and rainfall, may also affect malaria transmission [47]. Extended dry seasons during El Niño years have been associated with malaria epidemics in the highlands of East Africa [48]. Furthermore, increased temperatures can increase the rate of blood feeding by female mosquitoes, which intensifies malaria transmission [49]. Recent studies also indicate that climate change will likely push the altitude limits of unstable malaria towards the highlands of East Africa [50, 51]. In contrast, in the lowlands where temperature is generally high, transmission decreases dramatically as temperature rises above 28 °C [52]. Future studies are needed to investigate the possible impact of climate change on malaria around both existing and planned dams in SSA, including in areas that are currently malaria free.

Limitations of this study

The major caveat of this study is that a number of environmental factors, such as climate, land use and other seasonal and ecological drivers, were not investigated. It was assumed that these potentially confounding factors affect both study groups equally: communities located within 5 km of a reservoir and those located more than 5 km away. The approaches adopted in this study were consistent with those used in related literature (e.g., Keiser et al. [18]).
In particular, in common with most similar studies, the impacts of the river were not decoupled from those of the reservoir. Given that mosquitoes typically breed in standing rather than flowing water, it was assumed that reservoirs were more important contributors to malaria than the rivers flowing into and out of them. Removing this limitation requires studies that evaluate the malaria situation before and after dam construction; to date, only one such study is available [37]. In addition, the assumption that reservoir shapes are rectangular is an oversimplification. In the present study, limited verification was achieved using data from the literature. In future research, the analyses should be repeated using actual reservoir shapes derived from high-resolution satellite data.

Avenues for future investigation

Dams may enhance transmission in the main periods of transmission, or change the seasonal pattern of malaria transmission. This study focused on establishing aggregate annual impacts of dams in areas of stable and unstable transmission. Future investigations could focus on the seasonality of such malaria impacts, particularly in unstable areas, to enhance implementation of disease control efforts. Future investigations should also determine the nature of the adverse impact, since dams may either intensify transmission or change it from seasonal to perennial. Such studies would help clarify the underlying factors that explain why certain dams have produced significant impacts whilst others have produced only negligible ones.

The time is ripe for action

Current investment in large dams in SSA is increasing in response to the need for urgent economic development. The results of the present study call for intensive measures to mitigate malaria in the vicinity of the reservoirs created by existing and planned large dams.
Whilst recognizing the importance of dams for economic development, it is unethical that people living close to them pay the price of that development through increased suffering and, in extreme cases, loss of life due to disease. Those building dams must invest effectively in measures to prevent malaria transmission.

References

ICOLD (International Commission on Large Dams). World register of dams. Paris: International Commission on Large Dams; 2003.
World Bank. The water resources sector strategy: an overview. Washington DC: The World Bank; 2004.
Biswas AK. Impact of large dams: issues, opportunities and constraints. In: Tortajada C, Altinbilek D, Biswas AK, editors. Water resources development and management: impact of large dams: a global assessment. Spain: Springer; 2012.
McCartney M, King J. Use of decision support systems to improve dam planning and dam operation in Africa. CPWF Research for Development Series 02, CGIAR Challenge Program for Water and Food (CPWF). Colombo; 2011.
Grey D, Sadoff W. Water for growth and development: thematic documents of the IV World Water Forum. Mexico City: Comision Nacional del Agua; 2006.
World Bank. Africa's infrastructure: a time for transformation. Washington DC: World Bank; 2010.
Rubin N, Warren WM. Dams in Africa: an inter-disciplinary study of man-made lakes in Africa. UK: Routledge; 2014.
WHO (World Health Organization). World malaria report 2014. Geneva: World Health Organization; 2014.
Atangana S, Foumbi J, Charlois M, Ambroise-Thomas P, Ripert C. Epidemiological study of onchocerciasis and malaria in Bamendjin dam area (Cameroon). Med Trop. 1979;39:537–43.
Oomen J. Monitoring health in African dams: the Kamburu dam as a test case. PhD thesis. Rotterdam University; 1981.
Lautze J, McCartney M, Kirshen P, Olana D, Jayasinghe G, Spielman A. Effect of a large dam on malaria risk: the Koka Reservoir in Ethiopia. Trop Med Int Health. 2007;12:982–9.
Kibret S, McCartney M, Lautze J, Jayasinghe G. Malaria transmission in the vicinity of impounded water: evidence from the Koka Reservoir, Ethiopia. IWMI Research Report 132. Colombo: International Water Management Institute; 2009.
Yewhalaw D, Legesse W, van Bortel W, Gebre-Selassie S, Kloos H, Duchateau L, et al. Malaria and water resource development: the case of Gilgel-Gibe hydroelectric dam in Ethiopia. Malar J. 2009;8:21.
Freeman T. Investigation into the 1994 malaria outbreak of the Manyuchi Dam area of Mberengwa and Mwenezi Districts, Zimbabwe. Zimbabwe: Dare-Salam; 1994.
Mba CJ, Aboh IK. Prevalence and management of malaria in Ghana: a case study of Volta region. Afr Pop Stud. 2007;22:137–71.
Ndiath MO, Sarr JB, Gaayeb L, Mazenot C, Sougoufara S, Konate L, et al. Low and seasonal malaria transmission in the middle Senegal River basin: identification and characteristics of Anopheles vectors. Parasit Vectors. 2012;5:21.
Baudon D, Robert V, Darriet F, Huerre M. Impact of building a dam on the transmission of malaria. Malaria survey conducted in southeast Mauritania. Bull Soc Pathol Exot. 1986;79:123–9.
Keiser J, Castro MC, Maltese MF, Bos R, Tanner M, Singer BH, et al. Effect of irrigation and large dams on the burden of malaria on a global and regional scale. Am J Trop Med Hyg. 2005;72:392–406.
Keiser J, Singer B, Utzinger J. Reducing the burden of malaria in different eco-epidemiological settings with environmental management: a systematic review. Lancet Infect Dis. 2005;5:695–708.
World Bank. World development indicators. Washington DC: The World Bank; 2012.
Gething PW, Patil AP, Smith DL, Guerra CA, Elyazar RF, Johnson GL. A new world malaria map: Plasmodium falciparum endemicity in 2010. Malar J. 2011;10:378.
Kibret S, Wilson GG, Ryder D, Tekie H, Petros B. The influence of dams on malaria transmission in sub-Saharan Africa. EcoHealth. 2015. doi:10.1007/s10393-015-1029-0.
Food and Agriculture Organization (FAO). African dams. http://www.fao.org/nr/water/aquastat/dams/index.stm (2010). Accessed 21 Jun 2014.
International Rivers. African dams briefing. http://www.internationalrivers.org/files/attached-files/afrdamsbriefingjune2010.pdf (2010). Accessed 21 Jun 2014.
International Commission on Large Dams (ICOLD). World register of dams. http://www.icold-cigb.org/GB/World_register/world_register.asp (2010). Accessed 21 Jun 2014.
Global Water System Project. Global Reservoir and Dam (GRanD) database. http://www.gwsp.org/products/grand-database.html (2012). Accessed 22 Jun 2014.
Lehner B, Liermann CR, Revenga C, Vörösmarty C, Fekete B, Crouzet P, et al. High-resolution mapping of the world's reservoirs and dams for sustainable river-flow management. Front Ecol Environ. 2011;9:494–502.
World Health Organization. Malaria data and statistics. http://www.aho.afro.who.int/en/data-statistics/data-statistics (2014). Accessed 15 Jun 2014.
Worldpop Project. World population data. http://www.worldpop.org.uk/data/ (2014). Accessed 15 Jun 2014.
Kauffman C, Briegel H. Flight performance of the malaria vectors Anopheles gambiae and Anopheles atroparvus. J Vector Ecol. 2004;29:140–53.
Woodward M. Epidemiology: study design and data analysis. 2nd ed. UK: Chapman and Hall; 2013.
Musingi JK, Ayiemba EHO. Effects of technological development on rural livelihoods in developing world: a case study of effects of a large scale multipurpose dam on malaria prevalence in a rural community around Kenya's largest dam. Eur J Sci. 2012;8:132–43.
Yewhalaw D, Getachew Y, Tushune K, Michael K, Kassahun W, Duchateau L, et al. The effect of dams and seasons on malaria incidence and Anopheles abundance in Ethiopia. BMC Infect Dis. 2013;13:161.
Kibret S, Lautze J, Boelee E, McCartney M. How does an Ethiopian dam increase malaria? Entomological determinants around the Koka reservoir. Trop Med Int Health. 2012;17:1320–8.
Dejene T, Yohannes M, Assmelash T. Characterization of mosquito breeding sites in and in the vicinity of Tigray microdams. Ethiop J Sci. 2011;21:57–65.
Dejene T, Yohannes M, Assmelash T. Adult mosquito population and their health impact around and far from dams in Tigray region, Ethiopia. Ethiop J Health Sci. 2012;4:40–51. Sow S, De Vlas SJ, Engels D, Gryseels B. Water-related disease patterns before and after the construction of the Diama dam in northern Senegal. Ann Trop Med Parasitol. 2002;96:575–86. Jobin W. Dams and disease: ecological design and health impacts of large dams, canals and irrigation systems. London: E&FN Spon; 1999. Christiansen-Jucht C, Parham PE, Saddler A, Koella JC, Basáñez MG. Temperature during larval development and adult maintenance influences the survival of Anopheles gambiae s.s. Parasit Vectors. 2014;7:489. Ijumba JN, Lindsay SW. Impact of irrigation on malaria in Africa: paddies paradox. Med Vet Entomol. 2001;15:1–11. Guerra CA, Snow RW, Hay SI. A global assessment of closed forests, deforestation and malaria risk. Ann Trop Med Parasitol. 2006;100:189–204. Ripert CL, Raccurt CP. The impact of small dams on parasitic diseases in Cameroon. Parasitol Today. 1987;3:287–9. Keiser J, Utzinger J, De Castro MC, Smith TA, Tanner M, Singer BH. Urbanization in sub-Saharan Africa and implication for malaria control. Am J Trop Med Hyg. 2004;71:118–27. O'Meara WP, Mangeni JN, Steketee R, Greenwood B. Changes in the burden of malaria in sub-Saharan Africa. Lancet Infect Dis. 2010;10:545–55. Kitchens C. A dam problem: TVA's fight against malaria, 1926–1951. J Econ Hist. 2013;73:694–724. Leary NA. A framework for benefit-cost analysis of adaptation to climate change and climate variability. Mitig Adapt Strat Glob Chang. 1999;4:307–18. Tanser FC, Sharp B, Le Sueur D. Potential effect of climate change on malaria transmission in Africa. Lancet Inf Dis. 2003;362:1792–8. Patz JA, Olson SH. Malaria risk and temperature: influences from global climate change and local land use practices. PNAS. 2006;103:5635–6. Stern DI, Gething PW, Kabaria CW, Temperley WH, Noor AM, Okiro EA, et al. 
Temperature and malaria trends in highland East Africa. PLoS One. 2011;6:e24524. Kristan M, Abeku TA, Beard J, Okia M, Rapuoda B, Sang J, Cox J. Variations in entomological indices in relation to weather patterns and malaria incidence in East African highlands: implications for epidemic prevention and control. Malar J. 2008;7:231. Caminadea C, Kovatsc S, Rocklovd J, Tompkinse AM, Morseb AP, Colón-Gonzáleze FJ, et al. Impact of climate change on global malaria distribution. Proc Natl Acad Sci USA. 2014;111:3286–91. Mordecai EA, Paaijmans KP, Johnson LR, Balzer C, Ben-Horin T, Moor E, et al. Optimal temperature for malaria transmission is dramatically lower than previously predicted. Ecol Lett. 2013;16:22–30. SK, JL and MM made substantial contributions to conception, design, acquisition, analysis and interpretation of data. SK drafted the manuscript while JL, MM and GGW revised the draft critically for important intellectual content. LN contributed to data analysis. All authors read and agreed on the final manuscript. This work was financially supported by the CGIAR Water Land Ecosystems (WLE) Program. The authors wish to thank Eline Boelee for providing preliminary review, and the anonymous reviewers for their generous comments. Compliance with ethical guidelines Competing interests The authors declare that they have no competing interests. Ecosystem Management, School of Environmental and Rural Science, University of New England, Armidale, NSW, 2351, Australia Solomon Kibret & G. Glenn Wilson International Water Management Institute, Pretoria, South Africa Jonathan Lautze & Luxon Nhamo International Water Management Institute, Vientiane, Laos Matthew McCartney Solomon Kibret Jonathan Lautze G. Glenn Wilson Luxon Nhamo Correspondence to Solomon Kibret. Kibret, S., Lautze, J., McCartney, M. et al. Malaria impact of large dams in sub-Saharan Africa: maps, estimates and predictions. Malar J 14, 339 (2015). https://doi.org/10.1186/s12936-015-0873-2 Reservoir-shoreline
CommonCrawl
Distribution, cytotoxicity, and antioxidant activity of fungal endophytes isolated from Tsuga chinensis (Franch.) Pritz. in Ha Giang province, Vietnam
Thi Hanh Nguyen Vu, Ngoc Son Pham, Phuong Chi Le, Quynh Anh Pham, Ngoc Tung Quach, Van The Nguyen, Thi Thao Do, Hoang Ha Chu & Quyet Tien Phi (ORCID: orcid.org/0000-0002-7182-3384)
Annals of Microbiology volume 72, Article number: 36 (2022)
The endangered Tsuga chinensis (Franch.) Pritz. is widely used as a natural medicinal herb in many countries, but little has been reported on its culturable endophytic fungi capable of producing secondary metabolites applicable in modern medicine and pharmacy. The present study aimed to evaluate the distribution of fungal endophytes and their cytotoxic and antioxidant properties. This study used the surface sterilization method to isolate endophytic fungi, which were then identified using morphological characteristics and ITS sequence analysis. The antimicrobial and cytotoxic potentials of fungal ethyl acetate extracts were evaluated by the minimum inhibitory concentration (MIC) and sulforhodamine B (SRB) assays, respectively. Paclitaxel-producing fungi were primarily screened using PCR-based molecular markers. Additionally, biochemical assays were used to reveal the antioxidant potencies of selected strains. A total of sixteen endophytic fungi that belonged to 7 known and 1 unknown genera were isolated from T. chinensis. The greatest number of endophytes was found in leaves (50%), followed by stems (31.3%) and roots (18.7%). Out of 16 fungal strains, 33.3% of fungal extracts showed significant antimicrobial activities against at least 4 pathogens, with inhibition zones ranging from 11.0 ± 0.4 to 25.8 ± 0.6 mm. The most prominent cytotoxicity against A549 and MCF7 cell lines (IC50 value < 92.4 μg/mL) was observed in Penicillium sp. SDF4, Penicillium sp. SDF5, Aspergillus sp. SDF8, and Aspergillus sp. SDF17.
Out of three key genes (dbat, bapt, ts) involved in paclitaxel biosynthesis, strains SDF4, SDF8, and SDF17 gave one or two positive hits, holding the potential for producing the billion-dollar anticancer drug paclitaxel. Furthermore, the four bioactive strains also displayed remarkable and wide-ranging antioxidant activity against DPPH, hydroxyl radical, and superoxide anion, which correlated with the high content of flavonoids and polyphenols detected. The present study exploited for the first time fungal endophytes from T. chinensis as a promising source for the discovery of new bioactive compounds or leads for new drug candidates. Cancer remains a growing concern for humanity, even amid rapid advances in medicine, science, and technology. As estimated by the Global Cancer Observatory, there will be around 29.5 million new cancer cases, and the implementation of cancer prevention strategies could save nearly 15 million lives (Eniu et al. 2019). Oxidative stress is an imbalance between reactive oxygen species (ROS) and antioxidants in the body, which can lead to cell and tissue damage that promotes cancer development (Griffiths et al. 2016; Liu et al. 2018). Indeed, ROS, including both free radicals such as superoxide anion and the hydroxyl radical and non-radical species such as hydrogen peroxide and singlet oxygen, are highly reactive molecules that can be neutralized by antioxidants (Griffiths et al. 2016). Over the decades, several synthetic phenolic antioxidants, including butylated hydroxyanisole, butylated hydroxytoluene, and propyl gallate, used to treat ROS-related diseases have been reported to cause severe side effects (Saad et al. 2007). Therefore, it is necessary to search for alternative anticancer and antioxidant agents from natural sources such as microorganisms, which may overcome the complications of chemotherapy as well as prevent the risk of cancer.
To date, there is increasing interest in endophytic fungi as a potential reservoir of bioactive compounds applicable in modern medicine and pharmacy. Since plants are colonized by diverse sets of microorganisms during co-evolution, endophytic fungal communities vary widely with host, plant distribution, ecology, and physiology (Rosa et al. 2010; Zhao et al. 2013). Of note, various reports have shown that endophytic fungi isolated from medicinal plants produce not only new biomolecules but also the same active secondary metabolites as their hosts (El-hawary et al. 2020). Endophytic fungi exploited from many medicinal and herb plants, such as Passiflora incarnata (L.) Medik, Catharanthus roseus (L.) G. Don, Euphorbia hirta L., and Taxus chinensis (Pilg.) Rehder, have been considered producers of active plant-derived compounds belonging to different structural groups, including phenols, flavonoids, alkaloids, steroids, and terpenoids (Zhou et al. 2007; Dhayanithy et al. 2019; da Silva et al. 2020; Gautam et al. 2022). Notably, the capability to produce host-specific metabolites such as vincristine, vinblastine, camptothecin, podophyllotoxin, and the billion-dollar anti-cancer drug paclitaxel has been reported in fungal endophytes (Tang et al. 2020; Hridoy et al. 2022). It is believed that horizontal gene transfer during physical contact between host and fungus likely contributes to this phenomenon (Richards 2011). Indeed, PCR-based molecular markers specific for the three genes 10-deacetylbaccatin III-10-O-acetyl transferase (dbat), C-13 phenylpropanoid side chain-CoA acyltransferase (bapt), and taxadiene synthase (ts) have effectively been used to screen for paclitaxel-producing fungi (Zhou et al. 2007; Kumar et al. 2019), which supports the idea that horizontal gene transfer takes place as a result of host-fungus interactions.
Endophytic fungi belonging to the genera Penicillium and Aspergillus are viewed as producers of plant secondary metabolites such as flavonoids and polyphenols that have a number of medical benefits. The ethyl acetate extract of Penicillium oxalicum YMG1 from Ligusticum chuanxiong Hort. contained high-value polyphenols, such as hesperidin, citric acid, ferulic acid, and alternariol, which conferred strong antioxidant activity against DPPH, superoxide radical, and hydroxyl radical, as well as antibacterial effects against Staphylococcus aureus ATCC 6538 and Escherichia coli ATCC 25922 (Tang et al. 2021). The endophytic fungus Penicillium janczewskii K.M. Zalessky was able to produce a polyphenol compound, p-hydroxybenzaldehyde, that showed antibacterial activity (Schmeda-Hirschmann et al. 2005). Besides, high contents of flavonoid and polyphenol compounds in Aspergillus flavus L7 led to the identification of rutin, gallic acid, and phlorizin as natural antioxidant agents (Patil et al. 2015). Another study identified the novel compounds seco-cytochalasins A–F and asperlactones G–H from an Aspergillus sp. isolated from Pinellia ternata (Thunb.) Makino, which displayed highly effective cytotoxicity against human lung cancer A549 and doxorubicin-resistant human breast cancer MCF7 cell lines (Xin et al. 2019). Tsuga chinensis is a coniferous tree species on the International Union for Conservation of Nature (IUCN) Red List, distributed only in mountains 1300–1700 m above sea level in Ha Giang province, northern Vietnam, and in East Asian countries (Aiello 2016). It is worth noting that T. chinensis is highly resistant to the aphid-like insect Adelges tsugae Annand, which is causing widespread death of eastern Tsuga species (Del Tredici and Kitajima 2004). Traditionally, the bark is used to treat kidney and bladder problems, diarrhea, and sores and ulcers of the mouth and throat. However, natural products with biological activities from the plant and its endophytic fungi have not been explored yet.
The present study was designed to shed light on culturable endophytic fungi from T. chinensis and to characterize the biological activities of the fungal species. In doing so, we provide a new resource yielding high levels of flavonoids and polyphenols and report candidate paclitaxel-producing endophytes. The outcomes of this study open a promising avenue for further in-depth investigations at the molecular and mechanistic levels. Sampling and endophytic fungi isolation The native conifer T. chinensis was harvested from Dong Van (23° 15′ 30″ N 105° 17′ 24″ E), Ha Giang Province, northern Vietnam, in March 2020 (Fig. S1); no specific permission was required for the location. Plant specimens were collected through guided field walks with the aid of expert plant gatherers and local ethnic minority peoples. We selected three conifers spaced around 3–5 m apart. They were roughly estimated to be 35- to 40-year-old trees whose diameter and height were around 40 cm and 13 m, respectively. The leaf, stem, and root samples selected from the 3 trees were healthy, fresh, and free from any injury, and were subsequently transported to the laboratory of the Institute of Biotechnology, Vietnam Academy of Science and Technology. All plant specimens were sent to the Institute of Ecology and Biological Resources, Vietnam Academy of Science and Technology, for identification and preservation. The isolation of endophytic fungi from T. chinensis was performed according to a standard procedure with slight modifications (Kumar et al. 2019). In brief, the leaves, stems, and roots were cut into small segments (~ 2 × 3 cm) and washed with fresh water, followed by surface sterilization via successive immersion in 70% ethanol for 30 s, 3.5% NaClO for 2 min, and 70% ethanol for 2–5 s, and a final rinse with sterile water.
The surface-disinfected samples were cut into small slices (~ 0.5 × 0.5 cm) and placed in 9-cm diameter Petri dishes (6 pieces/plate) containing Potato Dextrose Agar (PDA) supplemented with 100 mg/L streptomycin as an antibacterial agent. The plates were incubated at 28°C for 7–10 days, observed daily to check the growth of fungal colonies. For each fungal colony, single hypha was carefully sub-cultured onto fresh PDA plates to obtain pure isolates. Spores and mycelia of each fungal strain were preserved in 15% (v/v) glycerol at −80 °C. The fungal isolates producing spores were frozen at −80 °C, lyophilized under vacuum, and stored at 4°C. Morphological, molecular identification, and isolation frequency of the endophytic fungi Fungal isolates were cultivated on the PDA medium to observe the growth rate of hyphae, the morphology of colonies, and pigment production as described previously (Samson et al. 2011; Ngo et al. 2021). The structure of hyphae, conidia, conidiophores, and their arrangement were microscopically observed under a light microscope at 40X (Olympus, Japan). Genomic DNA of each fungal strain was extracted using a DNeasy Plant Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's protocol. The fungal internal transcribed spacer (ITS) DNA, located between a small subunit of rRNA and a large subunit of the rRNA gene, was amplified using ITS1 (5′–TCCGTAGGTGAACCTGCGG–3′) and ITS4 (5′–TCCTCCGCTTATTGATATGC–3′) (Ngo et al. 2021). The reaction was carried out in a 25-μL final volume consisting of 0.1 μg of genomic DNA, 0.4 μM of forward and reverse primers, 0.2 mM dNTPs, 1×Taq polymerase buffer, and 1 U of Taq DNA polymerase. For negative control, distilled water was used to verify the absence of contamination instead of genomic DNA. 
The amplification conditions were as follows: an initial cycle at 95 °C for 10 min, followed by 35 cycles of denaturation (95°C for 30 s), annealing (65°C for 1 min), extension (72°C for 90 s), and the process ended with a final extension at 72°C for 10 min. The PCR product was visualized on 1% (w/v) agarose gel by electrophoresis followed by purification and Sanger sequencing by First BASE Laboratories Sdn. Bhd. (Malaysia). The resulting ITS sequences were manually trimmed and edited to obtain complete sequences, which were subsequently compared with available data from GenBank databases (NCBI) using the BLASTn program. After alignment using ClustalW software v.1.81, the phylogenetic tree was computed by using the neighbor-joining method with 1000 bootstrap in Molecular Evolutionary Genetics Analysis v.7.0 (MEGA7) (Kumar et al. 2016). The ITS sequences of strains were deposited onto the GenBank (NCBI) under accession numbers as shown in Table 1. On the other hand, the isolation frequency of each fungal strain recovered from T. chinensis was calculated as the total number of segments yielding a given fungus divided by a total number of segments observed, expressed as a percentage (Dhayanithy et al. 2019). Table 1 Molecular identification and the colonization frequency of 16 endophytic fungi isolated from different tissues of T. chinensis Preparation of fungal extracts from fermentation broth Each fungal strain was cultivated in 3 liters of potato dextrose broth (PDB) at 28°C, shaking at 150 rpm for 14 days. The culture was filtered by vacuum filtration, extracted with a 3X volume of ethyl acetate in a separatory funnel, and then evaporated to dryness using a vacuum rotary evaporator at 60°C (Li et al. 2014). The crude extract was weighed, dissolved in 10% (v/v) dimethyl sulfoxide (DMSO), and used for antimicrobial and cytotoxic experiments. To evaluate antioxidant activities, each crude extract was prepared by dissolving in 70% ethanol. 
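The isolation-frequency definition above (segments yielding a given fungus divided by total segments observed, as a percentage) is a simple ratio; a minimal sketch, where the segment counts are hypothetical and not taken from the study:

```python
# Isolation frequency (IF) as defined in the text: segments yielding a given
# fungus / total segments observed, expressed as a percentage.
# The segment counts used below are hypothetical, for illustration only.

def isolation_frequency(segments_with_fungus: int, total_segments: int) -> float:
    """Return the isolation frequency as a percentage."""
    if total_segments <= 0:
        raise ValueError("total_segments must be positive")
    return 100.0 * segments_with_fungus / total_segments

# Example: a genus recovered from 11 of 32 plated segments
print(round(isolation_frequency(11, 32), 1))  # 34.4
```

Summing the per-genus counts before dividing gives the group-level frequencies reported in Table 1.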
Antimicrobial assay The crude extract from fungal cultures was primarily screened for antimicrobial activity using the agar well-diffusion method with slight modification (Gonelimali et al. 2018). A panel of pathogenic microorganisms, including two Gram-negative bacteria (Escherichia coli ATCC 11105, Pseudomonas aeruginosa ATCC 9027), four Gram-positive bacteria (Bacillus cereus ATCC 11778, Staphylococcus aureus ATCC 6538, methicillin-resistant Staphylococcus aureus (MRSA) ATCC 33591, Enterococcus faecalis ATCC 29212), and the yeast Candida albicans ATCC 10231, was used as test organisms. Freshly prepared fungal and bacterial suspensions of 100 μL were inoculated onto Sabouraud Dextrose and LB agar plates, respectively. After that, wells 6 mm in diameter were made using a sterile cork borer, and 20 μL of each extract (1 mg/mL) was added to the respective wells. About 20 μL of erythromycin (1 mg/mL) and 20 μL of nystatin (1 mg/mL) were used as positive controls, while negative control wells received 20 μL of 10% (v/v) DMSO. The plates were then incubated at 30°C or 37°C (according to the optimal temperature for each test microorganism) for up to 48 h. The diameters of microbial growth inhibition zones were measured after 24 h of incubation. The antimicrobial activity was determined by measuring the zone of inhibition (excluding the well diameter). Cytotoxic potential of fungal endophytes The cytotoxicity of crude extracts was assayed against the human lung cancer A549 and human breast adenocarcinoma MCF7 cell lines using the sulforhodamine B (SRB) assay (Skehan et al. 1990). The A549 and MCF7 cell lines were separately seeded in 96-well plates at a density of around 10⁴ cells/well and incubated at 37°C, 95% humidity, and 5% CO2. After 24 h of incubation, the cells were treated with different concentrations of crude extract and left for 24 h.
Then, the cells were fixed with cold 10% (w/v) trichloroacetic acid for 1 h at 4°C, stained with 0.4% (w/v) SRB solution for 30 min at room temperature, and then washed twice with 1% acetic acid. The results were recorded at the optical density of 540 nm in a microplate reader (BioTek EXL800). Ellipticine (10 μg/mL) was used as a positive control, while 10% (v/v) DMSO was considered a negative control. The IC50 value is the concentration of the tested sample that inhibits the survival of 50% of cancer cells in comparison with the control sample grown under the same conditions. PCR-based molecular screening for paclitaxel-producing fungi Conserved genes encoding 10-deacetylbaccatin III-10-O-acetyl transferase (DBAT), C-13 phenylpropanoid side chain-CoA acyltransferase (BAPT), and taxadiene synthase (TS) were used as molecular markers to screen paclitaxel-producing fungi as described previously (Kumar et al. 2019). The specific primers ts-F (5′–ATCAGTCCGTCTGCATACGACA–3′), ts-R (5′–TAAGCCTGGCTTCCCGTGTTGT–3′), dbat-F (5′–ATGGCTGACACTGACCTCTCAGT–3′), dbat-R (5′–GGCCTGCTCCTAGTCCATCACAT–3′), bapt-F (5′–CCTCTCTCCGCCATTGACAACAT–3′), and bapt-R (5′–GTCGCTGTCAGCCATGGCTT–3′) were synthesized according to previous studies (Zhou et al. 2007; Kumar et al. 2019). After PCR amplification, the products were analyzed on 2% (w/v) agarose gel and visualized with the Gel Doc EZ Imager (Bio-Rad Laboratories Inc.). Antioxidant assays The capability of fungal extracts to scavenge 1,1-diphenyl-2-picrylhydrazyl (DPPH) was measured as described previously with a slight modification (Kadaikunnan et al. 2015). About 100 μL of extract solution (0; 100; 200; 400 μg/mL) was added to 100 μL of 0.1 mM DPPH solution in ethanol. The mixture was kept for 30 min in darkness at room temperature and the absorbance was measured at 517 nm against an equal amount of DPPH.
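The IC50 value defined above is typically read off the dose-response series by interpolating where viability crosses 50% on the log-concentration axis; a minimal sketch, using made-up viability data rather than the study's measurements:

```python
import math

def ic50_log_interp(concs, viability_pct):
    """Estimate IC50 (same units as concs) by linear interpolation of
    % viability against log10(concentration). Assumes viability decreases
    with dose and crosses 50% inside the tested range."""
    pairs = list(zip(concs, viability_pct))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 50 >= v2:
            l1, l2 = math.log10(c1), math.log10(c2)
            frac = (v1 - 50) / (v1 - v2)  # fraction of the way to the 50% crossing
            return 10 ** (l1 + frac * (l2 - l1))
    raise ValueError("50% viability not bracketed by the tested concentrations")

# Hypothetical dilution series (μg/mL) and % surviving cells
concs = [6.25, 12.5, 25, 50, 100]
viab = [88, 71, 47, 23, 9]
print(round(ic50_log_interp(concs, viab), 1))
```

A sigmoidal (four-parameter logistic) fit is the more rigorous choice when enough dose points are available; the interpolation above is only the back-of-the-envelope version.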
The percentage of DPPH scavenging activity was calculated using the following formula: $$\mathrm{Scavenging}\ \mathrm{activity}\ \left(\%\right)=\left[1-\frac{\ \left({\mathrm{A}}_{\mathrm{sample}}-{\mathrm{A}}_{\mathrm{blank}}\right)}{{\mathrm{A}}_{\mathrm{control}}}\right]\times 100$$ where $A_{\mathrm{sample}}$ is the absorbance of the reaction mixture; $A_{\mathrm{blank}}$ is the absorbance of 70% ethanol and fungal extract solution; and $A_{\mathrm{control}}$ is the absorbance of 70% ethanol and DPPH solution. The free hydroxyl radical scavenging capacity was evaluated according to the method described previously with a slight modification (Kadaikunnan et al. 2015). Briefly, the hydroxyl radical reaction consisted of 0.5 mL of 0.435 mM brilliant green, 1.0 mL of 0.5 mM FeSO4, and 0.75 mL of 3% (v/v) H2O2. The reaction was initiated by adding 1.0 mL of crude extract solution with concentrations in the range of 0–400 μg/mL, followed by incubation at 37°C for 30 min. The absorbance was measured at 624 nm and the inhibition percentage of hydroxyl radical scavenging activity was expressed as: $$\mathrm{Scavenging}\ \mathrm{activity}\ \left(\%\right)=\frac{\ \left({\mathrm{A}}_{\mathrm{s}}-{\mathrm{A}}_0\right)}{\left(\mathrm{A}-{\mathrm{A}}_0\right)}\times 100$$ where $A_{\mathrm{s}}$ is the absorbance of the reaction mixture; $A_0$ is the absorbance of the control (70% ethanol) in the absence of the sample; and $A$ is the absorbance without the sample and Fenton reaction system. The superoxide radical scavenging assay was carried out according to a previously published protocol with a slight modification (Li 2012). The reaction contained 900 μL of 0.05 M Tris–HCl (pH 8.2), 200 μL of crude extract (0; 100; 200; 400 μg/mL), and 80 μL of 2.5 mM pyrogallol. Following the incubation at room temperature for 5 min, the absorbance at 299 nm was measured.
The superoxide radical scavenging activity was calculated as: $$\mathrm{Scavenging}\ \mathrm{activity}\ \left(\%\right)=\frac{\ \left(\ {\mathrm{A}}_{\mathrm{control}}-{\mathrm{A}}_{\mathrm{sample}}\right)}{\left({\mathrm{A}}_{\mathrm{control}}\right)}\times 100$$ where $A_{\mathrm{sample}}$ is the absorbance of the reaction mixture, and $A_{\mathrm{control}}$ is the absorbance of the control (70% ethanol) in the reaction without the sample. To evaluate the antioxidant activity through reducing power, the reaction mixture containing 300 μL of extracts (0; 100; 200; 400; 600 μg/mL), 300 μL of 0.2 M sodium phosphate buffer pH 7.3, and 1.5 mL of 1% (w/v) K3Fe(CN)6 was incubated at 50 °C for 25 min in the dark. After that, 300 μL of 12% (w/v) trichloroacetic acid was added to the reaction mixture and then centrifuged at 10,000 rpm for 5 min. About 1 mL of supernatant was mixed with 0.5 mL of 0.2% (w/v) FeCl3. The absorbance of the mixture was measured at 700 nm. The reducing power was calculated as follows: reducing ability $= A_{\mathrm{sample}} - A$, where $A_{\mathrm{sample}}$ is the absorbance of the mixture containing fungal extract and $A$ is the absorbance of the mixture with deionized water instead of 0.2% (w/v) FeCl3 (Rajoka et al. 2019). In all experiments, ascorbic acid was used as a positive control. Total polyphenol and flavonoid contents Total phenolic content was evaluated using the Folin-Ciocalteu colorimetric method described previously with a slight modification (da Silva et al. 2020). About 20 μL of crude extract dissolved in 70% ethanol was mixed with 100 μL of Folin-Ciocalteu reagent and then kept for 5 min at room temperature. The reaction was stopped by the addition of 80 μL of 4% (w/v) sodium carbonate. Absorbance at 765 nm was measured by using a microplate reader, and the results were expressed as gallic acid equivalents (GAE) in microgram per gram of dry extract (μg GAE/g FW).
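The three radical-scavenging formulas above share a common absorbance-ratio pattern and translate directly to code; a minimal sketch with hypothetical absorbance readings, for illustration only:

```python
# The DPPH, hydroxyl, and superoxide scavenging formulas from the text.
# All absorbance values below are hypothetical example readings.

def dpph_scavenging(a_sample, a_blank, a_control):
    """[1 - (A_sample - A_blank) / A_control] * 100"""
    return (1 - (a_sample - a_blank) / a_control) * 100

def hydroxyl_scavenging(a_s, a_0, a):
    """(A_s - A_0) / (A - A_0) * 100"""
    return (a_s - a_0) / (a - a_0) * 100

def superoxide_scavenging(a_sample, a_control):
    """(A_control - A_sample) / A_control * 100"""
    return (a_control - a_sample) / a_control * 100

print(round(dpph_scavenging(0.35, 0.05, 0.80), 1))   # 62.5
print(round(superoxide_scavenging(0.24, 0.60), 1))   # 60.0
```

In practice each value would be the mean of triplicate wells, with the blank and control read on the same plate.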
The total polyphenolic content of fungal extracts was established by the gallic acid standard curve, equation: Y = 0.0043x – 0.0377 (R² = 0.9983). All experiments were performed in triplicate. The total flavonoid content of ethyl acetate extracts was determined spectrophotometrically based on NaNO2-Al(NO3)3 colorimetry (Tang et al. 2020). The total flavonoid assay was conducted by mixing 30 μL of 70% ethanol extract, 10 μL of 5% (w/v) NaNO2, 10 μL of 10% (w/v) AlCl3, 60 μL of 1 M NaOH, and 120 μL of distilled water. The mixture was incubated for 30 min at room temperature, followed by the measurement at an optical density of 510 nm against a reagent blank. The flavonoid content was calculated using a standard curve of quercetin, and the result was expressed as quercetin equivalents in microgram per gram of dry extract (μg QE/g FW). The total flavonoid content of fungal extracts was calculated using the quercetin standard curve, equation Y = 0.0016x – 0.032 (R² = 0.9974). Distribution and identification of culturable endophytic fungi from T. chinensis A total of 16 fungal isolates with distinct morphology were successfully derived from different segments of T. chinensis. The greatest number of endophytic fungi were recovered from leaves (50%, 8 isolates) followed by stems (31.3%, 5 isolates) and roots (18.7%, 3 isolates) (Fig. S2A). All selected isolates were cultivated on the PDA medium and then identified by their colony and hyphal morphology as well as spore structures (Fig. S3). As a result, the 16 fungal isolates were assigned to 8 morphotypes that belonged to the phylum Ascomycota. Furthermore, ITS-based rDNA sequence analysis revealed that most sequences of these isolates were no less than 99% similar to the closest matches, except for SDF2 (91.3%), SDF3 (90.1%), SDF10 (91.2%), and SDF15 (93.0%) (Table 1).
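The two linear standard curves reported above can be inverted to convert an absorbance reading into a GAE or QE concentration; a minimal sketch using the reported curve parameters and a hypothetical absorbance:

```python
def conc_from_absorbance(absorbance, slope, intercept):
    """Invert a linear standard curve Y = slope*x + intercept,
    returning the concentration x that produced absorbance Y."""
    return (absorbance - intercept) / slope

# Curve parameters reported in the text (absorbance vs. μg of standard)
GALLIC = (0.0043, -0.0377)    # total polyphenols, μg GAE
QUERCETIN = (0.0016, -0.032)  # total flavonoids, μg QE

# Hypothetical A765 reading of 0.392 from a Folin-Ciocalteu assay:
print(round(conc_from_absorbance(0.392, *GALLIC), 1))  # 99.9
```

The same inversion with the quercetin parameters converts A510 readings from the flavonoid assay; dividing by the dry-extract mass then gives the μg/g values reported in Fig. 3.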
The phylogenetic analysis showed that 14 fungal isolates were classified into 7 fungal genera, namely Aspergillus, Daldinia, Fusarium, Neocosmospora, Neofusicoccum, Xylaria, and Penicillium, among which Aspergillus (31.2%) and Fusarium (18.7%) were the most common genera (Fig. S2B). The isolates SDF2 and SDF10 clustered together with Hypoxylon with a bootstrap support of 76%, suggesting that they may belong to a different genus (Fig. 1). Combined with their morphological characteristics, the isolates SDF2 and SDF10 could not be identified at the genus level. Phylogenetic tree based on the ITS gene sequences of the 16 endophytic fungi isolated from T. chinensis, showing the relationship between the fungal endophytes and the nearest type strains. Numbers at the nodes indicate bootstrap values based on 1000 replicates, and Cunninghamella elegans CBS 160.28T was used as an outgroup Isolation frequency analysis revealed that the culturable fungi in T. chinensis comprised 3 frequent genera and 6 infrequent groups. Aspergillus, Fusarium, and Penicillium were the most prevalent genera, with isolation frequencies ranging from 12.5 to 34.4% (Table 1). Among the infrequent groups, Neofusicoccum was the rarest genus, accounting for 3.1%, while the unidentified strains SDF2 and SDF10 showed isolation frequencies of 6.25% and 3.13%, respectively. Antimicrobial activity by endophytic isolates Among the 16 strains, only 5 (33.3%) endophytic fungal ethyl acetate extracts had significant inhibitory activity against a wide range of tested pathogens. These included Aspergillus sp. SDF1, Penicillium sp. SDF4, Penicillium sp. SDF5, Aspergillus sp. SDF8, and Aspergillus sp. SDF17, showing antimicrobial activity with inhibition zones ranging from 11.0 ± 0.4 to 25.8 ± 0.6 mm (Table 2). The growth of P. aeruginosa ATCC 9027, S. aureus ATCC 6538, and MRSA ATCC 33591 was severely inhibited by all 5 fungal extracts.
The highest antibacterial activity was recorded for Aspergillus sp. SDF1 (25.3 ± 0.4 mm) and Penicillium sp. SDF4 (25.8 ± 0.6 mm) against E. faecalis ATCC 29212. Interestingly, only the Aspergillus sp. SDF1 and Penicillium sp. SDF4 extracts were active against C. albicans ATCC 10231, with inhibition zones of 17.3 ± 0.4 mm and 14.2 ± 1.1 mm, respectively. Despite belonging to the same genus and being cultivated under the same conditions, Penicillium sp. SDF4 and Penicillium sp. SDF5 showed significantly different patterns of antimicrobial activity. Table 2 Antimicrobial activity, cytotoxic effect, and the presence of paclitaxel biosynthetic genes determined in sixteen endophytic fungi isolated from T. chinensis Anticancer activity from endophytic fungal extracts As for anticancer activity, 4 out of 16 fungal strains, namely SDF4, SDF5, SDF8, and SDF17, showed cytotoxic activity against A549 and MCF7 cells with IC50 values ranging from 14.2 to 92.4 μg/mL (Table 2). The strongest cytotoxicity against A549 and MCF7 cells was found for SDF5, with IC50 values of 14.2 ± 1.5 μg/mL and 25.2 ± 1.7 μg/mL, respectively, corresponding to roughly 3-fold stronger activity than SDF4. On the other hand, treatment of A549 and MCF7 cells with the SDF8 and SDF17 extracts led to moderate inhibition, with IC50 values ranging from 36.8 ± 2.8 to 50.6 ± 1.8 μg/mL. Despite its high antimicrobial activity, SDF1 showed no cytotoxic activity. Molecular screening for paclitaxel-producing fungi Given that paclitaxel is produced not only by endophytic fungi from Taxus species but also by fungi from other plants (Gangadevi and Muthumary 2009; Kumaran et al. 2009), it is interesting to evaluate the potential for paclitaxel production of T. chinensis-associated fungi using PCR amplification, which has not been explored yet. Molecular detection of the genes encoding DBAT, BAPT, and TS revealed that 4 out of 16 fungi had at least one amplified gene (Table 2).
Of note, the presence of the dbat and ts genes was observed in SDF8, while SDF17 had dbat and bapt. Moreover, strains SDF3 and SDF4 only had positive hits for dbat and bapt, respectively. These results suggest that these strains hold a high potential for paclitaxel production, which is an interesting subject for further investigation. Based on the criteria of antimicrobial activity, cytotoxicity, and positive hits for paclitaxel biosynthetic genes, 4 strains (SDF4, SDF5, SDF8, and SDF17) were selected for further studies. All fungal extracts displayed remarkable antioxidant activities against DPPH, hydroxyl radical, and superoxide radical, but not in terms of reducing power (Fig. 2). Of note, the SDF8 and SDF17 extracts showed the highest antioxidant activity against the DPPH radical at a concentration of 600 μg/mL, comparable to the ascorbic acid positive control (96.5 ± 0.4%). The highest hydroxyl and superoxide radical scavenging activities were observed for the SDF8 (63.9 ± 0.7%) and SDF4 (69.2 ± 0.7%) extracts, respectively (Fig. 2). In contrast, the reducing power of all extracts did not increase with increasing concentration. Scavenging activity of ethyl acetate crude extracts from selected fungal strains on DPPH radical (A), hydroxyl radical (B), superoxide radical (C), and reducing power (D). Ascorbic acid is used as a positive control and error bars indicate the standard deviation (SD) Determination of the total polyphenol and flavonoid content The total polyphenolic content of all extracts ranged from 84.3 ± 0.3 to 99.1 ± 3.2 μg GAE/g FW, with strain SDF8 showing the highest polyphenol content (Fig. 3). The highest flavonoid content (111.3 ± 0.6 μg QE/g FW) was recorded for the extract of strain SDF17, followed by SDF5 (99.8 ± 0.8 μg QE/g FW). In contrast, strain SDF4 produced the lowest flavonoid content (66.9 ± 1.7 μg QE/g FW).
It appears that all extracts contained relatively high levels of polyphenols and flavonoids, consistent with the high antioxidant activities observed.

Fig. 3 Total polyphenol (A) and flavonoid (B) contents determined in the extracts of bioactive fungi

Endophytic fungi associated with medicinal plants are a promising source of novel bioactive compounds with biotechnological and medical potential. In the present study, 16 endophytic fungi were isolated from T. chinensis, of which 50% were from leaves, 31.3% from stems, and 18.7% from roots. This is broadly in line with a previous study in which only 15 endophytic fungi were isolated from Ephedra pachyclada Boiss (Khalil et al. 2021). Similarly, only 15 fungal endophytes were recovered from the leaves and stems of five Sudanese medicinal plants (Khiralla et al. 2016), whereas a larger number of fungi was obtained from the leaves (110 isolates) and stems (3 isolates) of Zanthoxylum simulans Hance (Kuo et al. 2021). In line with this, previous studies showed that fungal endophytes were predominantly distributed in the leaves of Centella asiatica (L.) Urban and Guarea guidonia (L.) Sleumer (Gamboa and Bayman 2001; Rakotoniriana et al. 2008). In Catharanthus roseus, by contrast, the majority of fungal isolates originated from bark and stem (Dhayanithy et al. 2019). It is worth noting that the number and distribution of endophytic fungi vary vastly from study to study because of many influencing factors, such as the purpose of isolation, the surface sterilization method, the environment, and host-mediated factors (Gamboa and Bayman 2001; Rosa et al. 2010; Fernandes et al. 2015). In this work, the 14 identified strains belonged to a single division, Ascomycota, spanning 7 genera, namely Aspergillus, Daldinia, Fusarium, Neocosmospora, Neofusicoccum, Xylaria, and Penicillium. Notably, Aspergillus was the most frequently isolated genus, followed by Fusarium.
On the other hand, in previous studies the most common endophytes residing in plants were Fusarium and Penicillium, which might assist host plants in resisting abiotic and biotic stresses as well as attack by insects or pathogens (Maciá-Vicente et al. 2008; Rosa et al. 2010; Toghueo 2019). The less diverse endophyte community observed here might be due to the fact that T. chinensis contains various compounds active against fungal colonization, resulting in the dominance of the genus Aspergillus, which is highly adapted to a wide range of ecological niches and produces numerous bioactive compounds with diverse chemical structures and biological activities (El-hawary et al. 2020). To the best of our knowledge, this is the first effort to isolate and identify fungal endophytes from T. chinensis. The extracts of fungal strains isolated from T. chinensis also showed remarkable antimicrobial activities against a wide range of pathogens. Interestingly, all strains active against pathogens were identified as Aspergillus and Penicillium, confirming the antimicrobial properties of both genera reported previously (El-hawary et al. 2020). Aspergillus sp. ASCLA derived from Callistemon subulatus Cheel exhibited moderate to high activity against S. aureus, P. aeruginosa, and C. albicans (Kamel et al. 2020), while Penicillium cataractum SYPF 7131 from Ginkgo biloba L. was the most potent isolate, with strong activity against five bacterial pathogens (Wu et al. 2018). The most promising strain in this study was Penicillium sp. SDF4, which inhibited the growth of six pathogens with inhibition zones ranging from 14.2 ± 1.1 to 25.8 ± 0.6 mm. In addition, the quantification of total phenolic compounds and flavonoids supports the possibility that the antimicrobial effect of the strain SDF4 extract is related to the presence of phenolic and flavonoid compounds, which inactivate ribonucleotide reductase and thereby block bacterial DNA synthesis (Rasch 2002). Hence, Penicillium sp.
SDF4 may serve as a potential source of novel antibacterial compounds. Endophytic fungi from medicinal plants have stood out for promising cytotoxic activity associated with anticancer properties. In the present study, Penicillium sp. SDF4, Penicillium sp. SDF5, Aspergillus sp. SDF8, and Aspergillus sp. SDF17 were found promising because of their considerable inhibitory effects, with IC50 values ranging from 14.2 to 92.4 μg/mL against A549 and MCF7 cells. This is consistent with various studies demonstrating the cytotoxic activity of the genera Aspergillus and Penicillium (El-hawary et al. 2020; Hridoy et al. 2022). In terms of the co-evolution of host plants and endophytes, fungi have acquired individual genes or even gene clusters by horizontal gene transfer from host plants, which means that plant-derived compounds can be produced by fungal endophytes (Richards 2011; Hridoy et al. 2022). Positive hits for the key genes dbat and ts, responsible for paclitaxel biosynthesis, were detected in Aspergillus sp. SDF8, while Aspergillus sp. SDF17 contained the dbat and bapt genes. In contrast, only a single hit, dbat or bapt, was found in Xylaria sp. SDF3 and Penicillium sp. SDF4, respectively. Fungal paclitaxel has drawn particular attention from researchers in recent times because paclitaxel, used in chemotherapy for many solid tumors, has so far been derived only from the rare and endangered yew trees of the genus Taxus (Soliman and Raizada 2018). It is believed that dbat and bapt are the most important genes, since more than 10 enzymatic steps take place to synthesize paclitaxel after ts (Zhou et al. 2007; Kumar et al. 2019). In the genus Aspergillus, only A. fumigatus TPF-06, isolated from Taxus sp., has to date been reported to produce paclitaxel stably (Kumar et al. 2019). These results provide evidence for the presence of the paclitaxel biosynthetic pathway in fungal strains SDF8 and SDF17.
Using only the bapt gene, Guignardia mangiferae HAA11, Fusarium proliferatum HBA29, and Colletotrichum gloeosporioides TA67 isolated from Taxus x media were shown to produce paclitaxel (Xiong et al. 2013), suggesting that Xylaria sp. SDF3 and Penicillium sp. SDF4 should not be excluded when quantifying paclitaxel yield. Since paclitaxel has also been shown to be secreted by endophytes from other medicinal plants (Kumaran et al. 2009; Hridoy et al. 2022), exploiting the paclitaxel biosynthetic potential of the promising strains SDF3, SDF4, SDF8, and SDF17 at the phenotypic and genomic levels will be an interesting subject for future investigations. Moreover, fungal endophytes are also recognized as good sources of antioxidants, an alternative to plant extracts and synthetic antioxidants. Numerous reports have demonstrated antioxidant properties of fungi related to secondary metabolites such as polyphenols and flavonoids, plant-derived compounds with a variety of significant pharmacological activities (Kumar and Pandey 2013; Shahidi and Ambigaipalan 2015; Dhayanithy et al. 2019; Rahaman et al. 2020). Here, the extracts of the bioactive strains, subjected to four different in vitro antioxidant assays (DPPH, hydroxyl radical, superoxide radical, and reducing power), revealed good antioxidant activities. Similar findings have been reported for fungi recovered from medicinal plants including C. roseus, E. hirta, and Conyza blinii H.Lév (Dhayanithy et al. 2019; Tang et al. 2020; Gautam et al. 2022). The high DPPH and superoxide radical scavenging capacities observed in the fungal extracts may be attributed to their hydrogen-donating ability and superoxide dismutase-like properties, respectively, indicating potent, wide-ranging antioxidant activities. Aspergillus nidulans ST22 and Aspergillus oryzae SX10 isolated from Ginkgo biloba L. were found to be sources of phenolic and flavonoid compounds responsible for strong antioxidant activity (Qiu et al.
2010), indicating a strong correlation between total polyphenol and flavonoid contents and antioxidant activity across the fungal extracts. The high content of polyphenols and flavonoids also supports the hypothesis that phenolic and flavonoid compounds are likely responsible not only for the total antioxidant capacity but also for the mortality of cancer and microbial cells.

The present study provides new insights into culturable fungi isolated from T. chinensis, a native conifer listed in the IUCN Red List, and their potent biological activities, which may yield promising drug candidates. Here, we isolated 16 endophytic fungi from T. chinensis belonging to 7 known genera and one unidentified genus. Activity-based screening showed that, out of 16 ethyl acetate extracts, those of four fungal strains (Penicillium sp. SDF4, Penicillium sp. SDF5, Aspergillus sp. SDF8, and Aspergillus sp. SDF17) exhibited significant inhibitory effects against microbial pathogens, cancer cell lines, and free radicals. The biological activities obtained may be related to the high content of plant-derived compounds such as polyphenols and flavonoids. Surprisingly, these strains were predicted to produce paclitaxel based on the presence of key genes in its biosynthetic pathway. The study presented here addresses the absence of research on T. chinensis and highlights the potential applications of endophytic fungi in drug development, although further investigation at the molecular and mechanistic levels is required.

References

Aiello AS (2016) Tsuga chinensis Pinaceae. Curtis's Bot Mag 33(1):82–93 da Silva MHR, Cueva-Yesquén LG, Júnior SB, Garcia VL, Sartoratto A, de Angelis DF, de Angelis DA (2020) Endophytic fungi from Passiflora incarnata: an antioxidant compound source. Arch Microbiol 202(10):2779–2789.
https://doi.org/10.1007/s00203-020-02001-y Del Tredici P, Kitajima A (2004) Introduction and cultivation of Chinese hemlock (Tsuga chinensis) and its resistance to hemlock woolly adelgid (Adelges tsugae). Arboric J 30:282–286. https://doi.org/10.48044/jauf.2004.034 Dhayanithy G, Subban K, Chelliah J (2019) Diversity and biological activities of endophytic fungi associated with Catharanthus roseus. BMC Microbiol 19(1):22–22. https://doi.org/10.1186/s12866-019-1386-x El-hawary SS, Moawad AS, Bahr HS, Abdelmohsen UR, Mohammed R (2020) Natural product diversity from the endophytic fungi of the genus Aspergillus. RSC Adv 10(37):22058–22079. https://doi.org/10.1039/D0RA04290K Eniu A, Cherny NI, Bertram M, Thongprasert S, Douillard J-Y, Bricalli G, Vyas M, Trapani D (2019) Cancer medicines in Asia and Asia-Pacific: What is available, and is it effective enough? ESMO Open 4(4):e000483. https://doi.org/10.1136/esmoopen-2018-000483 Fernandes EG, Pereira OL, Silva CC, Bento CBP, Queiroz MV (2015) Diversity of endophytic fungi in Glycine max. Microbiol Res 181:84–92. https://doi.org/10.1016/j.micres.2015.05.010 Gamboa MA, Bayman P (2001) Communities of endophytic fungi in leaves of a tropical timber tree (Guarea guidonia: Meliaceae). Biotropica 33(2):352–360. https://doi.org/10.1111/j.1744-7429.2001.tb00187.x Gangadevi V, Muthumary J (2009) Taxol production by Pestalotiopsis terminaliae, an endophytic fungus of Terminalia arjuna (arjun tree). Biotechnol Appl Biochem 52(Pt 1):9–15. https://doi.org/10.1042/ba20070243 Gautam VS, Singh A, Kumari P, Nishad JH, Kumar J, Yadav M, Bharti R, Prajapati P, Kharwar RN (2022) Phenolic and flavonoid contents and antioxidant activity of an endophytic fungus Nigrospora sphaerica (EHL2), inhabiting the medicinal plant Euphorbia hirta (dudhi) L. Arch Microbiol 204(2):140. 
https://doi.org/10.1007/s00203-021-02650-7 Gonelimali FD, Lin J, Miao W, Xuan J, Charles F, Chen M, Hatab SR (2018) Antimicrobial properties and mechanism of action of some plant extracts against food pathogens and spoilage microorganisms. Front Microbiol 9:1639. https://doi.org/10.3389/fmicb.2018.01639 Griffiths K, Aggarwal BB, Singh RB, Buttar HS, Wilson D, De Meester F (2016) Food antioxidants and their anti-inflammatory properties: a potential role in cardiovascular diseases and cancer prevention. Diseases 4(3). https://doi.org/10.3390/diseases4030028 Hridoy M, Gorapi MZH, Noor S, Chowdhury NS, Rahman MM, Muscari I, Masia F, Adorisio S, Delfino DV, Mazid MA (2022) Putative anticancer compounds from plant-derived endophytic fungi: A review. Molecules 27(1):296 Kadaikunnan S, Rejiniemon T, Khaled JM, Alharbi NS, Mothana R (2015) In-vitro antibacterial, antifungal, antioxidant and functional properties of Bacillus amyloliquefaciens. Ann Clin Microbiol Antimicrob 14:9–9. https://doi.org/10.1186/s12941-015-0069-1 Kamel RA, Abdel-Razek AS, Hamed A, Ibrahim RR, Stammler HG, Frese M, Sewald N, Shaaban M (2020) Isoshamixanthone: a new pyrano xanthone from endophytic Aspergillus sp. ASCLA and absolute configuration of epiisoshamixanthone. Nat Prod Res 34(8):1080–1090. https://doi.org/10.1080/14786419.2018.1548458 Khalil AMA, Hassan SE-D, Alsharif SM, Eid AM, Ewais EE-D, Azab E, Gobouri AA, Elkelish A, Fouda A (2021) Isolation and characterization of fungal endophytes isolated from medicinal plant Ephedra pachyclada as plant growth-promoting. Biomolecules 11(2):140 Khiralla A, Mohamed IE, Tzanova T, Schohn H, Slezack-Deschaumes S, Hehn A, André P, Carre G, Spina R, Lobstein A, Yagi S, Laurain-Mattar D (2016) Endophytic fungi associated with Sudanese medicinal plants show cytotoxic and antibiotic potential. FEMS Microbiol Lett 363(11). 
https://doi.org/10.1093/femsle/fnw089 Kumar P, Singh B, Thakur V, Thakur A, Thakur N, Pandey D, Chand D (2019) Hyper-production of taxol from Aspergillus fumigatus, an endophytic fungus isolated from Taxus sp. of the Northern Himalayan region. Biotechnol Rep 24:e00395. https://doi.org/10.1016/j.btre.2019.e00395 Kumar S, Pandey AK (2013) Chemistry and biological activities of flavonoids: an overview. Sci World J 2013:162750–162750. https://doi.org/10.1155/2013/162750 Kumar S, Stecher G, Tamura K (2016) MEGA7: molecular evolutionary genetics analysis version 7.0 for bigger datasets. Mol Biol Evol 33(7):1870–1874. https://doi.org/10.1093/molbev/msw054 Kumaran RS, Muthumary J, Kim E-K, Hur B-K (2009) Production of taxol from Phyllosticta dioscoreae, a leaf spot fungus isolated from Hibiscus rosa-sinensis. Biotechnol Bioproc E 14(1):76. https://doi.org/10.1007/s12257-008-0041-4 Kuo J, Chang CF, Chi WC (2021) Isolation of endophytic fungi with antimicrobial activity from medicinal plant Zanthoxylum simulans Hance. Folia Microbiol 66(3):385–397. https://doi.org/10.1007/s12223-021-00854-4 Li G, Kusari S, Lamshöft M, Schüffler A, Laatsch H, Spiteller M (2014) Antibacterial secondary metabolites from an endophytic fungus, Eupenicillium sp. LG41. J Nat Prod 77(11):2335–2341. https://doi.org/10.1021/np500111w Li X (2012) Improved pyrogallol autoxidation method: a reliable and cheap superoxide-scavenging assay suitable for all antioxidants. J Agric Food Chem 60(25):6418–6424. https://doi.org/10.1021/jf204970r Liu Z, Ren Z, Zhang J, Chuang CC, Kandaswamy E, Zhou T, Zuo L (2018) Role of ROS and nutritional antioxidants in human diseases. Front Physiol 9:477. https://doi.org/10.3389/fphys.2018.00477 Maciá-Vicente JG, Jansson HB, Abdullah SK, Descals E, Salinas J, Lopez-Llorca LV (2008) Fungal root endophytes from natural vegetation in Mediterranean environments with special reference to Fusarium spp. FEMS Microbiol Ecol 64(1):90–105. 
https://doi.org/10.1111/j.1574-6941.2007.00443.x Ngo CC, Nguyen QH, Nguyen TH, Quach NT, Dudhagara P, Vu THN, Le TTX, Le TTH, Do TTH, Nguyen VD, Nguyen NT, Phi Q-T (2021) Identification of fungal community associated with deterioration of optical observation instruments of museums in northern Vietnam. Appl Sci 11(12):5351 Patil MP, Patil RH, Maheshwari VL (2015) Biological activities and identification of bioactive metabolite from endophytic Aspergillus flavus L7 isolated from Aegle marmelos. Curr Microbiol 71(1):39–48. https://doi.org/10.1007/s00284-015-0805-y Qiu M, Xie R-S, Shi Y, Zhang H, Chen H-M (2010) Isolation and identification of two flavonoid-producing endophytic fungi from Ginkgo biloba L. Ann Microbiol 60(1):143–150. https://doi.org/10.1007/s13213-010-0016-5 Rahaman MS, Siraj MA, Sultana S, Seidel V, Islam MA (2020) Molecular phylogenetics and biological potential of fungal endophytes from plants of the Sundarbans mangrove. Front Microbiol 11. https://doi.org/10.3389/fmicb.2020.570855 Rajoka MSR, Mehwish HM, Hayat HF, Hussain N, Sarwar S, Aslam H, Nadeem A, Shi J (2019) Characterization, the antioxidant and antimicrobial activity of exopolysaccharide isolated from poultry origin Lactobacilli. Probiotics Antimicrob Proteins 11(4):1132–1142. https://doi.org/10.1007/s12602-018-9494-8 Rakotoniriana EF, Munaut F, Decock C, Randriamampionona D, Andriambololoniaina M, Rakotomalala T, Rakotonirina EJ, Rabemanantsoa C, Cheuk K, Ratsimamanga SU, Mahillon J, El-Jaziri M, Quetin-Leclercq J, Corbisier AM (2008) Endophytic fungi from leaves of Centella asiatica: occurrence and potential interactions within leaves. Antonie Van Leeuwenhoek 93(1-2):27–36. https://doi.org/10.1007/s10482-007-9176-0 Rasch M (2002) The influence of temperature, salt and pH on the inhibitory effect of reuterin on Escherichia coli. Int J Food Microbiol 72(3):225–231. https://doi.org/10.1016/s0168-1605(01)00637-7 Richards TA (2011) Genome evolution: Horizontal movements in the fungi. 
Curr Biol 21(4):R166–R168. https://doi.org/10.1016/j.cub.2011.01.028 Rosa LH, Almeida Vieira Mde L, Santiago IF, Rosa CA (2010) Endophytic fungi community associated with the dicotyledonous plant Colobanthus quitensis (Kunth) Bartl. (Caryophyllaceae) in Antarctica. FEMS Microbiol Ecol 73(1):178–189. https://doi.org/10.1111/j.1574-6941.2010.00872.x Saad B, Sing YY, Nawi MA, Hashim N, Mohamed Ali AS, Saleh MI, Sulaiman SF, Talib KM, Ahmad K (2007) Determination of synthetic phenolic antioxidants in food items using reversed-phase HPLC. Food Chem 105(1):389–394. https://doi.org/10.1016/j.foodchem.2006.12.025 Samson RA, Peterson SW, Frisvad JC, Varga J (2011) New species in Aspergillus section Terrei. Stud Mycol 69:39–55. https://doi.org/10.3114/sim.2011.69.04 Schmeda-Hirschmann G, Hormazabal E, Astudillo L, Rodriguez J, Theoduloz C (2005) Secondary metabolites from endophytic fungi isolated from the Chilean gymnosperm Prumnopitys andina (Lleuque). World J Microbiol Biotechnol 21(1):27–32. https://doi.org/10.1007/s11274-004-1552-6 Shahidi F, Ambigaipalan P (2015) Phenolics and polyphenolics in foods, beverages and spices: antioxidant activity and health effects – a review. J Funct Foods 18:820–897. https://doi.org/10.1016/j.jff.2015.06.018 Skehan P, Storeng R, Scudiero D, Monks A, McMahon J, Vistica D, Warren JT, Bokesch H, Kenney S, Boyd MR (1990) New colorimetric cytotoxicity assay for anticancer-drug screening. J Natl Cancer Inst 82(13):1107–1112. https://doi.org/10.1093/jnci/82.13.1107 Soliman SSM, Raizada MN (2018) Darkness: a crucial factor in fungal taxol production. Front Microbiol 9. https://doi.org/10.3389/fmicb.2018.00353 Tang Z, Qin Y, Chen W, Zhao Z, Lin W, Xiao Y, Chen H, Liu Y, Chen H, Bu T, Li Q, Cai Y, Yao H, Wan Y (2021) Diversity, chemical constituents, and biological activities of endophytic fungi isolated from Ligusticum chuanxiong Hort. Front Microbiol 12. 
https://doi.org/10.3389/fmicb.2021.771000 Tang Z, Wang Y, Yang J, Xiao Y, Cai Y, Wan Y, Chen H, Yao H, Shan Z, Li C, Wang G (2020) Isolation and identification of flavonoid-producing endophytic fungi from medicinal plant Conyza blinii H.Lév that exhibit higher antioxidant and antibacterial activities. PeerJ 8:e8978. https://doi.org/10.7717/peerj.8978 Toghueo RMK (2019) Bioprospecting endophytic fungi from Fusarium genus as sources of bioactive metabolites. Mycology 11(1):1-21. https://doi.org/10.1080/21501203.2019.1645053. Wu Y-Y, Zhang T-Y, Zhang M-Y, Cheng J, Zhang Y-X (2018) An endophytic fungi of Ginkgo biloba L. produces antimicrobial metabolites as potential inhibitors of FtsZ of Staphylococcus aureus. Fitoterapia 128:265–271. https://doi.org/10.1016/j.fitote.2018.05.033 Xin XQ, Chen Y, Zhang H, Li Y, Yang MH, Kong LY (2019) Cytotoxic seco-cytochalasins from an endophytic Aspergillus sp. harbored in Pinellia ternata tubers. Fitoterapia 132:53–59. https://doi.org/10.1016/j.fitote.2018.11.010 Xiong Z-Q, Yang Y-Y, Zhao N, Wang Y (2013) Diversity of endophytic fungi and screening of fungal paclitaxel producer from Anglojap yew, Taxus x media. BMC Microbiol 13(1):71. https://doi.org/10.1186/1471-2180-13-71 Zhao J, Li C, Wang W, Zhao C, Luo M, Mu F, Fu Y, Zu Y, Yao M (2013) Hypocrea lixii, novel endophytic fungi producing anticancer agent cajanol, isolated from pigeon pea (Cajanus cajan [L.] Millsp.). J Appl Microbiol 115(1):102–113. https://doi.org/10.1111/jam.12195 Zhou X, Wang Z, Jiang K, Wei Y, Lin J, Sun X, Tang K (2007) Screening of taxol-producing endophytic fungi from Taxus chinensis var. mairei. Prikl Biokhim Mikrobiol 43(4):490–494 The authors would like to thank the support of VAST – Culture Collection of Microorganisms, Institute of Biotechnology, Vietnam Academy of Science and Technology (www.vccm.vast.vn). This study was financially supported by the Vietnam Academy of Science and Technology under Grant number TĐCNSH.05/20-22. 
Thi Hanh Nguyen Vu and Ngoc Son Pham contributed equally to this work.

Institute of Biotechnology, Vietnam Academy of Science and Technology, Hanoi, 100000, Vietnam: Thi Hanh Nguyen Vu, Ngoc Son Pham, Phuong Chi Le, Quynh Anh Pham, Ngoc Tung Quach, Van The Nguyen, Thi Thao Do, Hoang Ha Chu & Quyet Tien Phi

Graduate University of Science and Technology, Vietnam Academy of Science and Technology, Hanoi, 100000, Vietnam: Thi Hanh Nguyen Vu, Ngoc Son Pham, Ngoc Tung Quach, Thi Thao Do, Hoang Ha Chu & Quyet Tien Phi

THNV and NSP conceived of this study. PCL, QAP, VTN, TTD, and NTQ designed and performed the experiments. NSP and THNV supervised and implemented the statistical analysis. THNV, NTQ, and NSP wrote the manuscript. QTP and HHC improved the writing of the manuscript. The authors read and approved the final manuscript.

Correspondence to Quyet Tien Phi.

The participant has consented to the submission of this article to the journal. We confirm that the manuscript, or part of it, has neither been published nor is currently under consideration for publication. This work and the manuscript were approved by all co-authors.

Additional file 1: Fig. S1. Pictures of the whole tree (A), branch (B, C), and leaves (D) of T. chinensis (Franch.) Pritz collected at Ha Giang province, northern Vietnam. Fig. S2. Distribution of fungal strains affiliated with different plant organs (A) and genera (B) retrieved from T. chinensis (Franch.) Pritz. Fig. S3. The colony morphology of 16 endophytic fungi on PDA at 28°C isolated from T. chinensis (Franch.) Pritz.

Vu, T.H.N., Pham, N.S., Le, P.C. et al. Distribution, cytotoxicity, and antioxidant activity of fungal endophytes isolated from Tsuga chinensis (Franch.) Pritz. in Ha Giang province, Vietnam. Ann Microbiol 72, 36 (2022). https://doi.org/10.1186/s13213-022-01693-5

Keywords: Endophytic fungi; Tsuga chinensis
A statistical framework for analyzing deep mutational scanning data

Alan F. Rubin, Hannah Gelman, Nathan Lucas, Sandra M. Bajjalieh, Anthony T. Papenfuss, Terence P. Speed & Douglas M. Fowler

Genome Biology volume 18, Article number: 150 (2017)

A Correction to this article was published on 07 February 2018.

Deep mutational scanning is a widely used method for multiplex measurement of functional consequences of protein variants. We developed a new deep mutational scanning statistical model that generates error estimates for each measurement, capturing both sampling error and consistency between replicates. We apply our model to one novel and five published datasets comprising 243,732 variants and demonstrate its superiority in removing noisy variants and conducting hypothesis testing. Simulations show our model applies to scans based on cell growth or binding and handles common experimental errors. We implemented our model in Enrich2, software that can empower researchers analyzing deep mutational scanning data.

Exploring the relationship between sequence and function is fundamental to enhancing our understanding of biology, evolution, and genetically driven disease. Deep mutational scanning is a method that marries deep sequencing to selection among a large library of protein variants, measuring the functional consequences of hundreds of thousands of variants of a protein simultaneously. Deep mutational scanning has greatly enhanced our ability to probe the protein sequence-function relationship [1] and has become widely used [2]. For example, deep mutational scanning has been applied to comprehensive interpretation of variants found in disease-related human genes [3, 4], understanding protein evolution [5,6,7,8,9], and probing protein structure [10, 11], with many additional possibilities on the horizon [2]. In a deep mutational scan, a library of protein variants is first introduced into a model system [12].
Model systems that have been used in deep mutational scanning include phage, bacteria, yeast, and cultured mammalian cells. A selection is applied for protein function or another molecular property of interest, altering the frequency of each variant according to its functional capacity. Selections can be growth-based or implement physical separation of variants into bins, as in phage display or flow sorting of cells. Next, the frequency of each variant in each time point or bin is determined by using deep sequencing to count the number of times each variant appears. Here, the variable region is either directly sequenced using a single-end or paired-end strategy, or a short barcode that uniquely identifies each variant in the population is sequenced instead [12, 13]. Barcoding enables accurate assessment of variable regions longer than a single sequencing read [4, 13, 14]. Analysis of the change in each variant's frequency throughout the selection yields a score that estimates the variant's effect. Scoring the performance of individual variants is distinct from a related class of methods that quantify tolerance for change at each position in a target protein [15]. Those approaches enable a different set of biological inferences that we do not seek to address here. Guidelines for the design of deep mutational scanning experiments have been discussed elsewhere [12, 16,17,18]. Fundamental gaps remain in our ability to use deep mutational scanning data to accurately measure the effect of each variant because practitioners lack a unifying statistical framework within which to interpret their results. Existing methods are diverse in terms of their scoring function, statistical approach, and generalizability. Two established implementations of deep mutational scanning scoring methods, Enrich [19] and EMPIRIC [20], calculate variant scores based on the ratio of variant frequencies before and after selection. 
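The before/after ratio scoring used by tools such as Enrich and EMPIRIC can be illustrated in a few lines. This sketch shows the general idea (a log ratio of output to input counts, normalized to wild-type); the pseudocount is an illustrative guard, not either tool's exact formula:

```python
import math

def ratio_score(count_in, count_out, wt_in, wt_out, pseudocount=0.5):
    """Log enrichment ratio of a variant relative to wild-type.

    Frequencies within a time point share the same denominator, so the
    double ratio can be computed directly from raw counts; the
    pseudocount protects against zero counts.
    """
    variant_ratio = (count_out + pseudocount) / (count_in + pseudocount)
    wt_ratio = (wt_out + pseudocount) / (wt_in + pseudocount)
    return math.log2(variant_ratio / wt_ratio)

# A variant depleted relative to wild-type receives a negative score.
score = ratio_score(count_in=100, count_out=25, wt_in=1000, wt_out=1000)
```

Because the score depends on a ratio of counts, small input or output counts make it noisy, which is the sampling-error sensitivity discussed below.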
This type of ratio-based scoring has been used to quantify the effect of non-coding changes in promoters as well [21]. However, while intuitive and easy to calculate, ratio-based scores are highly sensitive to sampling error when frequencies are low. For experimental designs that sample from more than two time points to improve the resolution of changes in frequency, ratio-based scoring is insufficient, so a regression-based approach has been used instead [4, 16, 22, 23]. Both ratio and regression analyses can incorporate corrections for wild-type performance [8, 16, 19, 20, 24] or nonsense variants [20, 22], at the expense of restricting the method to protein-coding targets only. The lack of a common standard for calculating scores makes comparison between studies difficult, and existing bespoke methods are not applicable to the diverse array of experimental designs currently being used. Furthermore, no existing method quantifies the uncertainty surrounding each score, which limits the utility of the data. For example, one of the most compelling applications of deep mutational scanning is to annotate variants found in human genomes with the goal of empowering variant interpretation [4], where estimation of the uncertainty associated with each measurement in a common framework is crucial. At best, current approaches employ ad hoc filtering of putative low-quality scores, often using manually determined read-depth cutoffs. To address these limitations, we present Enrich2, an extensible and easy-to-use computational tool that implements a comprehensive statistical model for analyzing deep mutational scanning data. Enrich2 includes scoring methods applicable to deep mutational scans with any number of time points. Unlike existing methods, Enrich2 also estimates variant scores and standard errors that reflect both sampling error and consistency between replicates.
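One standard way to fold per-replicate scores and standard errors into a single estimate whose error reflects between-replicate consistency is an inverse-variance random-effects calculation in the style of DerSimonian and Laird. The sketch below illustrates that general technique, assuming each replicate supplies a score and standard error per variant; it is not necessarily Enrich2's exact model:

```python
def combine_replicates(scores, std_errors):
    """Combine per-replicate scores with an inverse-variance
    random-effects model (DerSimonian-Laird-style estimator).

    The combined standard error grows when replicates disagree,
    via the between-replicate variance component sigma2.
    """
    k = len(scores)
    if k < 2:
        return scores[0], std_errors[0]
    w = [1.0 / se ** 2 for se in std_errors]             # fixed-effect weights
    mean_fe = sum(wi * s for wi, s in zip(w, scores)) / sum(w)
    q = sum(wi * (s - mean_fe) ** 2 for wi, s in zip(w, scores))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    sigma2 = max(0.0, (q - (k - 1)) / c)                 # between-replicate variance
    w_re = [1.0 / (se ** 2 + sigma2) for se in std_errors]
    score = sum(wi * s for wi, s in zip(w_re, scores)) / sum(w_re)
    std_err = (1.0 / sum(w_re)) ** 0.5
    return score, std_err

# Discordant replicates inflate the combined standard error.
score, se = combine_replicates([1.0, 1.2, 0.9], [0.1, 0.1, 0.1])
```

When replicates agree, sigma2 collapses to zero and the combination reduces to an ordinary inverse-variance weighted mean.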
We explore Enrich2 performance using novel and published deep mutational scanning datasets comprising 243,732 variants in five target proteins, as well as simulated data. We demonstrate that Enrich2's scoring methods perform better than existing methods across multiple experimental designs. Enrich2 facilitates superior removal of noisy variants and improved detection of variants of small effect, and enables statistically rigorous comparisons between variants. Enrich2 is platform-independent and includes a graphical interface designed to be accessible to experimental biologists with minimal bioinformatics experience.

Overview of Enrich2 workflow

We distilled the common features of a deep mutational scan into a generalized workflow (Fig. 1). After the experiment, each FASTQ file is quality filtered and variants are counted. For directly sequenced libraries, this involves calling the variant for each read (see "Methods"). For barcoded libraries, barcode counts are assigned to variants using an additional file that describes the many-to-one barcode-to-variant relationship. Next, the counts for each variant are normalized and a score is calculated that quantifies the change in frequency of each variant in each selection. Finally, each variant's scores from replicate selections are combined into a single replicate score using a random-effects model. Variant standard errors are also calculated for each selection and replicate score, allowing the experimenter to remove noisy variants or perform hypothesis testing. Enrich2 is designed to enable users to implement other scoring functions, so long as they produce a score and a standard error. Thus, Enrich2 can serve as a framework for any counting-based enrichment/depletion experiment.

Fig. 1 Deep mutational scanning and Enrich2. In a deep mutational scan, a library of protein variants is subjected to selection, which perturbs the frequency of variants.
Samples of the library are collected before, during, and after selection and subjected to high-throughput sequencing (left panel). Enrich2 processes the high-throughput sequencing files generated from each sample. Sequencing reads are quality filtered and variants are counted by comparing each read to the wild-type sequence. Enrich2 estimates variant scores and standard errors using the variant counts and combines these estimates for replicates (middle panel). Enrich2 displays the scores and standard errors as a sequence-function map. A sequence-function map of eight positions of the hYAP65 WW domain is shown (right panel). Cell color indicates the score for the single amino acid change (row) at the given position in the mutagenized region (column). Positive scores (in red) indicate better than wild-type performance in the assay and negative scores (in blue) indicate worse than wild-type performance. Diagonal lines in each cell represent the standard error for the score and are scaled such that the highest standard error on the plot covers the entire diagonal. Standard errors that are less than 2% of this maximum value are not plotted. Cells containing circles have the wild-type amino acid at that position. Gray squares denote amino acid changes that were not measured in the assay.

Scoring a single selection using linear regression

For experimental designs with three or more time points, Enrich2 calculates a score for each variant using weighted linear least squares regression. These time points can be variably spaced, as in samples from a yeast selection withdrawn at different times, or they can be uniformly spaced to represent rounds or bins, as in successive rounds of a phage selection. This method assumes the selection pressure is relatively constant during the course of the selection. Each variant's score is defined as the slope of the regression line.
For each time point in the selection, including the input time point, we calculate a log ratio of the variant's frequency relative to the wild-type's frequency in the same time point and regress these values on time. Regression weights are calculated for each variant in each time point based on the Poisson variance of the variant's count (see "Methods"). We estimate a standard error for each score using the weighted mean square of the residuals about the fitted line. We calculate p values for each score using the z-distribution under the null hypothesis that the variant behaves like wild-type (i.e. has a slope of 0). A problem with linear regression-based scoring is that the wild-type frequency often changes non-linearly over time in an experiment-specific and selection-specific manner (Fig. 2). Some linear model-based approaches subtract the wild-type score from each variant's score [4, 22], ignoring this issue and potentially reducing score accuracy. A solution for this problem, which has been used extensively, is normalizing each variant's score to wild-type at each time point [16, 20, 25,26,27]. We implemented per-time point normalization and compared variant standard errors calculated with and without wild-type normalization for a total of 14 replicates in three different experiments: a phage selection for BRCA1 E3 ubiquitin ligase activity; a yeast two-hybrid selection for BRCA1-BARD1 binding; and a phage selection for E4B E3 ubiquitin ligase activity (Table 1). In all cases, wild-type normalization resulted in significantly smaller variant standard errors (p ≈ 0, binomial test, Additional file 1). Variants that remain non-linear after normalization are poorly fit by our regression model and have high standard errors. Thus, they can easily be identified for further examination or removal. Wild-type frequency can change non-linearly. 
The change in frequency of the wild-type over the course of replicate selections is shown for (a) BRCA1 E3 ubiquitin ligase, (b) BRCA1-BARD1 binding, and (c) E4B E3 ubiquitin ligase. Each colored line represents a single replicate Table 1 Datasets analyzed with Enrich2 Wild-type normalization is not always the best option. For example, some experimental designs do not have a wild-type sequence in the library, which precludes wild-type normalization. Furthermore, experiments subject to high levels of stochasticity arising from low read depth or limited sampling can benefit from normalization to the total number of reads rather than to wild-type [16]. Normalization to wild-type is also inappropriate in cases where the effect of the wild-type is incorrectly estimated or subject to high levels of error [16, 28]. To deal with these cases, Enrich2 also offers normalization using the number of reads instead of the wild-type count. Wild-type non-linearity is not the only problem in scoring a typical selection. Each time point has a different number of reads per variant and time points with low coverage are more affected by sampling error. An example of this issue is found in one of the replicate selections for BRCA1 E3 ubiquitin ligase activity (Fig. 3a). To address this problem, Enrich2 downweights time points with low counts per variant in the regression. Without weighted regression, the experimenter is forced to choose between three undesirable options: using the low coverage time point and adding noise to the measurements; removing the time point and complicating efforts to compare replicates; or spending time and resources to re-sequence the time point. Weighting avoids this trade-off, achieving lower variant standard errors as compared to ordinary regression (Fig. 3b). To show that this effect is general and not a feature of the specific BRCA1 replicate we analyzed, we downsampled reads from a single time point in the E4B E3 ubiquitin ligase dataset.
We find that weighted regression reduces the mean standard error regardless of the fraction of reads removed (Fig. 3c, d). Finally, we show that weighted regression improves reproducibility between replicates in the BRCA1 E3 ubiquitin ligase dataset, even in the absence of any filtering (Fig. 3e, f). A previously developed Bayesian MCMC approach could be used to generate a posterior variance, which would be of similar value to our standard errors [28]. However, this approach would be impracticably slow for tens of thousands of variants. Weighted least squares regression reduces standard error and improves replicate correlation. a The number of reads (shaded blue bars) and the distribution of variant regression weights (boxplots, solid green line is the median, dotted green line is the mean, box spans the first to third quartile, whiskers denote the data range) for each time point in a single BRCA1 E3 ubiquitin ligase selection is shown. Time points with fewer reads per variant are downweighted in the regression. The weights for later time points are lower on average because most variants decrease in frequency during the course of the selection. b A density plot of standard errors for all variants in the selection shown in (a) calculated using weighted least squares regression (blue line) or ordinary least squares regression (green line) is shown. The weighted least squares regression method returns lower standard errors using the same underlying data by minimizing the impact of sampling error in low read count time points. c The mean standard error of variants after randomly downsampling reads in a single time point in one of the E4B E3 ubiquitin ligase selections is shown. Mean standard errors for all variants at each read downsampling percentage were calculated using either weighted least squares regression (blue) or ordinary least squares regression (green). Error bars indicate the 95% confidence interval of five random downsampling trials at each percentage. 
d Read counts per time point in the selection described in (c) are shown. The lines on the bar for time point 2 correspond to the level of downsampling on the x-axis of (c). e, f Plots of variant scores in two replicate selections from the BRCA1 E3 ubiquitin ligase dataset are shown. Replicate agreement for scores calculated using the weighted least squares regression model (e) is higher than agreement for scores calculated using ordinary least squares regression (f). The dashed line shows the line of best fit for the replicate scores in each plot. Hex color indicates point density For experiments with only two sequenced populations or time points (e.g. "input" and "selected"), Enrich2 calculates the slope between the log ratios of the two time points, which is equivalent to frequently used ratio-based scoring methods [1, 19, 20, 24]. Unlike previous implementations of ratio-based scoring, we provide standard error estimates for each score using Poisson assumptions (see "Methods"). A random-effects model for scoring replicate selections Deep mutational scans are affected by various sources of error in addition to sampling error. One way to deal with this problem is to perform replicates. Usually, each variant's score is calculated by taking the mean across replicates, which ignores the distribution of replicate scores. Furthermore, if an error is calculated, it is derived only from the replicate scores' distribution and ignores any error associated with each replicate score. One alternative is to combine replicate scores using a fixed-effect model [29]. We examined this approach for the BRCA1 E3 ubiquitin ligase dataset (Fig. 4) and found that because variant scores can vary widely between replicates, this method dramatically underestimates the standard error of the combined variant score. We therefore implemented a random-effects model that estimates each variant's score based on the distribution of that variant's scores across all replicates.
This random-effects model also produces a standard error estimate for each variant that captures selection-specific error as well as error arising from the distribution of replicate scores (see "Methods"). A random-effects model for scoring replicate selections. Variant scores for 20 randomly selected variants from the BRCA1 E3 ubiquitin ligase dataset are shown. The replicate scores (green) were determined for each variant using Enrich2 weighted regression. Combined variant score estimates were determined using a fixed-effect model (orange) or the Enrich2 random-effects model (blue). In all cases, error bars show ±2 standard errors The random-effects model furnishes variant scores that are less sensitive to outlier replicates than a fixed-effect model (Fig. 4). Additionally, standard errors estimated by the random-effects model better reflect the distribution of replicate scores, providing a better basis for subsequent hypothesis testing. The same random-effects model can be used for experiments with any number of time points or replicates or with any Enrich2 scoring function (Additional file 2: Figure S1). A key advantage of this approach is that error is quantified on a per-variant basis, unlike the usual approach of comparing replicate selections using pairwise correlation [4, 15, 22]. This allows experimenters to use replicate data to make inferences about individual variants, rather than simply as a quality control check for whole experiments. Standard error-based variant filtering Per-variant standard error estimates enable the removal of variants with unreliable scores. This contrasts with previous filtering schemes, which employed an empirical cutoff for the minimum number of read counts for each variant in the input library or throughout the selection [1, 4, 12, 14, 30,31,32,33,34,35,36].
Read count cutoffs eliminate low-count variants that may be unreliably scored due to sampling error, but ignore other sources of noise and may introduce a bias against variants that become depleted after selection. Enrich2 retains low-count variants and enables the experimenter to determine which scores are reliable directly from the associated standard error. To assess whether standard error-based filtering performs better than read count-based filtering, we analyzed data from a deep mutational scan of the C2 domain of Phospholipase A2 (Table 1). Here, a library of 84,252 phage-displayed C2 domain variants was selected for lipid binding over several rounds. This dataset was un-analyzable using previous methods due to the apparent extreme variability between replicate selections. We compared filtering based on four different parameters: variant standard error calculated using the random-effects model or the fixed-effect model, read count in the input round, and total read count in all rounds of selection. To quantify filtering method performance, we took the top quartile of variants selected by each filtering method. Then, we calculated the pairwise Pearson correlation coefficient between variant scores for each possible pair of the three replicates in the C2 domain dataset (Fig. 5, Additional file 3). We found that filtering based on standard errors from the random-effects model was the only method that recovered a replicable subset of variants from this dataset. In fact, input count filtering selected a subset of variants whose scores were more poorly correlated than the unfiltered set. We performed a similar analysis on the higher-quality E4B, neuraminidase, and BRCA1 replicate datasets using the top three quartiles of variants. As for the C2 domain, we found that filtering based on random-effects standard error outperforms the other filtering methods (Additional file 3). 
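The core of the filtering comparison above can be sketched as follows. The helper names are hypothetical (not Enrich2 code), and the quartile cutoff and Pearson-based metric simply mirror the analysis described in the text.

```python
import numpy as np

def keep_top_quartile_by_se(standard_errors):
    """Boolean mask keeping the quartile of variants with the smallest
    standard errors (a sketch of the standard error-based filtering
    compared in the text)."""
    se = np.asarray(standard_errors, dtype=float)
    return se <= np.percentile(se, 25)

def replicate_r_squared(scores_a, scores_b, keep):
    """Squared Pearson correlation between two replicates' variant
    scores, restricted to the kept variants."""
    a = np.asarray(scores_a, dtype=float)[keep]
    b = np.asarray(scores_b, dtype=float)[keep]
    return np.corrcoef(a, b)[0, 1] ** 2
```

The same two helpers apply to any of the four filtering parameters by swapping in the relevant per-variant values (input counts, total counts, or either standard error estimate).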
For example, in the E4B dataset random-effects standard error filtering performed better (pairwise Pearson r² = 0.80) than fixed-effect standard error (r² = 0.59), input library count (r² = 0.58), or total count filtering (r² = 0.59). We note that any filtering strategy removes variants and reduces coverage. To explore how the stringency of variant filtering affects replicate correlation, we calculated replicate correlations after removing increasing numbers of variants according to each of the four filtering methods (Additional file 2: Figure S2). We found that filtering by standard errors from the random-effects model was the only approach that yielded high correlations between replicates for the C2 domain data. Furthermore, random-effects standard error filtering performed better at nearly all filtering stringencies in both the C2 domain and BRCA1 E3 datasets. Standard error-based filtering improves replicate correlation. Variant scores from two replicates of the C2 domain dataset are shown. Each panel plots the top quartile of variants selected by standard error from the random-effects model (leftmost column, blue points), standard error from the fixed-effect model (middle-left column, green points), input library count (middle-right column, orange points), or total count in all libraries (rightmost column, purple points). Scores and standard errors are calculated using only the input and final round of selection (top row) or using all three rounds (bottom row). The dashed line is the best linear fit and the Pearson correlation coefficient is shown To further demonstrate the utility of Enrich2 standard error-based filtering, we re-analyzed a deep mutational scan of the influenza virus neuraminidase gene (Table 1). In this experiment, 22 neuraminidase variants were individually validated and used to assess the quality of the deep mutational scanning data.
Of these individually validated variants, four had large variant score standard errors as determined by Enrich2's random-effects model (Fig. 6a, Additional file 2: Figure S3, Additional file 4). Removing these high-standard error variants improved the correlation between the deep mutational scanning scores and individual validation scores from Pearson r² = 0.81 to r² = 0.87. Removal of these variants also improved the correlation when scores were calculated as originally described in the study (Pearson r² = 0.80 versus r² = 0.84) (Additional file 2: Figure S3) [30]. This suggests that scores of variants with low Enrich2 standard errors are more likely to reflect the results of gold standard validation experiments and supports the use of standard error-based filtering for selecting candidate variants for follow-up studies. We note that in the neuraminidase experiment, the three replicates used a common starting library. This design fails to capture some artifacts, especially those introduced during cloning. Ideally, full biological replicates should be collected. Standard errors enable hypothesis testing. a Enrich2 variant scores are plotted against single-variant growth assay scores for the 22 individually validated variants of the neuraminidase dataset. Four (18%) of these variants have Enrich2 standard errors larger than the median standard error. The dotted line shows the best linear fit for all variants and the dashed line shows the best linear fit for variants with standard errors less than the median. b Enrich2 variant scores are plotted for selections performed in the presence or absence of the small molecule inhibitor oseltamivir. Colored points indicate variants that significantly outperformed wild-type in the drug's presence. Red points also scored significantly higher than wild-type in the drug's absence.
Triangles indicate the five "drug-adaptive" mutations identified originally [30] Standard error-based hypothesis testing An important challenge in analyzing deep mutational scanning data is determining whether a variant behaves differently from wild-type or differently under altered conditions. Enrich2 standard errors empower experimenters to perform statistical tests for such differences. By default, Enrich2 calculates raw p values for each score under the null hypothesis that the variant's score is indistinguishable from wild-type using a z-test. This allows the user to distinguish variants whose extreme scores are due to sampling error or other noise from those confidently estimated to be different from wild-type. We note that Enrich2 provides raw p values and users should correct for multiple testing using their preferred method. We can also use a z-test to determine whether variants have different functional consequences under altered experimental conditions. For example, deep mutational scans of the neuraminidase gene were conducted in the presence and absence of the small molecule neuraminidase inhibitor oseltamivir (Table 1). The original study identified five "drug-adaptive" variants, defined as those that outperformed wild-type in the presence of oseltamivir [30]. These five drug-adaptive variants included three known oseltamivir-resistant variants. In our reanalysis, we identified 22 drug-adaptive variants including all five variants found in the original study (Fig. 6b, Additional file 5). Fifteen of these 22 drug-adaptive variants also had a significantly higher score than wild-type in the absence of the inhibitor and therefore might be more likely to occur in natural virus populations. Our results agree broadly with the original analysis.
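The z-tests described here reduce to a few lines. A sketch (not Enrich2's exact implementation):

```python
from math import erfc, sqrt

def z_test(score_a, se_a, score_b=0.0, se_b=0.0):
    """Two-sided z-test for a difference between two scores.

    With the defaults this tests a variant against wild-type, whose
    score is 0 by construction; passing a second score and standard
    error instead tests for a condition-dependent effect (e.g. with
    versus without drug). A sketch of the approach in the text, not
    Enrich2's exact code.
    """
    z = (score_a - score_b) / sqrt(se_a ** 2 + se_b ** 2)
    p = erfc(abs(z) / sqrt(2.0))  # two-sided p value under the standard normal
    return z, p
```

As noted above, the resulting raw p values should still be corrected for multiple testing before calling variants significant.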
By using Enrich2 to calculate scores and standard errors for variants across replicates, we were able to identify additional candidate variants with small but statistically significant effects, some of which could be of biological interest. Of course, any new candidate variants could be false positives and they would need to be individually validated, as was done in the original study. Simulations of deep mutational scanning data Our analyses of experimental data suggest that Enrich2 is a useful tool for exploring and understanding deep mutational scanning datasets. In support of this, we generated simulated datasets with predetermined variant effects and compared mathematically predicted Enrich2 variant effect scores to scores calculated from simulated data. Using this approach, we demonstrate that the Enrich2 method can be applied to data from either cell growth or binding assays and can handle different types of noise. Deep mutational scanning datasets can be generated using different selection assays. Nearly all scans employ either cell growth assays or binding assays, which are typically conducted using phage or yeast display [12]. To demonstrate that the Enrich2 method can meaningfully assign variant scores for both assay types, we simulated data where each variant's true effect was predetermined (see "Methods"). In growth simulations, a variant's true effect was the growth rate of a cell carrying that variant; in binding simulations, a variant's true effect was the probability of a cell or phage carrying that variant progressing to the subsequent round of selection. In our simulations, each variant's true effect was drawn from a normal distribution with the wild-type true effect in the 75th percentile of the distribution, which is consistent with empirical datasets (Additional file 6). Each simulated dataset contained 10,000 unique variants including wild-type. 
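A toy version of this simulation setup can be sketched as follows. The effect-distribution parameters, the logistic mapping of effects to probabilities, the population size, and binomial sampling of survivors are illustrative assumptions not fixed by the text; read sampling is omitted for brevity.

```python
import numpy as np

def simulate_binding_selection(n_variants=10_000, n_rounds=5,
                               pop_size=1_000_000, seed=0):
    """Toy binding-selection simulation in the spirit described above.

    True effects are per-round selection probabilities; wild-type
    (index 0) is placed at the 75th percentile of the effect
    distribution, consistent with the text. Everything else here is
    an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    latent = rng.normal(size=n_variants)
    latent[0] = np.quantile(latent, 0.75)        # wild-type at the 75th percentile
    p_select = 1.0 / (1.0 + np.exp(-latent))     # squash effects into (0, 1)

    # Equal starting frequencies, multinomially sampled input population.
    counts = rng.multinomial(pop_size, np.full(n_variants, 1.0 / n_variants))
    rounds = [counts]
    for _ in range(n_rounds):
        counts = rng.binomial(counts, p_select)  # each particle survives w.p. p_v
        rounds.append(counts)
    return p_select, rounds
```

A growth simulation would differ only in the update step, multiplying counts by a variant-specific growth factor instead of thinning them binomially.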
For each selection, a starting variant population was independently generated and then five rounds of growth or binding selection were performed (see "Methods"). Five replicate selections were simulated for each dataset. Sequencing was simulated such that each variant had, on average, 200 reads. The resulting datasets were scored by Enrich2 using the weighted least squares regression method and replicates were combined using the random-effects model. We found that the Enrich2 scores are strongly correlated with predicted scores based on the true variant effects (r² = 0.995 for binding and r² = 0.992 for growth) (Fig. 7a). Thus, the Enrich2 method captures true variant effects for both growth-based and binding-based assays. We note that the relationship of these variant effects to a physical parameter of interest (e.g. \( K_d \) for binding) depends on the specific conditions of the experiment [37,38,39]. Variant scoring for growth and binding experiments using simulated data. a Enrich2 variant effect scores derived from simulated data are plotted against expected Enrich2 scores based on true variant effects in the simulation. Enrich2 accurately scores variants in both simulated binding assays (left) and growth assays (right). Shading indicates point density from low (blue) to high (white). b Noisy variants were generated by randomizing their true effect in one replicate selection (green line). Noisy variants have higher overall standard errors than other variants (dashed gray line) in both binding and growth assay simulations. c The percentage of variants removed at each standard error percentile cutoff (5% intervals) is plotted. Standard error filtering preferentially removes noisy variants (green points) We also simulated noisy data and evaluated Enrich2's ability to identify affected variants. One type of noise is inconsistent variant effects between replicates, which can arise from cloning errors or experimental variation.
We simulated datasets in which 2% of variants in each of the five biological replicates were randomly assigned a new true effect. As expected, noisy variants have higher standard errors than other variants (Fig. 7b) and standard error-based filtering is an effective tool for removing them (Fig. 7c). The magnitude of a noisy variant's standard error is proportional to the magnitude of the difference between the variant's original true effect and the resampled true effect (r² = 0.85 for binding and r² = 0.93 for growth; Additional file 2: Figure S4). Another type of noise arises from unexpected amplification or depletion of variant counts in a single time point, which can be due to polymerase chain reaction (PCR) jackpotting or other artifacts during the DNA isolation, amplification, and sequencing steps. We simulated datasets in which 10% of variants are over-represented or under-represented in a single time point. We found that the random-effects model accurately assigns scores to these amplified or depleted variants (Additional file 2: Figure S5A) and the affected variants are easily identified by their replicate standard errors (Additional file 2: Figure S5B). These results illustrate that the Enrich2 method is robust to common types of noise present in deep mutational scanning data. We developed a statistical framework for analyzing deep mutational scanning data that is applicable to many common experimental designs. We showed that our statistical method is superior to existing methods for removing noisy variants and detecting variants of small effect, enabling researchers to extract more from their datasets. We implemented our method in Enrich2, a computationally efficient graphical software package intended to improve access to deep mutational scanning for labs without data analysis experience. Enrich2 is extensible, so users can implement and easily share new scoring functions as new deep mutational scanning experimental designs are developed.
Enrich2 builds upon previous approaches to regression-based scoring, which we improved in two ways. First, per-time-point wild-type normalization helps reduce the effects of non-linear behavior under the assumption that many sources of non-linearity affect most variants similarly. Second, weighting each regression time point based on variant counts helps alleviate sampling error. In addition to these improvements, Enrich2 combines replicate selections into a single set of variant scores with standard errors to help identify variants that behave consistently in a given assay. Though variant score precision does not guarantee accuracy, we showed that removing variants with high standard errors from the neuraminidase dataset did improve the correlation between deep mutational scanning results and gold-standard measurements. Enrich2 furnishes generalized variant effect scores, which we showed are applicable to both growth-based and binding-based deep mutational scans. In the case of growth-based deep mutational scans, variant scores are linearly related to growth rate. In the case of binding-based deep mutational scans, variant scores are linearly related to the log of the likelihood of selection in each round. We note that the relationship between the likelihood of selection and variant binding affinity depends on experimental specifics including the number of molecules displayed per cell or phage, ligand concentration, and degree of non-specific binding [39]. Furthermore, the regression-based approach described here is designed for deep mutational scans with constant selection pressure. Selections conducted over longer timescales or selections in which the selection pressure is modulated by the experimenter may not be modeled accurately by our approach [8, 40, 41]. 
Specific scoring methods that take into account experimental details such as ligand concentration or variable selection pressure could easily be added to Enrich2, taking advantage of the program's existing read counting, variant calling, replicate combining, and visualization machinery. Enrich2 standard errors can also be used to conduct hypothesis tests comparing variants within a single experimental condition or between multiple conditions. When comparing variants between conditions, we assume that the distribution of scores between conditions is roughly similar, but this assumption does not hold in all cases. For example, the shape of the score distribution is a function of the strength of the selective pressure applied [8] and, more generally, the experimental conditions employed. Thus, Enrich2 standard errors should be used with caution when comparing variants between differing selections unless the variant scores are similarly distributed and the selection conditions are comparable. A general method for normalizing scores to facilitate comparisons across different conditions or selection pressures remains an important open question, as existing approaches are computationally intensive [28]. The use of deep mutational scanning is expanding rapidly and better tools for analysis will help it flourish. As with other widely used high-throughput experimental methods, a robustly implemented common statistical framework reduces barriers to entry, ensures data quality, and enables comparative analyses. We suggest that Enrich2 can help deep mutational scanning continue to grow by providing a foundation for meeting these challenges and facilitating further exploration and collaboration. Variant calling and sequence read handling Enrich2 implements alignment-free variant calling. 
Variant sequences are expected to have the same length and start point as the user-supplied wild-type sequence, which allows Enrich2 to compare each variant to the wild-type sequence in a computationally efficient manner. In addition to this alignment-free mode, an implementation of the Needleman-Wunsch global alignment algorithm [42] is included that will call insertion and deletion events. Enrich2 supports overlapping paired-end reads and single-end reads for direct variant sequencing, as well as barcode sequencing for barcode-tagged variants. Calculating enrichment scores For selections with at least three time points, we define T, which includes all time points, and T′, which includes all time points except the input (\( t_0 \)). The frequency of a variant (or barcode) v in time point t is the count of the variant in the time point (\( c_{v,t} \)) divided by the number of reads sequenced in the time point (\( N_t \)). $$ {f}_{v,t}=\frac{c_{v,t}}{N_t} $$ The change in frequency for a variant v in a non-input time point t ∊ T′ is the ratio of frequencies for t and the input. $$ {r}_{v,t}=\frac{f_{v,t}}{f_{v,0}} $$ Instead of using this raw change in variant frequency, we divide each variant's ratio by the wild-type (wt) variant's ratio. $$ \frac{r_{v,t}}{r_{wt,t}}=\frac{c_{v,t}{c}_{wt,0}}{c_{v,0}{c}_{wt,t}} $$ Because the library size terms (\( N_t \) and \( N_0 \)) in the frequencies cancel out, the ratio of ratios is not dependent on other non-wild-type variants in the selection. In practice, we add \( \frac{1}{2} \) to each count to assist with very small counts [43] and take the natural log of this ratio of ratios.
$$ {L}_{v,t}= \log \left(\frac{\left({c}_{v,t}+\frac{1}{2}\right)\left({c}_{wt,0}+\frac{1}{2}\right)}{\left({c}_{v,0}+\frac{1}{2}\right)\left({c}_{wt,t}+\frac{1}{2}\right)}\right) $$ This equation can be rewritten as $$ {L}_{v,t}= \log \left(\frac{c_{v,t}+\frac{1}{2}}{c_{wt,t}+\frac{1}{2}}\right)- \log \left(\frac{c_{v,0}+\frac{1}{2}}{c_{wt,0}+\frac{1}{2}}\right) $$ If we were to regress \( L_{v,t} \) on t ∊ T′, we note that the second term is shared between all the time points and therefore only affects the intercept of the regression line. We do not use the intercept in the score, so instead we regress on \( M_{v,t} \) and use all values of t ∊ T. $$ {M}_{v,t}= \log \left(\frac{c_{v,t}+\frac{1}{2}}{c_{wt,t}+\frac{1}{2}}\right) $$ The score is defined as the slope of the regression line, \( {\widehat{\beta}}_v \). In practice, we regress on \( \frac{t}{ \max T} \) to facilitate comparisons between selections with different magnitudes of time points (e.g. 0/1/2/3 rounds versus 0/24/48/72 hours). To account for unequal information content across time points with variable sequencing coverage, we perform weighted linear least squares regression [44]. The regression weight for \( M_{v,t} \) is \( V_{v,t}^{-1} \), where \( V_{v,t} \) is the variance of \( M_{v,t} \) based on Poisson assumptions [43] and is approximately $$ {V}_{v,t}=\frac{1}{c_{v,t}+\frac{1}{2}}+\frac{1}{c_{wt,t}+\frac{1}{2}} $$ For selections with only two time points (e.g. input and selected), we use the slope of the line connecting the two points as the score. This is equivalent to the wild-type adjusted log ratio (\( L_v \)) derived similarly to \( L_{v,t} \) above. $$ {L}_v= \log \left(\frac{c_{v,sel}+\frac{1}{2}}{c_{wt,sel}+\frac{1}{2}}\right)- \log \left(\frac{c_{v,inp}+\frac{1}{2}}{c_{wt,inp}+\frac{1}{2}}\right) $$ As there is no residual error about the fitted line, we must use a different method to estimate the standard error. We calculate a standard error (\( SE_v \)) for the enrichment score \( L_v \) under Poisson assumptions [24, 43].
$$ S{E}_v=\sqrt{\frac{1}{c_{\mathit{v,inp}}+\frac{1}{2}}+\frac{1}{c_{\mathit{wt,inp}}+\frac{1}{2}}+\frac{1}{c_{\mathit{v,sel}}+\frac{1}{2}}+\frac{1}{c_{\mathit{wt,sel}}+\frac{1}{2}}} $$ For experiments with no wild-type sequence, scores can be calculated using the filtered library size for each time point t, which is defined as the sum of counts at time t for variants that are present in all time points. Combining replicate scores To account for replicate heterogeneity, we use a simple meta-analysis model with a single random effect to combine scores from each of the n replicate selections into a single score for each variant. Each variant's score is calculated independently. Enrich2 computes the restricted maximum likelihood estimates for the variant score (\( \widehat{\beta} \)) and standard error (\( {\widehat{\sigma}}_s \)) using Fisher scoring iterations [45]. Given the replicate scores (\( {\widehat{\beta}}_i \)) and estimated standard errors (\( {\widehat{\sigma}}_i \)) where i = 1, 2, …, n, the estimate for \( \widehat{\beta} \) at each iteration is the weighted average: $$ \widehat{\beta}=\frac{{\displaystyle {\sum}_{i=1}^n}{\widehat{\beta}}_i{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-1}}{{\displaystyle {\sum}_{i=1}^n}{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-1}} $$ The starting value for \( {\widehat{\sigma}}_s^2 \) at the first iteration is: $$ {\widehat{\sigma}}_s^2=\frac{1}{n-1}{\displaystyle \sum_{i=1}^n}{\left({\widehat{\beta}}_i-\overline{\widehat{\beta}}\right)}^2 $$ Enrich2 calculates the following fixed-point solution for \( {\widehat{\sigma}}_{s+1}^2 \): $$ {\widehat{\sigma}}_{s+1}^2={\widehat{\sigma}}_s^2\frac{{\displaystyle {\sum}_{i=1}^n}{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-2}{\left({\widehat{\beta}}_i-\widehat{\beta}\right)}^2}{{\displaystyle {\sum}_{i=1}^n}{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-1}-\frac{{\displaystyle 
{\sum}_{i=1}^n}{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-2}}{{\displaystyle {\sum}_{i=1}^n}{\left({\widehat{\sigma}}_s^2+{\widehat{\sigma}}_i^2\right)}^{-1}}} $$ Because it is more computationally efficient to perform a fixed number of iterations for all variant scores in parallel than to test for convergence of each variant, Enrich2 performs 50 Fisher scoring iterations. In practice, this is more than sufficient for \( {\widehat{\sigma}}_s^2 \) to converge. We record the difference \( {\varepsilon}_s={\widehat{\sigma}}_s^2-{\widehat{\sigma}}_{s-1}^2 \) for the final iteration and identify any variants with high values of \( {\varepsilon}_s \) as variants that failed to converge. No such variants were encountered in the analyses detailed here. For the fixed-effect model [29], we calculate the variant score (\( {\widehat{\beta}}^{\prime} \)) and standard error (\( {\widehat{\sigma}}_s^{\prime} \)) using a weighted average of the replicate scores (\( {\widehat{\beta}}_i \)), where the weight for each score is the inverse of that replicate's variance (\( {\widehat{\sigma}}_i^2 \)). The standard error of the variant \( {\widehat{\sigma}}_s^{\prime} \) is: $$ {\widehat{\sigma}}_s^{\prime}=\sqrt{\frac{1}{{\displaystyle {\sum}_{i=1}^n}{\widehat{\sigma}}_i^{-2}}} $$ The fixed-effect model was used for comparison purposes only and is not implemented in the Enrich2 software. Derivation of predicted scores The behavior of a variant v in a simulated binding experiment (e.g. phage display, yeast display) can be described in terms of the displaying entity's likelihood of being selected in a given round [39, 46]. This likelihood is related to the binding affinity of each variant and, by extension, the binding probability of an individual protein molecule under the experimental conditions.
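Returning briefly to the replicate-combination procedure above, the fixed-point iteration can be sketched as follows. This is our own simplified illustration, not the Enrich2 source; the returned standard error here is the conventional meta-analytic choice, the square root of the inverse sum of weights:

```python
import numpy as np

def combine_replicates(betas, ses, n_iter=50):
    """Random-effects combination of replicate scores.

    betas : replicate scores (beta_hat_i)
    ses   : replicate standard errors (sigma_hat_i)
    Returns the combined score, the between-replicate variance, and the
    meta-analytic standard error sqrt(1 / sum of weights).
    """
    betas = np.asarray(betas, dtype=float)
    var_i = np.asarray(ses, dtype=float) ** 2
    # starting value: sample variance of the replicate scores
    sigma2_s = np.sum((betas - betas.mean()) ** 2) / (len(betas) - 1)
    for _ in range(n_iter):
        w = 1.0 / (sigma2_s + var_i)
        beta_hat = np.sum(w * betas) / np.sum(w)
        # fixed-point update for the between-replicate variance
        num = np.sum(w ** 2 * (betas - beta_hat) ** 2)
        den = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        sigma2_s = sigma2_s * num / den
    w = 1.0 / (sigma2_s + var_i)
    beta_hat = np.sum(w * betas) / np.sum(w)
    return beta_hat, sigma2_s, np.sqrt(1.0 / np.sum(w))

beta_hat, s2, se = combine_replicates([1.0, 1.2, 0.9], [0.1, 0.1, 0.1])
```

With equal replicate standard errors the combined score is simply the mean of the replicate scores, and the between-replicate variance converges to the scatter not explained by the within-replicate variances. Returning to the binding-experiment derivation: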
The relationship between variant binding affinity, monomer binding probability, and likelihood of selection will depend on the specifics of the experiment, such as the number of molecules displayed per cell or phage, ligand concentration, and non-specific binding [39]. Each round of selection is a time point \( t \) in the analysis, so we can assign each variant a probability of being selected in a given time point (\( p_{v,t} \)). We assume that \( p_{v,t}={p}_{v,0}={p}_v \) (i.e. that the probability is constant throughout the selection) and that any grow-out or amplification steps are uniform across all variants. The initial variant population is determined by the variant population frequencies (\( {f}_{v,0}^{\prime} \)) and the size of the starting population (\( {N}_0^{\prime } \)). $$ {c}_{v,0}^{\prime }={f}_{v,0}^{\prime }{N}_0^{\prime } $$ We note that \( {c}_{v,t}^{\prime} \), \( {f}_{v,t}^{\prime} \), and \( {N}_t^{\prime } \) refer to the variant population itself, in contrast to the previously defined \( c_{v,t} \), \( f_{v,t} \), and \( N_t \), which refer to sequence reads derived from the variant population. We define \( a_t \) as a factor describing growth between round \( t \) and the previous round (\( a_0=1 \)). We assume that \( a_t \) is the same for all variants. The count for a variant in time point \( t+1 \) in terms of the count in time point \( t \) is: $$ {c}_{v,t+1}^{\prime }={a}_{t+1}{p}_v{c}_{v,t}^{\prime } $$ Therefore, the count for a variant in time point \( t \) given the starting count is: $$ {c}_{v,t}^{\prime}={c}_{v,0}^{\prime}{\displaystyle \prod_{j=1}^t}{a}_j{p}_v={f}_{v,0}^{\prime}{N}_0^{\prime}{p}_v^t{\displaystyle \prod_{j=1}^t}{a}_j $$ We can write the ratio of variant counts in these terms and define the log ratio for binding experiments (\( M_{v,t}^{\prime} \)).
$$ \frac{c_{v,t}^{\prime}}{c_{wt,t}^{\prime}}=\frac{f_{v,0}^{\prime}{N}_0^{\prime}{p}_v^t{\displaystyle {\prod}_{j=1}^t}{a}_j}{f_{wt,0}^{\prime}{N}_0^{\prime}{p}_{wt}^t{\displaystyle {\prod}_{j=1}^t}{a}_j}=\frac{f_{v,0}^{\prime}{p}_v^t}{f_{wt,0}^{\prime}{p}_{wt}^t} $$ $$ {M}_{v,t}^{\prime}= \log \left(\frac{c_{v,t}^{\prime}}{c_{wt,t}^{\prime}}\right)=t\cdot \log \left(\frac{p_v}{p_{wt}}\right)+ \log \left(\frac{f_{v,0}^{\prime}}{f_{wt,0}^{\prime}}\right) $$ If we substitute \( t \) for \( {t}^{\prime }=\frac{t}{ \max T} \), we find that the expected score for binding experiments under the regression scoring model (\( {\beta}_v^{\prime } \)) should be related to the variant selection probability (\( p_v \)) by: $$ {\beta}_v^{\prime }=\left( \max T\right) \log \left(\frac{p_v}{p_{wt}}\right) $$ The behavior of a variant \( v \) in a simulated growth experiment can be described by the growth rate at time \( t \) (\( \mu_v(t) \)). Unlike in the round-based binding experiment case, time in growth experiments is modeled as continuous. We assume that \( \mu_v(t)=\mu_v(0)=\mu_v \) (i.e. that the growth rate is constant throughout the selection) and that any amplification steps are uniform across all variants. This derivation is based on [16, 18]. In interference-free growth, the growth of individual variants can be described by the first-order equation: $$ \frac{d{c}_v^{\prime}}{dt}={\mu}_v{c}_v^{\prime}(t) $$ Therefore, the count for a variant at time \( t \) given the starting count is: $$ {c}_v^{\prime}(t)={c}_{v,0}^{\prime}{e}^{\mu_vt}={f}_{v,0}^{\prime}{N}_0^{\prime}{e}^{\mu_vt} $$ We can write the ratio of variant counts in these terms and construct the continuous function \( {M}_v^{\prime\prime}(t) \).
$$ \frac{c_v^{\prime}(t)}{c_{wt}^{\prime}(t)} = \frac{N_0^{\prime}{f}_{v,0}^{\prime}{e}^{\mu_vt}}{N_0^{\prime}{f}_{wt,0}^{\prime}{e}^{\mu_{wt}t}} = \frac{f_{v,0}^{\prime}}{f_{wt,0}^{\prime}}{e}^{\left({\mu}_v-{\mu}_{wt}\right)t} $$ $$ {M}_v^{{\prime\prime} }(t)= \log \left(\frac{c_v^{\prime}(t)}{c_{wt}^{\prime}(t)}\right)=\left({\mu}_v-{\mu}_{wt}\right)t+ \log \left(\frac{f_{v,0}^{\prime}}{f_{wt,0}^{\prime}}\right) $$ We convert to the discrete function \( M_{v,t}^{\prime\prime} \) for convenience by assuming that \( m \) time points are sampled at constant intervals, determined by the number of wild-type doublings (\( \delta \)) per time point, such that \( \max T=m\delta \). We then find that the expected score for growth experiments under the regression scoring model (\( {\beta}_v^{\prime\prime} \)) should be related to the growth rate (\( \mu_v \)) by: $$ {\beta}_v^{{\prime\prime} }=m\delta \left({\mu}_v-{\mu}_{wt}\right) $$ Generation of simulated datasets Simulated datasets contain 10,000 unique variants (including wild-type), each characterized by a true variant effect: the probability of selection in each round (\( p_v \)) for binding simulations or the growth rate (\( \mu_v \)) for growth simulations. We assume that the variant effect distribution is normal and set the wild-type effect to \( p_{wt}=0.5 \) and \( \mu_{wt}=1 \). We place the wild-type effect at the 75th percentile of the distribution and set the standard deviation to 0.1. We draw 9999 variants from this distribution, with \( 0.05<p_v<0.99 \) and \( 0.05<\mu_v<5 \). In each case, the population size is 10 million, with a starting wild-type frequency of 1%. Starting counts for each variant are simulated using a log-normal distribution of variant counts in the input time point such that the mean variant input count is 990 and the standard deviation of the distribution is 0.4 [16, 47]. Starting counts are independently generated for each replicate. For each replicate, the starting population undergoes five rounds of selection.
The count of each variant after binding (\( k_{v,t} \)) is generated using a binomial distribution with parameters \( n={c}_{v,t-1}^{\prime} \) and \( p={p}_v \). The count of each variant after growth (\( g_{v,t} \)) is generated using a negative binomial distribution with parameters \( r={c}_{v,t-1}^{\prime} \) and \( p={e}^{-{\mu}_v\Delta t} \), where \( \Delta t=\frac{\delta \ln 2}{\mu_{wt}} \). For these simulations, \( \delta=2 \). The population count for each variant (\( {c}_{v,t}^{\prime} \)) is obtained by performing weighted random sampling with replacement, where the weight for each variant is proportional to \( k_{v,t} \) or \( g_{v,t} \) and the total population size is 10 million. Read counts for each variant (\( c_{v,t} \)) are simulated by performing weighted random sampling with replacement, where the weight for each variant is proportional to the population counts (\( {c}_{v,t}^{\prime} \)) and the average sequencing depth is 200 reads per variant (approximately 2 million reads per time point). We simulate replicate noise by drawing a new variant effect from the variant effect distribution for 10% of variants (not including wild-type). These noisy variants are randomly chosen; the new variant effect is used to simulate one replicate while the other four replicates use the original effect. Noisy effects are split uniformly between the five replicates, such that 2% of the variants in each replicate are affected. We simulate time point amplification and depletion noise by multiplying or dividing \( {c}_{v,t}^{\prime} \) by 50 before performing the sampling step to obtain \( c_{v,t} \). We randomly choose 10% of variants to be affected by noise, 5% subject to amplification and 5% subject to depletion, split uniformly among the five replicates. For each noisy variant in the chosen replicate, one time point (including input) is randomly chosen for amplification or depletion. Python code for generating these simulated datasets is available as simdms v0.1 (DOI: 10.5281/zenodo.546311).
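The binding-style sampling scheme above can be sketched for a single replicate. This is a toy version for illustration only: it uses a Poisson approximation in place of exact weighted resampling, and the parameter values are arbitrary rather than those of simdms:

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants, pop_size, rounds = 1000, 1_000_000, 5
# selection probabilities: normal variant-effect distribution, truncated
p = np.clip(rng.normal(0.4, 0.1, n_variants), 0.05, 0.99)
p[0] = 0.5  # index 0 plays the role of wild type

# roughly equal starting counts
history = [rng.poisson(pop_size / n_variants, n_variants)]
for _ in range(rounds):
    survivors = rng.binomial(history[-1], p)  # binding selection
    freqs = survivors / survivors.sum()
    # amplification back to a fixed population size (Poisson approximation)
    history.append(rng.poisson(pop_size * freqs))

# the fittest variant enriches relative to wild type over the rounds
j = int(np.argmax(p))
ratio_start = history[0][j] / history[0][0]
ratio_end = history[-1][j] / history[-1][0]
```

Sequencing-read counts would then be drawn from each round's population in the same way, at a much lower depth.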
Deep mutational scan of Phospholipase A2 A region proximal to both lipid binding sites of the C2 domain of Phospholipase A2 (PLA2) was targeted for deep mutational scanning. Positions 94–97 of the C2 domain of mouse PLA2-alpha (ANYV) were fully randomized using a doped synthetic oligonucleotide. The library of C2 subdomains containing mutations was cloned into the AvrII and PpuMI sites of wild-type C2 domain in pGEM. The library was subcloned into phage arms and expressed on the surface of bacteriophage using the T7 phage display system according to the manufacturer's instructions (Novagen T7Select 10-3b). The library was amplified in BLT5403 E. coli and variants were selected for their ability to bind to a lipid mixture containing ceramide 1-phosphate (C1P) [48]. The mouse PLA2-alpha cDNA was a generous gift from Michael Gelb, University of Washington. Ni Sepharose Excel resin (capacity 10 mg/mL) was purchased from GE. Other reagents were purchased from Thermo-Fisher. To select for C1P binding, lipid nanodiscs were developed as a bilayer affinity matrix. The His6-tagged membrane scaffold protein MSP1D1 [49] was expressed in BL21 E. coli from a pET28a plasmid and purified on nickel resin, then used to generate lipid nanodiscs comprising 30 mol% phosphatidylcholine, 20 mol% phosphatidylserine, 40 mol% phosphatidylethanolamine, and 10 mol% C1P [50]. To separate nanodiscs from large lipid aggregates and free protein, the mixture was subjected to gel filtration using a Superose 6 10/300 GL column (Pharmacia) and the major peak following the void volume was collected. To generate the affinity resin, 70 μg of nanodiscs (quantified by protein content) was incubated overnight at 4 °C with 10 μL nickel resin in 20 mM Tris pH 7.5 and 100 mM NaCl. The resin was washed twice in the same solution and used in phage binding reactions.
Phage expressing the C2 domain variant library were titered and diluted to a concentration of \( 5\times {10}^9 \) pfu/mL in 20 mM Tris pH 7.5 and 100 mM NaCl, then incubated with lipid nanodisc affinity resin plus 10 μM calcium in a final volume of 350 μL. After a 2-hour incubation at 4 °C, the resin was washed four times in 1 mL of the incubation buffer containing 20 mM imidazole. Phage bound to nanodiscs were eluted with 20 mM Tris pH 7.5 containing 500 mM imidazole. Phage from the elution were titered, amplified, and subjected to additional rounds of selection. Three replicate selections were performed on different days using the same input phage library. Sequencing libraries were prepared by PCR amplifying the variable region using primers that append Illumina cluster generating and index sequences (Additional file 7) before sequencing using the Illumina NextSeq platform with a NextSeq high output kit (75 cycles, FC-404-1005). Reads were demultiplexed using bcl2fastq v2.17 (Illumina) with the arguments bcl2fastq --with-failed-reads --create-fastq-for-index-reads --no-lane-splitting --minimum-trimmed-read-length 0 --mask-short-adapter-reads 0. Quality was assessed using FastQC v0.11.3 [51]. Demultiplexed reads are available in the NCBI Sequence Read Archive, BioProject Accession PRJNA344387. Neuraminidase data analysis Raw reads were demultiplexed using a custom script based on three-nucleotide barcodes provided by the original authors [30]. The reads were analyzed in Enrich2 v1.0.0 as ten experimental conditions: five non-overlapping 30-base regions of the neuraminidase gene in either the presence or absence of oseltamivir. Reads were required to have a minimum quality score of 23 at all positions and contain no Ns. The five mutagenized regions were scored independently and then merged to create a single set of variant scores for each treatment.
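The per-position quality filter described above can be sketched as a stand-alone function. This is a simplified illustration of the filtering rule, not the Enrich2 code path:

```python
def phred_scores(qual_string, offset=33):
    """Decode a FASTQ quality string (Sanger / Illumina 1.8+ encoding)."""
    return [ord(ch) - offset for ch in qual_string]

def passes_filter(seq, quals, min_q=23):
    """True if the read contains no Ns and every base meets min_q."""
    return "N" not in seq and all(q >= min_q for q in quals)

assert passes_filter("ACGT", phred_scores("IIII"))      # 'I' encodes Phred 40
assert not passes_filter("ACGT", phred_scores("II#I"))  # '#' encodes Phred 2
assert not passes_filter("ACNT", phred_scores("IIII"))  # an N anywhere fails
```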
To be consistent with the original study, we removed variants containing multiple nucleotide changes, with the exception of p.Ile117Ter and p.Thr226Trp, which were individually validated. The p values for comparing variant scores to wild-type in each treatment and comparing variant scores between treatments were calculated using a z-test. All three sets of p values were jointly corrected for multiple testing using the qvalue package in R [52], and variants with a q value of less than 0.05 were reported as significant. Analysis of other datasets For previously published datasets, raw sequence files in FASTQ format were obtained from the respective authors. Datasets (Table 1) were analyzed independently using Enrich2 v1.0.0. The BRCA1 dataset was analyzed in a single run with separate experimental conditions for the yeast two-hybrid and phage display assays. For all datasets except neuraminidase, reads were required to have a minimum quality score of 20 at all positions and contain no Ns. For the WW domain sequence function map (Fig. 1), scores and standard errors were calculated using weighted least squares linear regression in two technical replicates and the replicates were combined using the random-effects model as described. Enrich2 software implementation Enrich2 is implemented in Python 2.7 and requires common dependencies for scientific Python. The graphical user interface is implemented using Tkinter. A deep mutational scanning experiment is represented as a tree of objects with four levels: experiment; condition; selection; and sequencing library. Each object's data and metadata are stored in a single HDF5 file, including intermediate values calculated during analysis. Enrich2 is designed to be run locally on a laptop computer and does not require a high-performance computing environment. Most analyses can be run overnight (Table 1). Run times in Table 1 were measured using a MacBook Pro Retina with a 2.8 GHz Intel Core i7 processor and 16 GB of RAM.
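The z-test used for these comparisons can be sketched as follows. This is a minimal helper of our own; it assumes the two scores are independent with the stated standard errors:

```python
import math

def z_test(score_a, se_a, score_b, se_b):
    """Two-sided z-test for a difference between two scores."""
    z = (score_a - score_b) / math.sqrt(se_a ** 2 + se_b ** 2)
    # two-sided p value from the standard normal survival function
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# e.g. a variant score of 1.5 (SE 0.4) versus a wild-type score of 0.3 (SE 0.3)
z, p = z_test(1.5, 0.4, 0.3, 0.3)
```

The resulting p values would then be corrected for multiple testing, as described above for the qvalue package.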
After publication of our article [1] it was brought to our attention that a line of code was missing from our program to combine the within-replicate variance and between-replicate variance. This led to an overestimation of the standard errors calculated using the Enrich2 random-effects model. Fowler DM, Araya CL, Fleishman SJ, Kellogg EH, Stephany JJ, Baker D, et al. High-resolution mapping of protein sequence-function relationships. Nat Methods. 2010;7:741–6. Fowler DM, Fields S. Deep mutational scanning: a new style of protein science. Nat Methods. 2014;11:801–7. Majithia AR, Tsuda B, Agostini M, Gnanapradeepan K, Rice R, Peloso G, et al. Prospective functional classification of all possible missense variants in PPARG. Nat Genet. 2016;48:1570–5. Starita LM, Young DL, Islam M, Kitzman JO, Gullingsrud J, Hause RJ, et al. Massively Parallel Functional Analysis of BRCA1 RING Domain Variants. Genetics. 2015;200:413–22. Bank C, Hietpas RT, Jensen JD, Bolon DNA. A systematic survey of an intragenic epistatic landscape. Mol Biol Evol. 2015;32:229–38. Podgornaia AI, Laub MT. Protein evolution. Pervasive degeneracy and epistasis in a protein-protein interface. Science. 2015;347:673–7. Rockah-Shmuel L, Tóth-Petróczy Á, Tawfik DS. Systematic mapping of protein mutational space by prolonged drift reveals the deleterious effects of seemingly neutral mutations. PLoS Comput Biol. 2015;11:e1004421. Stiffler MA, Hekstra DR, Ranganathan R. Evolvability as a function of purifying selection in TEM-1 β-Lactamase. Cell. 2015;160:882–92. Wu NC, Dai L, Olson CA, Lloyd-Smith JO, Sun R. Adaptation in protein fitness landscapes is facilitated by indirect paths. Elife. 2016;5:e16965. Adkar BV, Tripathi A, Sahoo A, Bajaj K, Goswami D, Chakrabarti P, et al. Protein model discrimination using mutational sensitivity derived from deep sequencing. Structure. 2012;20:371–81. Sahoo A, Khare S, Devanarayanan S, Jain PC, Varadarajan R. 
Residue proximity information and protein model discrimination using saturation-suppressor mutagenesis. Elife. 2015;4:e09532. Fowler DM, Stephany JJ, Fields S. Measuring the activity of protein variants on a large scale using deep mutational scanning. Nat Protoc. 2014;9:2267–84. Hiatt JB, Patwardhan RP, Turner EH, Lee C, Shendure J. Parallel, tag-directed assembly of locally derived short sequence reads. Nat Methods. 2010;7:119–22. Starita LM, Pruneda JN, Lo RS, Fowler DM, Kim HJ, Hiatt JB, et al. Activity-enhancing mutations in an E3 ubiquitin ligase identified by high-throughput mutagenesis. Proc Natl Acad Sci U S A. 2013;110:E1263–72. Bloom JD. An experimentally determined evolutionary model dramatically improves phylogenetic fit. Mol Biol Evol. 2014;31:1956–78. Matuszewski S, Hildebrandt ME, Ghenu A-H, Jensen JD, Bank C. A statistical guide to the design of deep mutational scanning experiments. Genetics. 2016;204:77–87. Starita LM, Fields S. Deep mutational scanning: a highly parallel method to measure the effects of mutation on protein function. Cold Spring Harb Protoc. 2015;2015:711–4. Kowalsky CA, Klesmith JR, Stapleton JA, Kelly V, Reichkitzer N, Whitehead TA. High-resolution sequence-function mapping of full-length proteins. PLoS One. 2015;10:e0118193. Fowler DM, Araya CL, Gerard W, Fields S. Enrich: software for analysis of protein function by enrichment and depletion of variants. Bioinformatics. 2011;27:3430–1. Hietpas RT, Jensen JD, Bolon DNA. Experimental illumination of a fitness landscape. Proc Natl Acad Sci U S A. 2011;108:7896–901. Patwardhan RP, Lee C, Litvin O, Young DL, Pe'er D, Shendure J. High-resolution analysis of DNA regulatory elements by synthetic saturation mutagenesis. Nat Biotechnol. 2009;27:1173–5. Araya CL, Fowler DM, Chen W, Muniez I, Kelly JW, Fields S. A fundamental protein property, thermodynamic stability, revealed solely from large-scale measurements of protein function. Proc Natl Acad Sci U S A. 2012;109:16858–63. 
Rich MS, Payen C, Rubin AF, Ong GT, Sanchez MR, Yachie N, et al. Comprehensive analysis of the SUL1 promoter of Saccharomyces cerevisiae. Genetics. 2016;203:191–202. Melnikov A, Rogov P, Wang L, Gnirke A, Mikkelsen TS. Comprehensive mutational scanning of a kinase in vivo reveals substrate-dependent fitness landscapes. Nucleic Acids Res. 2014;42:e112. Roscoe BP, Thayer KM, Zeldovich KB, Fushman D, Bolon DNA. Analyses of the effects of all ubiquitin point mutants on yeast growth rate. J Mol Biol. 2013;425:1363–77. Jiang L, Mishra P, Hietpas RT, Zeldovich KB, Bolon DNA. Latent effects of Hsp90 mutants revealed at reduced expression levels. PLoS Genet. 2013;9:e1003600. Mavor D, Barlow K, Thompson S, Barad BA, Bonny AR, Cario CL, et al. Determination of ubiquitin fitness landscapes under different chemical stresses in a classroom setting. Elife. 2016;5:e15802. Bank C, Hietpas RT, Wong A, Bolon DN, Jensen JD. A Bayesian MCMC approach to assess the complete distribution of fitness effects of new mutations: uncovering the potential for adaptive walks in challenging environments. Genetics. 2014;196:841–52. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. Chichester: Wiley; 2009. Jiang L, Liu P, Bank C, Renzette N, Prachanronarong K, Yilmaz LS, et al. A balance between inhibitor binding and substrate processing confers influenza drug resistance. J Mol Biol. 2016;428:538–53. Forsyth CM, Juan V, Akamatsu Y, DuBridge RB, Doan M, Ivanov AV, et al. Deep mutational scanning of an antibody against epidermal growth factor receptor using mammalian cell display and massively parallel pyrosequencing. MAbs. 2013;5:523–32. Kim I, Miller CR, Young DL, Fields S. High-throughput analysis of in vivo protein stability. Mol Cell Proteomics. 2013;12:3370–8. Kosuri S, Goodman DB, Cambray G, Mutalik VK, Gao Y, Arkin AP, et al. Composability of regulatory sequences controlling transcription and translation in Escherichia coli. Proc Natl Acad Sci U S A. 
2013;110:14024–9. Melamed D, Young DL, Gamble CE, Miller CR, Fields S. Deep mutational scanning of an RRM domain of the Saccharomyces cerevisiae poly(A)-binding protein. RNA. 2013;19:1537–51. Tinberg CE, Khare SD, Dou J, Doyle L, Nelson JW, Schena A, et al. Computational design of ligand-binding proteins with high affinity and selectivity. Nature. 2013;501:212–6. Guy MP, Young DL, Payea MJ, Zhang X, Kon Y, Dean KM, et al. Identification of the determinants of tRNA function and susceptibility to rapid tRNA decay by high-throughput in vivo analysis. Gene Dev. 2014;28:1721–32. Reich LL, Dutta S, Keating AE. SORTCERY-A high-throughput method to affinity rank peptide ligands. J Mol Biol. 2015;427:2135–50. Levine HA, Nilsen-Hamilton M. A mathematical analysis of SELEX. Comput Biol Chem. 2007;31:11–35. Levitan B. Stochastic modeling and optimization of phage display. J Mol Biol. 1998;277:893–916. Levin AM, Weiss GA. Optimizing the affinity and specificity of proteins with molecular display. Mol Biosyst. 2006;2:49–57. Brockmann E-C. Selection of stable scFv antibodies by phage display. Methods Mol Biol. 2012;907:123–44. Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970;48:443–53. Plackett RL. The analysis of categorical data. 2nd ed. New York: MacMillan; 1981. Seber GAF. Linear regression analysis. New York: Wiley; 1977. Demidenko E. Mixed models: theory and applications with R. 2nd ed. Hoboken: Wiley; 2013. Spill F, Weinstein ZB, Irani Shemirani A, Ho N, Desai D, Zaman MH. Controlling uncertainty in aptamer selection. Proc Natl Acad Sci U S A. 2016;113:12076–81. Wrenbeck EE, Klesmith JR, Stapleton JA, Adeniran A, Tyo KEJ, Whitehead TA. Plasmid-based one-pot saturation mutagenesis. Nat Methods. 2016;13:928–30. Lamour NF, Subramanian P, Wijesinghe DS, Stahelin RV, Bonventre JV, Chalfant CE. 
Ceramide 1-phosphate is required for the translocation of group IVA cytosolic phospholipase A2 and prostaglandin synthesis. J Biol Chem. 2009;284:26897–907. Dalal K, Chan CS, Sligar SG, Duong F. Two copies of the SecY channel and acidic lipids are necessary to activate the SecA translocation ATPase. Proc Natl Acad Sci U S A. 2012;109:4104–9. Denisov IG, Grinkova YV, Lazarides AA, Sligar SG. Directed self-assembly of monodisperse phospholipid bilayer Nanodiscs with controlled size. J Am Chem Soc. 2004;126:3477–87. Andrews S. FastQC: a quality control tool for high throughput sequence data. http://www.bioinformatics.babraham.ac.uk/projects/fastqc/. Accessed 8 Sept 2016. Storey JD. A direct approach to false discovery rates. J Roy Stat Soc B. 2002;64:479–98. We thank Stanley Fields for his support, advice, and comments. We thank Sara Rubin for editing the manuscript. We thank Lea Starita, Vanessa Gray, and Matt Rich for beta testing. We thank Daniel Esposito for his contributions to Enrich2 v1.1.0. We thank Bernie Pope and Matthew Wakefield for their software engineering and algorithm advice. We thank Galen Flynn for his help with the lipid nanodisc purification. This work was supported by the National Institute of General Medical Sciences (1R01GM109110 and 5R24GM115277 to DMF and P41GM103533 to Stanley Fields); the National Institute of Biomedical Imaging and Bioengineering (5R21EB020277 to SMB and DMF); the Washington Research Foundation (Washington Research Foundation Innovation Postdoctoral Fellowship to HG); and the National Health and Medical Research Council of Australia (Program Grant 1054618 to ATP and TPS).
• Project name: Enrich2 (v1.1.0a)
• Project home page: https://github.com/FowlerLab/Enrich2
• Example dataset home page: https://github.com/FowlerLab/Enrich2-Example
• Documentation home page: http://enrich2.readthedocs.io/
• Archived version: 10.5281/zenodo.802188
• Operating systems: Platform independent
• Programming language: Python
• Other requirements: Python 2.7, multiple Python packages
• License: GNU GPLv3
• Any restrictions to use by non-academics: None
• Dataset accession numbers:
  ° Neuraminidase: BioProject PRJNA272490
  ° WW domain: SRA SRP002725
  ° C2 domain: BioProject PRJNA344387
Bioinformatics Division, The Walter and Eliza Hall Institute of Medical Research, Parkville, VIC, 3052, Australia (Alan F. Rubin, Anthony T. Papenfuss & Terence P. Speed); Department of Medical Biology, University of Melbourne, Melbourne, VIC, 3010, Australia (Alan F. Rubin & Anthony T. Papenfuss); Bioinformatics and Cancer Genomics Laboratory, Peter MacCallum Cancer Centre, Melbourne, VIC, 3000, Australia; Department of Genome Sciences, University of Washington, Seattle, WA, 98195, USA (Alan F. Rubin, Hannah Gelman & Douglas M. Fowler); Institute for Protein Design, University of Washington, Seattle, WA, 98195, USA (Hannah Gelman); Department of Pathology, University of Washington, Seattle, WA, 98195, USA (Nathan Lucas & Sandra M. Bajjalieh); Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, VIC, 3010, Australia (Anthony T. Papenfuss); Department of Mathematics and Statistics, University of Melbourne, Melbourne, VIC, 3010, Australia (Anthony T. Papenfuss & Terence P. Speed); Department of Bioengineering, University of Washington, Seattle, WA, 98195, USA (Douglas M. Fowler). AFR, TPS, and DMF developed the statistical methods. AFR wrote the Enrich2 software and performed the data analysis. HG and AFR developed the simulation framework and generated the simulated datasets.
NL, SMB, and DMF designed and performed the C2 domain deep mutational scan. ATP reviewed the Enrich2 codebase. AFR and DMF wrote the paper. All authors read and approved the final manuscript. Correspondence to Douglas M. Fowler. Ethics approval was not needed for this work. A correction to this article is available online at https://doi.org/10.1186/s13059-018-1391-7.
Additional file 1: Wild-type normalization performance table. (XLSX 9 kb)
Additional file 2: Supplementary figures. (PDF 12 kb)
Additional file 3: Replicate correlation tables. (XLSX 11 kb)
Additional file 4: Individually validated variants of the neuraminidase gene. (XLSX 9 kb)
Additional file 5: Variants with higher scores than wild-type in the presence of oseltamivir. (XLSX 11 kb)
Additional file 6: Wild-type score percentile table. (XLSX 45 kb)
Additional file 7: C2 domain primer sequences. (XLSX 8 kb)
Rubin, A.F., Gelman, H., Lucas, N. et al. A statistical framework for analyzing deep mutational scanning data. Genome Biol 18, 150 (2017). https://doi.org/10.1186/s13059-017-1272-5
Regarding the theory of the origin of water on Earth through meteorites, why wouldn't the water evaporate on impact? Water on Earth has been theorized to have come through comets, trapped inside crystals. But why wouldn't that water evaporate on impact, and wouldn't the atmosphere at that time allow the vapours to escape Earth? Also, what is the current scientific opinion of the validity of this theory? earth-history meteorite planetary-formation Daud
Comments: "Also, there is water in our mantle, as water can be stored within the rocks at the molecular/lattice level. I don't know enough about impacts for a full answer, but it certainly MIGHT be possible that the water simply could not escape the lattice on impact." – Neo "@Neo You might be thinking of meteoroids/asteroids. Comets would hold their water almost entirely as ice. --- Oh, I just noticed the title does not match the text." – Eubie Drew
why wouldn't that water evaporate on impact, and wouldn't the atmosphere at that time allow the vapours to escape Earth? The water would very likely evaporate on impact. However, gravity would prevent the gas-phase water molecules from leaving Earth. The speed of a water molecule must be compared to the escape velocity of Earth (11 km/s) to determine whether or not the molecule can escape. At a given temperature, the speeds of water molecules follow the Maxwell–Boltzmann distribution. The most probable speed of a molecule will be: $$ V= \sqrt{\frac{2kT}{m}} $$ where $m$ is the mass of the molecule and $k$ is Boltzmann's constant. For example, at a temperature of 300 K, a water molecule will have a most probable speed of about 520 m/s, roughly a factor of 20 below the escape velocity.
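The numbers in this answer are easy to reproduce (constants in SI units; the temperature is illustrative):

```python
import math

k_B = 1.380649e-23                     # Boltzmann constant, J/K
N_A = 6.02214076e23                    # Avogadro constant, 1/mol
m_water = 18.015e-3 / N_A              # mass of one H2O molecule, kg

def most_probable_speed(T, m):
    """Most probable speed of the Maxwell-Boltzmann distribution, m/s."""
    return math.sqrt(2 * k_B * T / m)

v = most_probable_speed(300, m_water)  # about 520 m/s
v_escape = 11_200                      # Earth's escape velocity, m/s
ratio = v_escape / v                   # about 20
```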
what is the current scientific opinion of the validity of this theory? Comets have a higher fraction of deuterium in their water compared to Earth. According to "Earth's water probably didn't come from comets, Caltech researchers say", this refutes the hypothesis that a large fraction of Earth's water came from comets. – DavePhD
Why do some planets have lots of $\mathrm{N_2}$ and others none? Earth, Titan and Venus all have large amounts of $\mathrm{N_2}$ in their atmospheres. (In the case of Venus it's a small proportion, but Venus' atmosphere is very thick, and the total mass of $\mathrm{N_2}$ is greater than Earth's.) However, other planets and moons, and Mars in particular, have hardly any. Why is this? $\mathrm{N_2}$ is a relatively light molecule, so I suppose it could be lost to space from smaller bodies. Did Mars start with a thick nitrogen atmosphere and then lose it? Or alternatively, is there some process that produced lots of $\mathrm{N_2}$ on Earth, Titan and Venus, which didn't occur on Mars or the other outer Solar system moons? If so, what is this process likely to be? atmosphere planetary-science nitrogen Nathaniel
For Earth, Titan and Venus, I think there are continuing processes that are providing $\ce{N2}$ to the atmospheres of these planets. Concerning the Earth, there is the well-documented nitrogen cycle based on flora. There are also other significant sources of nitrogen from inorganic processes. Deep crust/mantle sources for nitrogen: "Nitrogen solubility in mantle minerals". Volcanic eruptions: "Volcanic gas". Metamorphic processes: "Anomalous nitrogen isotopes in ultra high-pressure metamorphic rocks from the Sulu orogenic belt: Effect of abiotic nitrogen reduction during fluid–rock interaction". Assuming the rock processes on Earth are also occurring on Titan and Venus, similar inorganic processes may be occurring on other worlds. I think it is a good educated guess, even though we really have very little hard evidence of rock processes on other planets. As for Mars, its thin atmosphere may indicate that it cannot hold onto $\ce{N2}$, and there may be very little $\ce{N2}$ currently being created on Mars.
Gary Kindel

$\begingroup$ The nitrogen cycle doesn't really put nitrogen into Earth's atmosphere - organisms take it out, and then eventually put it back again, but they can't really add any $\ce{N2}$ that wasn't there to start with. It would be interesting to see if there's any specific evidence of present-day $\ce{N2}$ outgassing on Venus or Titan - presumably this could be detected spectroscopically by Cassini in the case of Titan. $\endgroup$ – Nathaniel Dec 28 '14 at 7:34

$\begingroup$ "Assuming the rock processes on Earth are also occurring on Titan and Venus, similar inorganic processes may be occurring on other worlds." This assumption cannot be made. We know literally nothing of the geochemistry of Titan and Venus. Also, the dominant nitrogen content of Earth's atmosphere will have more to do with the formation conditions and not geochemistry, seeing as nitrogen is relatively inert. $\endgroup$ – AtmosphericPrisonEscape May 17 '19 at 14:50

$\begingroup$ @AtmosphericPrisonEscape: Actually we do know something important about the geochemistry of Titan, namely that the "rocks" are water ice. We might also consider Pluto, where most of the surface is solid nitrogen: en.wikipedia.org/wiki/Geology_of_Pluto $\endgroup$ – jamesqf May 17 '19 at 17:35

$\begingroup$ @jamesqf: Titan yes, but that's less than 50% of the surface area. Pluto: less than 30%, not most. Read the papers in Science. Ammonia and CO ice are dominating. $\endgroup$ – AtmosphericPrisonEscape May 18 '19 at 1:08

$\begingroup$ @AtmosphericPrisonEscape: OK, but that doesn't invalidate the point I was trying to make, which is that the surface of the outer planets/moons is entirely different from that of the silicate-based inner planets. $\endgroup$ – jamesqf May 18 '19 at 18:42

A huge factor affecting a planet's atmospheric composition is the planet's escape velocity.
From Wikipedia, we have a table of escape velocities; here are some sample figures:

Earth: 11.2 km/s
Mars: 5.0 km/s
Jupiter: 59.6 km/s
Pluto: 1.2 km/s

The molecules of an atmospheric gas all fly around with different velocities. It turns out these speeds follow the Maxwell–Boltzmann distribution: a peaked curve with a well-defined mean (each velocity component is normally distributed). If the mean speed of the gas particles is higher than the escape velocity of a given planet, then you probably won't see much of that gas on the planet. Mars has much less mass than the Earth, so its escape velocity is much lower. If you were to look up the speed distribution of nitrogen molecules in Mars' atmosphere, you'd want to check how much of the high-speed tail reaches past 5.0 km/s. If this is interesting to you, there's so much more! There's an entire field of physics called statistical physics, which is the foundation of so many other fields, like chemistry and thermodynamics. Studying these fields allows you to develop physical intuition which can be super helpful for reasoning about all the crazy stuff that happens on this planet. jlcgd

$\begingroup$ As it happens I'm an expert on statistical physics already. (But not on planetary atmospheres.) The tricky thing is that Titan's escape velocity is only 2.6 km/s, about half that of Mars, but Titan has loads of nitrogen. The rate of thermal loss depends on the temperature at the top of the atmosphere as well as on the escape velocity, so it's not trivial to calculate (especially when you don't know what the temperature was in the past when the atmosphere was thicker). It's thought that the thermal loss rate for nitrogen on Mars was never that high, but (...) $\endgroup$ – Nathaniel Dec 28 '14 at 7:24 $\begingroup$ (...)
but you can also lose molecules due to interactions with the Solar wind, especially if you don't have a magnetic field (which Mars doesn't), since the particles in the Solar wind can collide with molecules in the atmosphere and accelerate them past their escape velocity. Mars has probably lost a lot of its atmosphere that way. But this doesn't fully answer my question, which is about whether this is the primary reason for the lack of $\ce{N2}$ on Mars and its presence on other bodies, or whether there are also other important factors. $\endgroup$ – Nathaniel Dec 28 '14 at 7:28
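The thermal-speed comparison discussed in the answer and comments above can be sketched numerically. The snippet below is a rough illustration only, not a full Jeans-escape calculation; the 1000 K exospheric temperature is an assumed round number for the example, and real exospheric temperatures vary by body. It computes the root-mean-square thermal speed of $\ce{N2}$, $v_\mathrm{rms} = \sqrt{3kT/m}$, and compares it with each body's escape velocity:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol
M_N2 = 28.0134e-3    # molar mass of N2, kg/mol

def rms_speed(molar_mass_kg, temperature_k):
    """Root-mean-square thermal speed sqrt(3kT/m), in m/s."""
    m = molar_mass_kg / N_A  # mass of one molecule, kg
    return math.sqrt(3.0 * K_B * temperature_k / m)

# Assumed exospheric temperature for the example; real values differ per body.
T_EXO = 1000.0  # K

v_th = rms_speed(M_N2, T_EXO)  # roughly 0.9 km/s for N2 at 1000 K

# Escape velocities quoted in the answer above, converted to m/s.
escape_v = {"Earth": 11.2e3, "Mars": 5.0e3, "Titan": 2.6e3, "Pluto": 1.2e3}
for body, v_esc in escape_v.items():
    # Rule of thumb sometimes quoted: a gas is retained over geologic
    # time roughly when v_esc is several times the thermal speed.
    print(f"{body:6s} v_esc / v_th = {v_esc / v_th:.1f}")
```

Even for Mars the mean thermal speed of $\ce{N2}$ comes out well below the 5 km/s escape velocity; as the comments point out, thermal (Jeans) escape happens from the high-speed tail of the distribution, and non-thermal processes such as solar-wind stripping can matter as much or more.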
Long term behavior of random Navier-Stokes equations driven by colored noise

Anhui Gu (School of Mathematics and Statistics, Southwest University, Chongqing 400715, China; corresponding author), Boling Guo (Institute of Applied Physics and Computational Mathematics, PO Box 8009, Beijing 100088, China) and Bixiang Wang (Department of Mathematics, New Mexico Institute of Mining and Technology, Socorro, NM 87801, USA)

Received June 2019; published December 2019. doi: 10.3934/dcdsb.2020020. Fund Project: This work is supported by NSF of Chongqing grant cstc2018jcyjA0897.

This paper is devoted to the study of the long-term behavior of the two-dimensional random Navier-Stokes equations driven by colored noise, defined in bounded and unbounded domains. We prove the existence and uniqueness of pullback random attractors for the equations with Lipschitz diffusion terms. In the case of additive noise, we show the upper semi-continuity of these attractors as the correlation time of the colored noise approaches zero. When the equations are defined on unbounded domains, we establish the pullback asymptotic compactness of the solutions by Ball's idea of energy equations, in order to overcome the difficulty introduced by the noncompactness of Sobolev embeddings.

Keywords: Random attractor, colored noise, unbounded domain, Navier-Stokes equations, energy equations. Mathematics Subject Classification: Primary: 35B40; Secondary: 35B41, 37L30.

Citation: Anhui Gu, Boling Guo, Bixiang Wang. Long term behavior of random Navier-Stokes equations driven by colored noise. Discrete & Continuous Dynamical Systems - B. doi: 10.3934/dcdsb.2020020
A. Balloons

There are quite a lot of ways to have fun with inflatable balloons. For example, you can fill them with water and see what happens. Grigory and Andrew have the same opinion. So, once upon a time, they went to the shop and bought $$$n$$$ packets with inflatable balloons, where the $$$i$$$-th of them has exactly $$$a_i$$$ balloons inside. They want to divide the balloons among themselves. In addition, several conditions must hold:

Do not rip the packets (both Grigory and Andrew should get unbroken packets);
Distribute all packets (every packet should be given to someone);
Give both Grigory and Andrew at least one packet;
To provide more fun, the total number of balloons in Grigory's packets should not be equal to the total number of balloons in Andrew's packets.

Help them to divide the balloons or determine that it's impossible under these conditions.

The first line of input contains a single integer $$$n$$$ ($$$1 \le n \le 10$$$) — the number of packets with balloons. The second line contains $$$n$$$ integers: $$$a_1$$$, $$$a_2$$$, $$$\ldots$$$, $$$a_n$$$ ($$$1 \le a_i \le 1000$$$) — the number of balloons inside the corresponding packet. If it's impossible to divide the balloons satisfying the conditions above, print $$$-1$$$. Otherwise, print an integer $$$k$$$ — the number of packets to give to Grigory, followed by $$$k$$$ distinct integers from $$$1$$$ to $$$n$$$ — the indices of those packets. The order of packets doesn't matter. If there are multiple ways to divide the balloons, output any of them.

In the first test Grigory gets $$$3$$$ balloons in total while Andrew gets $$$1$$$. In the second test there's only one way to divide the packets, which leads to equal numbers of balloons. In the third test one of the boys won't get a packet at all.
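One simple strategy (a sketch in Python, one of many possible accepted approaches) follows from a short observation: a division is impossible only when $$$n = 1$$$, or when $$$n = 2$$$ and both packets hold the same number of balloons. Otherwise, giving Grigory just the single smallest packet always works, since for $$$n \ge 2$$$ its count is strictly less than the sum of the remaining packets (the $$$n = 2$$$ tie case having been rejected already).

```python
def divide_balloons(a):
    """Return the 1-based indices of Grigory's packets, or None if impossible."""
    n = len(a)
    if n == 1 or (n == 2 and a[0] == a[1]):
        return None
    # Give Grigory only the smallest packet: every other packet is at
    # least as large, and there is at least one of them, so the two
    # totals cannot be equal (the n == 2 tie was rejected above).
    smallest = min(range(n), key=lambda i: a[i])
    return [smallest + 1]

# Example: packets [1, 2, 1] -> Grigory takes packet 1 (1 balloon),
# Andrew gets packets 2 and 3 (3 balloons in total).
print(divide_balloons([1, 2, 1]))  # [1]
print(divide_balloons([5, 5]))     # None -> print -1 in contest format
```

In contest I/O form, one would read $$$n$$$ and the list from stdin, then print `-1` when the function returns `None`, and otherwise print the length of the returned list followed by its indices.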
Distinguished Research Fellow | Cheng, Shun-Jen
Email: chengsj@math.sinica.edu.tw
Phone: +886 2 2368-5999 ext. 702
Fax: +886 2 2368-9771
Mathematics Genealogy Project

Ph.D. in Mathematics, A.M. in Mathematics, Harvard University (1988–1993)
B.A. in Mathematics (with highest distinction), M.S. in Mathematics, Northwestern University (1984–1988)

Lie algebras, Lie superalgebras, Representation Theory

Acting Director, Institute of Mathematics, Academia Sinica, 2018/8–2020/4
Director, Institute of Mathematics, Academia Sinica, 2015/8–2018/8
Acting Director, Institute of Mathematics, Academia Sinica, 2013/8–2015/8
Distinguished Research Fellow, Academia Sinica, 2013/4–present
Deputy Director, Institute of Mathematics, Academia Sinica, 2011/4–2013/8
Joint Appointment Professor, National Taiwan University, 2006/8–present
Research Fellow, Academia Sinica, 2006/8–2013/3
Visiting Professor, University of Virginia, 2003/8–2004/7
Professor, National Taiwan University, 2000/8–2006/7
Professor, National Cheng-Kung University, 1998/8–2000/7
Visiting Scholar, Massachusetts Institute of Technology, 1997/9–1998/8
Associate Professor, National Cheng-Kung University, 1994/8–1998/7
Visiting Member, Max-Planck-Institut für Mathematik, 1993/10–1994/9

Academic Award of the Ministry of Education, 2011
The World Academy of Sciences TWAS Prize in Mathematics, 2011
Khwarizmi International Award,
2010
Academia Sinica Investigator Awards, 2007, 2014
Academia Sinica Research Award for Junior Research Investigators, 1999
National Science Council Outstanding Research Awards, 1997-1998, 1999-2000, 2011–2013

Research Descriptions

Dr. Cheng's research interests lie in the representation theory of Lie algebras and Lie superalgebras. His studies in the past include the relationship between the representation theory of classical Lie algebras and that of Lie superalgebras of classical types. He is especially interested in understanding the characters of modules over Lie superalgebras in general.

Selected Publications

(with K. Coulembier) "Representation theory of a semisimple extension of the Takiff superalgebra" , preprint , 2020-05. (with C.-W. Chen, L. Luo) "Blocks and characters of $D(2|1;\zeta)$-modules of non-integral weights" , preprint , 2020-02. (with C.-W. Chen, K. Coulembier) "Tilting modules for classical Lie superalgebras" , Journal of London Mathematical Society , 2019-07. (with B. Shu, W. Wang) "Modular representations of exceptional supergroups" , Mathematische Zeitschrift , 219 (2), 635-659, 2019-03. (with W. Wang) "Character formulae in category $\mathcal O$ for exceptional Lie superalgebras $D(2|1;\zeta)$" , Transformation Groups , 24 (3), 781-821, 2019. (https://link.springer.com/article/10.1007%2Fs00031-018-9506-5) (with W. Wang) "Character formulae in category $\mathcal O$ for exceptional Lie superalgebra $G(3)$" , Kyoto Journal of Mathematics , 2018-10. "Supercharacters of queer Lie superalgebras" , Journal of Mathematical Physics , 58, 061701-1-061701-9, 2017. (with C.-W. Chen) "Quantum group of type $A$ and representations of queer Lie superalgebra" , Journal of Algebra , 473, 1-28, 2017. (with J.-H. Kwon and W. Wang) "Character formulae for queer Lie superalgebras and canonical bases of types $A/C$" , Communications in Mathematical Physics , 352, 1091-1119, 2017. (with J.-H.
Kwon) "Kac-Wakimoto character formula for ortho-symplectic Lie superalgebras", Advances in Mathematics, 304, 1296-1329, 2017.
(with J.-H. Kwon) "Finite-dimensional half-integer weight modules over queer Lie superalgebras", Communications in Mathematical Physics, 346, 945-965, 2016.
(with J.-H. Kwon, W. Wang) "Irreducible characters of Kac-Moody Lie superalgebras", Proceedings of the London Mathematical Society, 110 (1), 108-132, 2015.
(with N. Lam, W. Wang) "Brundan-Kazhdan-Lusztig conjecture for general linear Lie superalgebras", Duke Mathematical Journal, 164 (4), 617-695, 2015.
(with V. Mazorchuk, W. Wang) "Equivalence of blocks for the general linear Lie superalgebra", Letters in Mathematical Physics, 103 (12), 1313-1327, 2013.
(with W. Wang) "Dualities and Representations of Lie Superalgebras", Graduate Studies in Mathematics 144, American Mathematical Society, 302 pp. ISBN: 978-0-8218-9118-6, 2012.
(with N. Lam, W. Wang) "Super Duality for General Linear Lie Superalgebras and Applications", Proceedings of Symposia in Pure Mathematics, American Mathematical Society, 86, 113-136, 2012.
(with W. Wang) "Dualities for Lie superalgebras, Lie Theory and Representation Theory", In: Surveys of Modern Mathematics 2, International Press, Boston, 1-46, 2012.
(with N. Lam, W. Wang) "Super duality and irreducible characters of ortho-symplectic Lie superalgebras", Inventiones Mathematicae, 183 (1), 189-224, 2011.
(with N. Lam) "Irreducible Characters of General Linear Superalgebra and Super Duality", Communications in Mathematical Physics, 298 (3), 645-672, 2010.
(with J.-H. Kwon, W. Wang) "Kostant homology formulas for oscillator modules of Lie superalgebras", Advances in Mathematics, 224 (4), 1548-1588, 2010.
(with J.-H. Kwon, N. Lam) "A BGG-type resolution for tensor modules over general linear superalgebra", Letters in Mathematical Physics, 84 (1), 75-87, 2008.
(with W. Wang) "Remarks on modules of the ortho-symplectic Lie superalgebras", Bulletin of the Institute of Mathematics Academia Sinica, 3 (3), 353-372, 2008.
(with W. Wang) "Brundan-Kazhdan-Lusztig and Super Duality Conjectures", Publications of the Research Institute for Mathematical Sciences, 44 (4), 1219-1272, 2008.
(with J.-H. Kwon) "Howe Duality and Kostant Homology Formula for infinite-dimensional Lie Superalgebras", International Mathematics Research Notices, 2008 (rnn085), 1-52, 2008.
(with D. Taylor, W. Wang) "The Bloch-Okounkov Correlation Functions of negative Levels", Journal of Algebra, 319 (1), 457-490, 2008.
(with W. Wang, R. B. Zhang) "Super Duality and Kazhdan-Lusztig Polynomials", Transactions of the American Mathematical Society, 360 (11), 5883-5924, 2008.
(with W. Wang) "The correlation functions of vertex operators and Macdonald polynomials", Journal of Algebraic Combinatorics, 25 (1), 43-56, 2007.
(with W. Wang, R. B. Zhang) "A Fock Space Approach to Representation Theory of osp(2|2n)", Transformation Groups, 12 (2), 209-225, 2007.
(with N. Lam, R. B. Zhang) "Character Formula for infinite-dimensional unitarizable Modules of the general linear Superalgebra", Journal of Algebra, 273 (2), 780-805, 2004.
(with R. B. Zhang) "Howe Duality and Combinatorial Character Formula for Orthosymplectic Lie Superalgebras", Advances in Mathematics, 182 (1), 124-172, 2004.
(with R. B. Zhang) "Analogue of Kostant's u-cohomology Formula for general linear Superalgebra", International Mathematics Research Notices, 2004 (1), 31-53, 2004.
(with W. Wang) "The Bloch-Okounkov Correlation Functions at higher Levels", Transformation Groups, 9 (2), 133-142, 2004.
"Gelfand-Tsetlin Pattern and Strict Partitions", Letters in Mathematical Physics, 64 (1), 23-30, 2003.
(with W. Wang) "Lie Subalgebras of Differential Operators on the Super Circle", Publications of the Research Institute for Mathematical Sciences, 39 (3), 545-600, 2003.
(with N. Lam) "Infinite-dimensional Lie Superalgebras and Hook Schur Functions", Communications in Mathematical Physics, 238 (1), 95-118, 2003.
(with W. Wang) "Howe Duality for Lie Superalgebras", Compositio Mathematica, 128 (1), 55-94, 2001.
(with N. Lam) "Finite conformal modules over N=2,3,4 superconformal algebras", Journal of Mathematical Physics, 42 (2), 906-933, 2001.
(with W. Wang) "Remarks on Schur-Howe-Sergeev Duality", Letters in Mathematical Physics, 52 (2), 143-153, 2000.
(with V. G. Kac, M. Wakimoto) "Extensions of Neveu-Schwarz conformal modules", Journal of Mathematical Physics, 41 (4), 2271-2294, 2000.
(with V. G. Kac) "Structure of some Z-graded Lie superalgebras of vector fields", Transformation Groups, 4 (2-3), 219-272, 1999.
(with V. G. Kac) "Generalized Spencer Cohomology and Filtered Deformations of Z-graded Lie superalgebras", Advances in Theoretical and Mathematical Physics, 2 (5), 1141-1182, 1998.
(with V. G. Kac, M. Wakimoto) "Extensions of conformal modules", In: Topological field theory, primitive forms and related topics, M. Kashiwara et al eds., Birkhauser, Progress in Mathematics, 160, 79-130, 1998.
(with V. G. Kac) "Conformal Modules", Asian Journal of Mathematics, 1 (1), 181-193, 1997.
(with V. G. Kac) "A New N=6 Superconformal Algebra", Communications in Mathematical Physics, 186 (1), 219-231, 1997.
"Superconformal algebras and Affine Algebras with Extended symmetry", In: First International Tainan-Moscow Algebra Workshop, Y. Fong et al eds, de Gruyter, Berlin, 199-206, 1996.
"Construction of N=2 Superconformal Algebras from Affine Algebras with Extended Symmetry I", Letters in Mathematical Physics, 33 (1), 23-31, 1995.
"Differentiably simple Lie superalgebras and representations of semisimple Lie superalgebras", Journal of Algebra, 73 (1), 1-43, 1995.
"Representations of central extensions of differentiably simple Lie superalgebras", Communications in Mathematical Physics, 154 (3), 555-568, 1993.
"There are two ways to do mathematics. The first is to be smarter than everybody else. The second way is to be stupider than everybody else - but persistent." - based on a quotation from Raoul Bott.

Title: Quivers for $\mathrm{SL}_{2}$ tilting modules
Authors: Daniel Tubbenhauer and Paul Wedrich
Status: Preprint. Last update: Fri, 26 Jul 2019 13:09:47 UTC
ArXiv link: https://arxiv.org/abs/1907.11560

Using diagrammatic methods, we define a quiver algebra depending on a prime $\mathsf{p}$ and show that it is the algebra underlying the category of tilting modules for $\mathrm{SL}_{2}$ in characteristic $\mathsf{p}$. Along the way, we obtain a presentation for morphisms between $\mathsf{p}$-Jones-Wenzl projectors.

A few extra words

Let $\mathbb{K}$ denote an algebraically closed field and $\mathbf{Tilt}=\mathbf{Tilt}\big(\mathrm{SL}_{2}(\mathbb{K})\big)$ the additive, $\mathbb{K}$-linear category of (left-)tilting modules for the algebraic group $\mathrm{SL}_{2}=\mathrm{SL}_{2}(\mathbb{K})$. This category can be described as the full subcategory of $\mathrm{SL}_{2}$-modules which is monoidally generated by the vector representation $T(1)\cong\mathbb{K}^{2}$, and which is closed under taking finite direct sums and direct summands.
The purpose of this paper is to give a generators and relations presentation of $\mathbf{Tilt}$ by identifying it with the category of projective modules for an explicitly described quiver algebra. For $\mathbb{K}$ of characteristic zero this is trivial as $\mathbf{Tilt}$ is semisimple, and the indecomposable tilting modules are indeed the simple modules. The quantum analog at a complex root of unity is related to the zigzag algebra with vertex set $\mathbb{N}$ and a starting condition. The focus of this paper is on the case of positive characteristic $\mathsf{p}>0$, for which we represent $\mathbf{Tilt}$ as a quotient $Z=Z_{\mathsf{p}}$ of the path algebra of an infinite, fractal-like quiver. (The paper illustrates a truncation for $\mathsf{p}=3$: the full subquiver containing the first $100$ vertices of the quiver underlying $Z_{3}$; the picture is omitted here.) Note that the algebra $Z$ contains information about the representation theory of $\mathrm{SL}_{2}$, e.g. about the Weyl factors $\Delta(w_{i}-1)$ in $T(v-1)$: if the $\mathsf{p}$-adic expansion $v=\sum_{i=0}^{j}a_{i}\mathsf{p}^{i}$ has exactly $r+1$ non-zero digits, then there are $2^{r}$ such factors and, correspondingly, $r$ arrows from $v-1$ to certain $w_{i}-1$. Note further the uniform behavior of $Z$ with respect to $\mathsf{p}$: cut-offs of the quivers $Z_{2}$, $Z_{5}$ and $Z_{7}$, zoomed out so that the precise labels become invisible, look basically the same as the one for $Z_{3}$. The basis for our work is the classical fact that the Temperley-Lieb algebra controls the finite-dimensional representation theory of $\mathrm{SL}_{2}$. The second main ingredient is an explicit description of $\mathsf{p}$-Jones-Wenzl projectors, which are characteristic $\mathsf{p}$ analogs of the classical Jones-Wenzl projectors.
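The digit-counting rule just described is easy to experiment with. The following short Python sketch (an illustration added here, not code from the paper; it only encodes the counting rule stated above) computes the $\mathsf{p}$-adic digits of $v$ and the resulting number of Weyl factors and arrows.

```python
def p_adic_digits(v, p):
    """Digits a_0, a_1, ... of v in base p, least significant first."""
    digits = []
    while v > 0:
        v, a = divmod(v, p)
        digits.append(a)
    return digits

def weyl_factor_count(v, p):
    """If v has r+1 non-zero p-adic digits, the counting rule in the text
    gives 2**r Weyl factors in T(v-1) and r arrows out of vertex v-1."""
    r = sum(1 for a in p_adic_digits(v, p) if a != 0) - 1
    return 2**r, r

# Example for p = 3: v = 10 = 1 + 0*3 + 1*9 has two non-zero digits,
# so r = 1: two Weyl factors and one arrow.
print(weyl_factor_count(10, 3))  # (2, 1)
```

For instance, a pure prime power $v=\mathsf{p}^{i}$ has a single non-zero digit, so $r=0$: one Weyl factor and no arrows, matching the semisimple-looking vertices of the quiver.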
The bulk of this paper is devoted to a careful study of morphisms between $\mathsf{p}$-Jones-Wenzl projectors over $\mathbb{F}_{\mathsf{p}}$ and of the linear relations between them; a result which we think is of independent interest. Last update: 18.Jan.2020 or later
Special issue dedicated to Jürgen Sprekels on the occasion of his 65th birthday Preface: Special issue dedicated to Jürgen Sprekels on the occasion of his 65th birthday Pierluigi Colli, Gianni Gilardi, Dietmar Hömberg, Pavel Krejčí and Elisabetta Rocca This special volume is dedicated to Jürgen Sprekels on the occasion of his 65th birthday, in tribute to his important achievements in several theoretical and applied problems, especially in the fields of Partial Differential Equations, Optimal Control, Hysteresis, Thermodynamics and Phase Transitions, Mechanics of Solids. Pierluigi Colli, Gianni Gilardi, Dietmar Hömberg, Pavel Krejčí, Elisabetta Rocca. Preface: Special issue dedicated to Jürgen Sprekels on the occasion of his 65th birthday. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): i-ii. doi: 10.3934/dcds.2015.35.6i. Numerical simulation of two-phase flows with heat and mass transfer Eberhard Bänsch, Steffen Basting and Rolf Krahl 2015, 35(6): 2325-2347 doi: 10.3934/dcds.2015.35.2325 We present a finite element method for simulating complex free surface flow. The mathematical model and the numerical method take into account two-phase non-isothermal flow of an incompressible liquid and a gas phase, capillary forces at the interface of both fluids, Marangoni effects due to temperature variation of the interface and mass transport across the interface by evaporation/condensation. The method is applied to two examples from microgravity research, for which experimental data are available. Eberhard Bänsch, Steffen Basting, Rolf Krahl. Numerical simulation of two-phase flows with heat and mass transfer. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2325-2347. doi: 10.3934/dcds.2015.35.2325.
Analysis of a model coupling volume and surface processes in thermoviscoelasticity Elena Bonetti, Giovanna Bonfanti and Riccarda Rossi We focus on a highly nonlinear evolutionary PDE system describing volume processes coupled with surface processes in thermoviscoelasticity, featuring the quasi-static momentum balance, the equation for the unidirectional evolution of an internal variable on the surface, and the equations for the temperature in the bulk domain and the temperature on the surface. A significant example of our system occurs in the modeling for the unidirectional evolution of adhesion between a body and a rigid support, subject to thermal fluctuations and in contact with friction. We investigate the related initial-boundary value problem, and in particular the issue of existence of global-in-time solutions, on an abstract level. This allows us to highlight the analytical features of the problem and, at the same time, to exploit the tight coupling between the various equations in order to deduce suitable estimates on (an approximation of) the problem. Our existence result is proved by passing to the limit in a carefully tailored approximate problem, and by extending the obtained local-in-time solution by means of a refined prolongation argument. Elena Bonetti, Giovanna Bonfanti, Riccarda Rossi. Analysis of a model coupling volume and surface processes in thermoviscoelasticity. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2349-2403. doi: 10.3934/dcds.2015.35.2349. Weak differentiability of scalar hysteresis operators Martin Brokate and Pavel Krejčí Rate independent evolutions can be formulated as operators, called hysteresis operators, between suitable function spaces. In this paper, we present some results concerning the existence and the form of directional derivatives and of Hadamard derivatives of such operators in the scalar case, that is, when the driving (input) function is a scalar function. Martin Brokate, Pavel Krejčí.
Weak differentiability of scalar hysteresis operators. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2405-2421. doi: 10.3934/dcds.2015.35.2405. On a Cahn-Hilliard type phase field system related to tumor growth Pierluigi Colli, Gianni Gilardi and Danielle Hilhorst The paper deals with a phase field system of Cahn-Hilliard type. For positive viscosity coefficients, the authors prove an existence and uniqueness result and study the long time behavior of the solution by assuming the nonlinearities to be rather general. In a more restricted setting, the limit as the viscosity coefficients tend to zero is investigated as well. Pierluigi Colli, Gianni Gilardi, Danielle Hilhorst. On a Cahn-Hilliard type phase field system related to tumor growth. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2423-2442. doi: 10.3934/dcds.2015.35.2423. Some mathematical problems related to the second order optimal shape of a crystallisation interface Pierre-Étienne Druet We consider the problem to optimise the stationary temperature distribution and the equilibrium shape of the solid-liquid interface in a two-phase system subject to a temperature gradient. The interface satisfies the minimisation principle of the free energy while the temperature is solving the heat equation with radiation boundary conditions at the outer wall. Under the condition that the temperature gradient is uniformly negative in the direction of crystallisation, we can expect that the interface has a global representation as a graph. We reformulate this condition as a pointwise constraint on the gradient of the state, and we derive the first order optimality system for a class of objective functionals that account for the second derivatives of the surface and for the surface temperature gradient. Pierre-Étienne Druet. Some mathematical problems related to the second order optimal shape of a crystallisation interface. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2443-2463.
doi: 10.3934/dcds.2015.35.2443. A new phase field model for material fatigue in an oscillating elastoplastic beam Michela Eleuteri, Jana Kopfová and Pavel Krejčí We pursue the study of fatigue accumulation in an oscillating elastoplastic beam under the additional hypothesis that the material can partially recover by the effect of melting. The full system consists of the momentum and energy balance equations, an evolution equation for the fatigue rate, and a differential inclusion for the phase dynamics. The main result consists in proving the existence and uniqueness of a strong solution. Michela Eleuteri, Jana Kopfová, Pavel Krejčí. A new phase field model for material fatigue in an oscillating elastoplastic beam. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2465-2495. doi: 10.3934/dcds.2015.35.2465. On a non-isothermal diffuse interface model for two-phase flows of incompressible fluids Michela Eleuteri, Elisabetta Rocca and Giulio Schimperna We introduce a diffuse interface model describing the evolution of a mixture of two different viscous incompressible fluids of equal density. The main novelty of the present contribution consists in the fact that the effects of temperature on the flow are taken into account. In the mathematical model, the evolution of the velocity $u$ is ruled by the Navier-Stokes system with temperature-dependent viscosity, while the order parameter $\psi$ representing the concentration of one of the components of the fluid is assumed to satisfy a convective Cahn-Hilliard equation. The effects of the temperature are prescribed by a suitable form of the heat equation. However, due to quadratic forcing terms, this equation is replaced, in the weak formulation, by an equality representing energy conservation complemented with a differential inequality describing production of entropy.
The main advantage of introducing this notion of solution is that, while the thermodynamical consistency is preserved, at the same time the energy-entropy formulation is more tractable mathematically. Indeed, global-in-time existence for the initial-boundary value problem associated to the weak formulation of the model is proved by deriving suitable a priori estimates and showing weak sequential stability of families of approximating solutions. Michela Eleuteri, Elisabetta Rocca, Giulio Schimperna. On a non-isothermal diffuse interface model for two-phase flows of incompressible fluids. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2497-2522. doi: 10.3934/dcds.2015.35.2497. Quasi-variational inequality approach to heat convection problems with temperature dependent velocity constraint Takeshi Fukao and Nobuyuki Kenmochi This paper is concerned with a heat convection problem. We discuss it in the framework of parabolic variational inequalities. The problem is a system of a heat equation with convection and a Navier-Stokes variational inequality with temperature-dependent velocity constraint. Our problem is a sort of parabolic quasi-variational inequality in the sense that the constraint set for the velocity field depends on the unknown temperature. We shall give an existence result of the heat convection problem in a weak sense, and show that under some additional constraint for temperature there exists a strong solution of the problem. Takeshi Fukao, Nobuyuki Kenmochi. Quasi-variational inequality approach to heat convection problems with temperature dependent velocity constraint. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2523-2538. doi: 10.3934/dcds.2015.35.2523. Robust exponential attractors for the modified phase-field crystal equation Maurizio Grasselli and Hao Wu We consider the modified phase-field crystal (MPFC) equation that has recently been proposed by P. Stefanovic et al.
This is a variant of the phase-field crystal (PFC) equation, introduced by K.-R. Elder et al., which is characterized by the presence of an inertial term $\beta\phi_{tt}$. Here $\phi$ is the phase function standing for the number density of atoms and $\beta\geq 0$ is a relaxation time. The associated dynamical system for the MPFC equation with respect to the parameter $\beta$ is analyzed. More precisely, we establish the existence of a family of exponential attractors $\mathcal{M}_\beta$ that are Hölder continuous with respect to $\beta$. Maurizio Grasselli, Hao Wu. Robust exponential attractors for the modified phase-field crystal equation. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2539-2564. doi: 10.3934/dcds.2015.35.2539. Existence of weak solutions for a PDE system describing phase separation and damage processes including inertial effects Christian Heinemann and Christiane Kraus In this paper, we consider a coupled PDE system describing phase separation and damage phenomena in elastically stressed alloys in the presence of inertial effects. The material is considered on a bounded Lipschitz domain with mixed boundary conditions for the displacement variable. The main aim of this work is to establish existence of weak solutions for the introduced hyperbolic-parabolic system. To this end, we first adopt the notion of weak solutions introduced in [12]. Then we prove existence of weak solutions by means of regularization, time-discretization and different variational techniques. Christian Heinemann, Christiane Kraus. Existence of weak solutions for a PDE system describing phase separation and damage processes including inertial effects. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2565-2590. doi: 10.3934/dcds.2015.35.2565.
On the representation of hysteresis operators acting on vector-valued, left-continuous and piecewise monotaffine and continuous functions Olaf Klein In Brokate-Sprekels 1996, it is shown that hysteresis operators acting on scalar-valued, continuous, piecewise monotone input functions can be represented by functionals acting on alternating strings. In a number of recent papers, this representation result is extended to hysteresis operators dealing with input functions in a general topological vector space. The input functions have to be continuous and piecewise monotaffine, i.e. being piecewise the composition of two functions such that the output of a monotone increasing function is used as input for an affine function. In the current paper, a representation result is formulated for hysteresis operators dealing with input functions being left-continuous and piecewise monotaffine and continuous. The operators are generated by functions acting on an admissible subset of the set of all strings of pairs of elements of the vector space. Olaf Klein. On the representation of hysteresis operators acting on vector-valued, left-continuous and piecewise monotaffine and continuous functions. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2591-2614. doi: 10.3934/dcds.2015.35.2591. Existence results for incompressible magnetoelasticity Martin Kružík, Ulisse Stefanelli and Jan Zeman We investigate a variational theory for magnetoelastic solids under the incompressibility constraint. The state of the system is described by deformation and magnetization. While the former is classically related to the reference configuration, magnetization is defined in the deformed configuration instead. We discuss the existence of energy minimizers without relying on higher-order deformation gradient terms. Then, by introducing a suitable positively $1$-homogeneous dissipation, a quasistatic evolution model is proposed and analyzed within the frame of energetic solvability. 
Martin Kružík, Ulisse Stefanelli, Jan Zeman. Existence results for incompressible magnetoelasticity. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2615-2623. doi: 10.3934/dcds.2015.35.2615. Control of crack propagation by shape-topological optimization Günter Leugering, Jan Sokołowski and Antoni Żochowski An elastic body weakened by small cracks is considered in the framework of unilateral variational problems in linearized elasticity. The frictionless contact conditions are prescribed on the crack lips in two spatial dimensions, or on the crack faces in three spatial dimensions. The weak solutions of the equilibrium boundary value problem for the elasticity problem are determined by minimization of the energy functional over the cone of admissible displacements. The associated elastic energy functional evaluated for the weak solutions is considered for the purpose of control of crack propagation. The singularities of the elastic displacement field at the crack front are characterized by the shape derivatives of the elastic energy with respect to the crack shape within the Griffith theory. The first order shape derivative of the elastic energy functional with respect to the crack shape, i.e., evaluated for a deformation field supported in an open neighbourhood of one of the crack tips, is called the Griffith functional. The control of the crack front in the elastic body is performed by the optimum shape design technique. The Griffith functional is minimized with respect to the shape and the location of small inclusions in the body. The inclusions are located far from the crack. In order to minimize the Griffith functional over an admissible family of inclusions, the second order directional, mixed shape-topological derivatives of the elastic energy functional are evaluated. The domain decomposition technique [42] is applied to the shape [56] and topological [54,55] sensitivity analysis of variational inequalities.
The nonlinear crack model in the framework of linear elasticity is considered in two and three spatial dimensions. The boundary value problem for the elastic displacement field takes the form of a variational inequality over the positive cone in a fractional Sobolev space. The variational inequality leads to a problem of metric projection over a polyhedric convex cone, so the concept of conical differentiability applies to shape and topological sensitivity analysis of variational inequalities under consideration. Günter Leugering, Jan Sokołowski, Antoni Żochowski. Control of crack propagation by shape-topological optimization. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2625-2657. doi: 10.3934/dcds.2015.35.2625. A posteriori error estimates for time-dependent reaction-diffusion problems based on the Payne--Weinberger inequality Svetlana Matculevich, Pekka Neittaanmäki and Sergey Repin We consider evolutionary reaction-diffusion problems with mixed Dirichlet--Robin boundary conditions. For this class of problems, we derive two-sided estimates of the distance between any function in the admissible energy space and the exact solution of the problem. The estimates (majorants and minorants) are explicitly computable and do not contain unknown functions or constants. Moreover, it is proved that the estimates are equivalent to the energy norm of the deviation from the exact solution. Svetlana Matculevich, Pekka Neittaanmäki, Sergey Repin. A posteriori error estimates for time-dependent reaction-diffusion problems based on the Payne--Weinberger inequality. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2659-2677. doi: 10.3934/dcds.2015.35.2659. Deriving amplitude equations via evolutionary $\Gamma$-convergence Alexander Mielke We discuss the justification of the Ginzburg-Landau equation with real coefficients as an amplitude equation for the weakly unstable one-dimensional Swift-Hohenberg equation.
In contrast to classical justification approaches we employ the method of evolutionary $\Gamma$-convergence by reformulating both equations as gradient systems. Using a suitable linear transformation we show $\Gamma$-convergence of the associated energies in suitable function spaces. The limit passage of the time-dependent problem relies on the recent theory of evolutionary variational inequalities for families of uniformly convex functionals as developed by Daneri and Savaré 2010. In the case of a cubic energy it suffices that the initial conditions converge strongly in $L^2$, while for the case of a quadratic nonlinearity we need to impose weak convergence in $H^1$. However, we do not need well-preparedness of the initial conditions. Alexander Mielke. Deriving amplitude equations via evolutionary $\Gamma$-convergence. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2679-2700. doi: 10.3934/dcds.2015.35.2679. Implicit functions and parametrizations in dimension three: Generalized solutions Mihaela Roxana Nicolai and Dan Tiba We introduce a general local parametrization for the solution of the implicit equation $f(x,y,z)=0$ by using Hamiltonian systems. The approach extends previous work of the authors and is valid in the critical case as well. Mihaela Roxana Nicolai, Dan Tiba. Implicit functions and parametrizations in dimension three: Generalized solutions. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2701-2710. doi: 10.3934/dcds.2015.35.2701. The Cahn--Hilliard--de Gennes and generalized Penrose--Fife models for polymer phase separation Irena Pawłow The goal of this paper is twofold. Firstly, we overview the known Flory--Huggins--de Gennes (FHdG) free energy and the associated degenerate singular Cahn--Hilliard--de Gennes (CHdG) model for isothermal phase separation in a binary polymer mixture.
Secondly, motivated by the structure of the FHdG free energy, in which the gradient term is made up of energetic and entropic contributions, we set up a corresponding thermodynamically consistent model for nonisothermal phase separation in such mixture. The model is characterized by energy and entropy fluxes that are both modified by suitable ``extra" terms. In this sense it generalizes the well-known Penrose--Fife model in which only the entropy flux is modified by an ``extra" term. Irena Pawłow. The Cahn--Hilliard--de Gennes and generalized Penrose--Fife models for polymer phase separation. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2711-2739. doi: 10.3934/dcds.2015.35.2711. Uniform Poincaré-Sobolev and isoperimetric inequalities for classes of domains Marita Thomas The aim of this paper is to prove an isoperimetric inequality relative to a convex domain $\Omega\subset\mathbb{R}^d$ intersected with balls with a uniform relative isoperimetric constant, independent of the size of the radius $r>0$ and the position $y\in\overline{\Omega}$ of the center of the ball. For this, uniform Sobolev, Poincaré and Poincaré-Sobolev inequalities are deduced for classes of (not necessarily convex) domains that satisfy a uniform cone property. It is shown that the constants in all of these inequalities solely depend on the dimensions of the cone, space dimension $d,$ the diameter of the domain and the integrability exponent $p\in[1,d)$. Marita Thomas. Uniform Poincaré-Sobolev and isoperimetric inequalities for classes of domains. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2741-2761. doi: 10.3934/dcds.2015.35.2741. Weak structural stability of pseudo-monotone equations Augusto Visintin The inclusion $\beta(u)\ni h$ in $V'$ is studied, assuming that $V$ is a reflexive Banach space, and that $\beta: V \to {\cal P}(V')$ is a generalized pseudo-monotone operator in the sense of Browder-Hess [MR 0365242].
A notion of strict generalized pseudo-monotonicity is also introduced. The above inclusion is here reformulated as a minimization problem for a (nonconvex) functional $V \!\times V'\to \mathbf{R} \cup \{+\infty\}$. A nonlinear topology of weak-type is introduced, and related compactness results are proved via De Giorgi's notion of $\Gamma$-convergence. The compactness and the convergence of the family of operators $\beta$ provide the (weak) structural stability of the inclusion $\beta(u)\ni h$ with respect to variations of $\beta$ and $h$, under the only assumptions that the $\beta$s are equi-coercive and the $h$s are equi-bounded. These results are then applied to the weak stability of the Cauchy problem for doubly-nonlinear parabolic inclusions of the form $D_t\partial\varphi(u) + \alpha(u) \ni h$, $\partial\varphi$ being the subdifferential of a convex lower semicontinuous mapping $\varphi$, and $\alpha$ a generalized pseudo-monotone operator. The technique of compactness by strict convexity is also used in the limit procedure. Augusto Visintin. Weak structural stability of pseudo-monotone equations. Discrete & Continuous Dynamical Systems - A, 2015, 35(6): 2763-2796. doi: 10.3934/dcds.2015.35.2763.
Authorea Team https://authorea.com/ Group Admin: Alberto Pepe Public Documents (475)

Our Product Roadmap 2018-2019 🗓 Alberto Pepe Here's our product roadmap for the next 12 months, so that you know what is keeping us busy these days, and what to expect from Authorea.

Writing a response to reviewers You recently submitted your first manuscript for publication, and you were pleased when the editor decided to send the manuscript out for peer review. Now you have gotten the reviews back, and the editor has asked you to revise your manuscript in light of the reviewers' comments. How should you tackle this task? The comprehensive guide by Noble (2017), "Ten simple rules for writing a response to reviewers", gives you some concrete tips to organize and write a compelling letter addressing the reviewers' comments. Rule 6 states: use typography to help the reviewer navigate your response. Use changes of typeface, color, and indenting to discriminate between 3 different elements: the review itself, your responses to the review, and changes that you have made to the manuscript. If you are writing your manuscript in Authorea, you can now very easily produce a changelog of your manuscript (the changes you have made to the manuscript), which you can export as PDF and include in your response to reviewers.

Host articles, preprints, files, data, code, more At Authorea, we made it possible to host data behind figures in your documents from the very beginning. Starting today, however, you can add arbitrary datasets and files to your Authorea document. How to add files in your documents: it's easy! Start a new document or open an existing one and in the toolbar select Insert 👉 File, then select a file to host (Figure 1). The hosted file will look like this: Why preprint?
Erica Howard

Preprints are here to stay

Preprints and preprint servers continue to gain popularity across disciplines, and the number of new preprints is growing at a rate significantly higher than that of journal articles. Researchers have many choices when it comes to hosting their preprint, and it can be challenging to know how to choose. So, why preprint in the first place? Still unsure about what a preprint is, and what the benefits of publishing one are? We have a full introductory article, and will summarize a few points here:

Get published. Disseminate your findings as soon as they are ready. Share your work without waiting for formal publication or final versions.

Invite early comments and feedback. Preprints are open for all to read, so you can invite your colleagues to provide comments and feedback to help you improve your work.

Establish credit. Get credit for all your research outputs - articles, data sets, figures, and more - by uploading them on Authorea and minting a DOI. This way, you establish an early version of record which will always remain valid. And you can get cited while your work is under review!

What about submitting to journals?

Most journals accept submissions that have been published as preprints. Your preprint on Authorea will link to the final, published version of your article, so your final citation count won't be affected.

How does Authorea compare to other preprint servers?

Authorea is fast, seamless publishing. Upload your work easily and mint a DOI immediately. Other servers can take hours or days. Share your preprint with your colleagues right away.

Publish many files, different formats. The documents you publish on Authorea don't have to be traditional scholarly articles. You can publish anything: articles, data sets, code, figures, tables, slides, micropublications, and Jupyter Notebooks.

Beautiful, HTML-based content. Your content will be hosted in an HTML-based environment, not a static PDF.
Yes, you can even include Javascript-based visualizations like the one below in Figure 1.
• Open data and executable figures. Upload your data for review by your colleagues, and include fully executable figures in your work. Authorea enriches your content to make research interactive.

Getting Started with Authorea
Hello, and welcome to Authorea! 👋 We're happy to have you join us on this journey towards making writing and publishing smoother, data-driven, interactive, open, and simply awesome. This document is a short guide on how to get started with Authorea, specifically how to take advantage of some of our powerful tools. Of course, feedback and questions are not only welcome, but encouraged -- just hit the comment icon to the right of this text 💬 (you can also highlight specific parts of the text to leave a comment on). (Ha. That's your first lesson!)

The Basics
Authorea is a collaborative document editor built primarily for researchers. It allows you to write collaboratively in real time in normal text, LaTeX, and Markdown, all within the same document. In addition to easily writing together, each article on Authorea is a git repository, which allows you to host data, interactive figures, and code. But first, let's get started!

1. Sign up. If you're not already signed up, do so at authorea.com/signup. Tip: if you are part of an organization, sign up with your organizational email.

2. First steps. During the signup process you will be asked a few questions: your location, your title, etc. You will also be prompted to join a group. Groups are awesome! They allow you to become part of a shared document workspace. Tip: during signup, join a group or create a new one for your team. Overall, we suggest you fill out your profile information to get the best possible Authorea experience and to see if any of your friends are already on the platform.
If you don't do it initially during sign up, don't worry; you can always edit your user information in your settings later on. Once you've landed on your profile page (see below), there are a few things you should immediately do:
• Add a profile picture. You've got a great face, show it to the world :) For reference, please see Pete, our chief dog officer (CDO), below.
• Add personal and group information. If you haven't added any personal information, like a bio, a group affiliation, or your location, do it! You might find some people at your organization already on Authorea, plus it is a great way to build your online footprint, which is always good for getting jobs.
• Invite your colleagues. Click here to invite contacts from your Gmail. You'll get extra private documents in your account and you'll make Pete very happy!

Demo with CiSE
Introduction
I can write anything I like here. On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain. These cases are perfectly simple and easy to distinguish. In a free hour, when our power of choice is untrammeled and when nothing prevents our being able to do what we like best, every pleasure is to be welcomed and every pain avoided. But in certain circumstances and owing to the claims of duty or the obligations of business it will frequently occur that pleasures have to be repudiated and annoyances accepted. The wise man therefore always holds in these matters to this principle of selection: he rejects pleasures to secure other greater pleasures, or else he endures pains to avoid worse pains.
\cite{Hu_2014}

A quick introduction to Authorea
Daniel (Goldman)
INTRODUCTION
This is a LaTeX block and you can use commands such as \textbf{} to BOLDFACE TEXT.

A few tips
1. Click anywhere outside of this LaTeX block to render it.
2. Hover on Preview to see a preview of the rendered content.
3. Do not paste an entire LaTeX article here. Instead, import documents from your homepage.
4. Only type LaTeX content in here, i.e. everything you would write after \begin{document}.
5. Do not type preamble, frontmatter, macros or figures.
6. To add macros (newcommands) and packages, click Settings → Edit Macros.
7. Use the Insert Figure button to insert images (and data).
8. Use math mode for equations, e.g. $\mathcal L_{EM}=-\frac14F^{\mu\nu}F_{\mu\nu}$.
9. Try the citation tool (click cite) to find and add citations, or use \cite{}.
10. To insert more LaTeX blocks click Insert → LaTeX.
11. You can use sectioning commands like \section{}, \subsection{}, \subsubsection{} to add headings.[1] This footnote is generated via \footnote{}.
[1] You can toggle heading numbering on/off from the article settings.
A document by Daniel (Goldman), written on Authorea.

Q&A with protocols.io
Josh Nicholson
What is your mission? We founded Authorea with the mission of accelerating and opening up scientific discovery. We were frustrated that no writing tool fully suited the needs of researchers -- especially researchers in a web-first, data-driven world. Our mission is to reinvent how research documents are written and shared to capture the power of the web. In short, we allow researchers to create 21st century documents for 21st century research.
What is the story behind Authorea? Authorea was started by two physicists, Alberto Pepe and Nathan Jenkins, who met while working at CERN in Geneva. CERN is the birthplace of the World Wide Web. The web was initially created to be a scientific information network that would allow researchers to share and disseminate scientific insights as fast as possible.
The web today touches many facets of our lives: we produce massive volumes of content for the web, in HTML. Yet, scientific information is by and large still written, published and exchanged in formats which are not fully web-compatible. PDFs and Microsoft Word documents, for example, are printer-centric rather than web-centric. Our plan with Authorea was to bring the writing process and its products (scientific manuscripts) to the web. We're a small group of former researchers from a variety of different backgrounds, including life sciences, physics, math, computer science, and even the classics!

Publishing content online: Authorea or Issuu?
Deciding between Authorea and Issuu? Choosing the right way to publish your content online should be based on what is best for your business. Here are some key comparisons to help you decide:

1. Presenting content in HTML or PDF. Today, content is primarily read on the web. However, content is still largely created for the printed page. This disconnect has led to the creation of services like Issuu, making it possible to display PDFs online in an embedded container. Authorea, a document editor and publisher, was designed to be web-native. Content written and published on Authorea is displayed in HTML as well as PDF. By displaying content in HTML, Authorea offers groups the following advantages:
• Ability to read content easily on mobile
• Ability to track engagement on content
• Enhanced SEO (PDFs are not part of the web, only described on the web)
• Ability to include interactive figures

2. Creating content automatically or with a design team. Many media companies create content designed for the printer (i.e. they create PDFs). Creating beautiful PDFs is possible with the work of designers using InDesign or other tools like Sketch. While a beautiful PDF can be created this way, the design and implementation of custom PDFs is time-consuming with most tools.
Authorea allows groups to set up a variety of custom PDF styles that can be recreated from Word, Google Docs, or an Authorea document with the click of a single button. This is one of the real powers of Authorea: we make it easy to produce content by employing LaTeX "under the hood," the world's most advanced typesetting system. If you want to control every pixel, then you might want to utilize a design team. If you want to produce beautiful reports easily without overhead, then Authorea might be your best bet.

3. Collaboration. Authorea allows people to collaborate simultaneously on their documents. This means that you don't need to send a variety of different file types back and forth via email. With Authorea you can work on one document to create a beautiful report. In this case, writing is publishing.

4. Customizable URL and design. Creating a PDF to put online? That requires an editor (Word or Google Docs), a design application (Sketch or InDesign), and a web host (Issuu, Wordpress, etc.). With Authorea you can write, edit, and publish all from Authorea. We allow groups to fully customize the URL and design of the site to make it easy to incorporate Authorea into the publishing process. In fact, it's so easy that we have effectively made writing the same thing as publishing.

Cancer Research Template
(250 Word Max) Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis vestibulum diam a eleifend tincidunt. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Morbi libero risus, pretium ac urna vel, iaculis sagittis nisl. Duis non sapien justo. Quisque elementum, mauris a ullamcorper pharetra, mauris enim ullamcorper urna, sit amet fermentum sem nulla vel nibh. Etiam id urna non leo condimentum egestas pulvinar sit amet ex. Aenean faucibus eget dui at gravida. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum sodales neque dolor, et mollis turpis viverra non.
Authorea and the American Association for Cancer Research Partner to Streamline Resea...
BROOKLYN, NY, August 28, 2017 – Authorea, the online document editor for researchers, has partnered with the American Association for Cancer Research (AACR), the world's first and largest cancer research organization. By working with the AACR, Authorea aims to help researchers more easily format and submit their papers by offering one-click submission to journals published by the AACR. The AACR, founded in 1907, publishes some of the world's most important discoveries in cancer research and comprises over 37,000 members in 108 countries. Christine Rullo, Publisher and Vice President, Scientific Publications at the AACR, says, "We're excited to partner with Authorea for a more seamless submission option for researchers writing on Authorea." Josh Nicholson, Chief Research Officer at Authorea, adds: "Our association with the AACR will help to advance research writing and editing for clinicians and scientists who work in all aspects of cancer research. Authorea aims to bring document editing into the 21st century while supporting authors in meeting their professional goals." Launched in 2014 and used by over 100,000 researchers from all disciplines in academia as well as leading private research companies, Authorea allows researchers to collaborate in real time for an easier and more seamless writing and publishing experience. The journal templates below assist authors in structuring their manuscript as appropriate for each of the AACR journals.
Templates
• Cancer Research: https://www.authorea.com/templates/cancer_research
• Cancer Immunology Research: https://www.authorea.com/templates/cancer_immunology_research
• Cancer Prevention Research: https://www.authorea.com/templates/cancer_prevention_research
• Molecular Cancer Research: https://www.authorea.com/templates/molecular_cancer_research
• Molecular Cancer Therapeutics: https://www.authorea.com/templates/molecular_cancer_therapeutics
• Cancer Discovery: https://www.authorea.com/templates/cancer_discovery_research_article
• Clinical Cancer Research: https://www.authorea.com/templates/clinical_cancer_research
• Cancer Epidemiology, Biomarkers and Prevention: https://www.authorea.com/templates/cancer_epidemiology_biomarkers_and_prevention

Press Contact
Adyam Ghebre, Outreach, Authorea
+1 (646) 598-9285

About Authorea
Authorea is an online document editor for research and the place where scientific collaboration happens. Authorea is trusted worldwide by leading researchers writing and publishing content in every discipline, from astrophysics to zoology. The online document editor supports a wide range of markup languages and scientific integrations, including the most popular citation management, graphing, and visualization plugins. Authorea is on a mission to accelerate science through a superior web-based research-writing platform that delivers powerful tools and capabilities to researchers.

About the American Association for Cancer Research
Founded in 1907, the American Association for Cancer Research (AACR) is the world's first and largest professional organization dedicated to advancing cancer research and its mission to prevent and cure cancer. AACR membership includes more than 37,000 laboratory, translational, and clinical researchers; population scientists; other health care professionals; and patient advocates residing in 108 countries.
The AACR marshals the full spectrum of expertise of the cancer community to accelerate progress in the prevention, biology, diagnosis, and treatment of cancer by annually convening more than 30 conferences and educational workshops, the largest of which is the AACR Annual Meeting with more than 21,900 attendees. In addition, the AACR publishes eight prestigious, peer-reviewed scientific journals and a magazine for cancer survivors, patients, and their caregivers. The AACR funds meritorious research directly as well as in cooperation with numerous cancer organizations. As the Scientific Partner of Stand Up To Cancer, the AACR provides expert peer review, grants administration, and scientific oversight of team science and individual investigator grants in cancer research that have the potential for near-term patient benefit. The AACR actively communicates with legislators and other policymakers about the value of cancer research and related biomedical science in saving lives from cancer. For more information about the AACR, visit www.AACR.org.

Polymer Template
An abstract of approximately 100 to 150 words identifying the new and significant results of the study must be provided for all manuscripts, including articles, reviews and communications. The abstract should comprise a brief and factual account of the contents and conclusions of the paper as well as an indication of any new information presented and its relevance. Abstracts should be self-contained. References to formulae, equations or references that appear in the main text are not permissible.

Nano Energy
A concise and factual abstract is required. The abstract should state briefly the purpose of the research, the principal results and major conclusions. An abstract is often presented separately from the article, so it must be able to stand alone. For this reason, references should be avoided, but if essential, then cite the author(s) and year(s).
Also, non-standard or uncommon abbreviations should be avoided, but if essential they must be defined at their first mention in the abstract itself.

Keywords
Immediately after the abstract, provide a maximum of 6 keywords, using American spelling and avoiding general and plural terms and multiple concepts (avoid, for example, 'and', 'of'). Be sparing with abbreviations: only abbreviations firmly established in the field may be eligible.

Applied Soft Computing
Abstract content goes here.

Introduction
State the objectives of the work and provide an adequate background, avoiding a detailed literature survey or a summary of the results.
Impact of applying LEM to non-definite statements on definite statements

Solomon Feferman (1928 – 2016) held that statements of arithmetic are definite, while "higher-order" notions (such as the set of all subsets of $\mathbb N$) are vague, and questions about them might not be objectively answerable. He proposed a formal system of mathematics (or, if you will, of set theory) that allows the law of excluded middle to be applied to arithmetical statements, but not to statements which include vague concepts. That is, in the definite realm we may reason in classical logic, while in the less definite realm one can only use intuitionistic logic (let me call this "semi-intuitionistic"). See the thread Is platonism regarding arithmetic consistent with the multiverse view in set theory? for a discussion of that view. In his paper Is the continuum hypothesis a definite mathematical problem?, Feferman has a section called "3. A proposed logical framework for what's definite (and what's not)". There he discusses formal systems that are "semi-intuitionistic" in the sense that these systems somehow distinguish between "definite" and "vague" statements, and depending on which of these two types a statement has, one may or may not apply the law of excluded middle to it. In this present question I don't want to fix a particular "semi-intuitionistic" formal system; I just note that it is possible to formalize this idea.

Question. Is it possible that there are definite, arithmetical statements that are provable in classical logic + ZFC, but not in such a semi-intuitionistic version of set theory?

My motivation for this question comes from my philosophical thoughts on ZFC and my objection that there might be statements provable in ZFC, but for which the proof isn't satisfying in the sense that it applies classical logic to non-definite statements. In my opinion, these proofs should be treated with caution, since I don't believe in the definiteness of general set theoretic statements.
One may criticise my question as vague; for those of you who do, let me put it in other words: are there any results known which demonstrate, for a semi-intuitionistic formal system $K$, that there is a definite statement $s$ not provable in this system, but which is provable in ZFC + classical logic?

lo.logic set-theory mathematical-philosophy
Tastatur

I believe that Nik Weaver may have been the first to propose the hybrid system you mention. See the comments at mathoverflow.net/a/229058/1946. – Joel David Hamkins Aug 13 '16 at 12:49

There are no such statements $s$, at least not for a suitably formalized version of your question, using intuitionistic set theory with Peano arithmetic. Suppose we have an arithmetical statement $\phi$, and $ZFC \vdash \phi$. By Shoenfield absoluteness, $ZF \vdash \phi$. Then by results of Friedman, $IZF \vdash \phi^{-}$, the double-negation translation of $\phi$. So $IZF + PA \vdash \phi$. See Michael Beeson, Foundations of Constructive Mathematics (1985), chapter VIII.3. That chapter shows that this phenomenon is fairly robust to the choice of formalization. – Matt F.

It seems to me that this justifies the use of LEM in the non-definite realm. – Tastatur Aug 13 '16 at 13:43

I once heard that people discussed whether Wiles's proof of Fermat's last theorem relied on the Grothendieck universe axiom. Now I wonder: if an arithmetical statement s is provable in ZFC + the Grothendieck universe axiom, can we infer that s is provable in the semi-intuitionistic system? – Tastatur Aug 13 '16 at 13:52
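The answer's chain of reductions can be summarized in one display (this is only a restatement of the steps above, with $\varphi$ an arithmetical sentence):

```latex
% Reduction chain from the answer above; step labels follow the answer.
\begin{align*}
\mathrm{ZFC} \vdash \varphi
  &\;\Longrightarrow\; \mathrm{ZF} \vdash \varphi
  && \text{(Shoenfield absoluteness; $\varphi$ arithmetical)}\\
  &\;\Longrightarrow\; \mathrm{IZF} \vdash \varphi^{-}
  && \text{(Friedman's double-negation translation)}\\
  &\;\Longrightarrow\; \mathrm{IZF} + \mathrm{PA} \vdash \varphi
  && \text{(classical arithmetic recovers $\varphi$ from $\varphi^{-}$)}
\end{align*}
```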
Discrete and Continuous Dynamical Systems - B, 2015, Volume 20, Issue 5: 1355-1375. doi: 10.3934/dcdsb.2015.20.1355

Euler-Maclaurin expansions and approximations of hypersingular integrals
Chaolang Hu 1, Xiaoming He 2 and Tao Lü 1
1. College of Mathematics, Sichuan University, Chengdu, Sichuan, 610064
2. Department of Mathematics and Statistics, Missouri University of Science and Technology, Rolla, MO 65401
Received: February 28, 2013. Revised: December 31, 2014.

This article presents the Euler-Maclaurin expansions of the hypersingular integrals $\int_{a}^{b}\frac{g(x)}{|x-t|^{m+1}}dx$ and $\int_{a}^{b}\frac{g(x)}{(x-t)^{m+1}}dx$ with arbitrary singular point $t$ and arbitrary non-negative integer $m$. These general expansions are applicable to a large range of hypersingular integrals, including both popular hypersingular integrals discussed in the literature and other important ones which have not been addressed yet. The corresponding mid-rectangular formulas and extrapolations, which can be calculated in fairly straightforward ways, are investigated. Numerical examples are provided to illustrate the features of the numerical methods and verify the theoretical conclusions.

Keywords: Hypersingular integral, Euler-Maclaurin expansion, mid-rectangular quadrature formula, arbitrary singular point, extrapolation.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
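The mid-rectangular (midpoint) formulas and extrapolations mentioned in the abstract rest on the fact that the quadrature error admits an Euler-Maclaurin expansion in powers of the step size, so results at two step sizes can be combined to cancel the leading error term. The sketch below illustrates only that extrapolation principle for a smooth, non-singular integrand; it is not the paper's treatment of hypersingular integrals, and all names are ours.

```python
# Midpoint (mid-rectangular) rule plus one Richardson extrapolation step.
# For a smooth integrand the midpoint error expands as c2*h^2 + c4*h^4 + ...
# (an Euler-Maclaurin-type expansion), so combining step sizes h and h/2
# cancels the h^2 term:  R = (4*M(h/2) - M(h)) / 3  is O(h^4) accurate.
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule with n subintervals of [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def extrapolated(f, a, b, n):
    """One Richardson step built on the h^2 leading error term."""
    coarse = midpoint(f, a, b, n)
    fine = midpoint(f, a, b, 2 * n)
    return (4.0 * fine - coarse) / 3.0

if __name__ == "__main__":
    exact = math.e - 1.0  # integral of exp over [0, 1]
    err_mid = abs(midpoint(math.exp, 0.0, 1.0, 16) - exact)
    err_ext = abs(extrapolated(math.exp, 0.0, 1.0, 16) - exact)
    print(err_mid, err_ext)  # extrapolation sharply reduces the error
```

The same cancellation idea, with the appropriate (non-integer-power) expansion terms, is what makes extrapolation work for the finite-part integrals studied in the paper.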
EPJ Nonlinear Biomedical Physics
Performances of domiciliary ventilators compared by using a parametric procedure
Emeline Fresnel (ORCID: orcid.org/0000-0002-3011-7694) 1,2,3, Jean-François Muir 3 & Christophe Letellier 1,2
EPJ Nonlinear Biomedical Physics, volume 4, Article number: 6 (2016)

Noninvasive mechanical ventilation is sufficiently widely used to motivate bench studies for evaluating and comparing the performances of domiciliary ventilators. In most (if not all) of the previous studies, ventilators were tested in a single condition (or a very few conditions), chosen to avoid asynchrony events. Such a practice does not reflect how the ventilator is able to answer the demand from a large cohort of patients with their inherent inter-patient variability. We thus developed a new procedure according to which each ventilator was tested with more than 1200 "simulated" patients. Three lung mechanics (obstructive, restrictive and normal) were simulated using a mechanical lung (ASL 5000) driven by a realistic muscular pressure. 420 different dynamics for each of these three lung mechanics were considered by varying the breathing frequency and the mouth occlusion pressure. For each of the nine ventilators tested, five different parameter settings were investigated. The results are synthesized in colored maps where each color represents the ventilator (in)ability to synchronize with a given muscular pressure dynamics. A synchronizability ε is then computed for each map. The lung model, the breathing frequency and the mouth occlusion pressure strongly affect the synchronizability of ventilators. The Vivo 50 (Breas) and the SomnoVENT autoST (Weinmann) are well synchronized with the restrictive model (\(\overline {\varepsilon }=86\) and 78 %, respectively), whereas the Elisée 150 (ResMed), the BiPAP A40 and the Trilogy 100 (Philips Respironics) better fit with an obstructive lung mechanics (\(\overline {\varepsilon }=87\), 86 and 86 %, respectively).
Triggering and pressurization performances of the nine ventilators present heterogeneities due to their different settings and operating strategies.

Conclusions. Performances of domiciliary ventilators strongly depend not only on the breathing dynamics but also on the ventilator strategy. A given ventilator may be more adequate than another one for a given patient.

Background

The use of noninvasive ventilation (NIV) increased strongly during the last twenty years and became a common technique for managing acute and chronic respiratory failures. In response to a growing market, manufacturers propose yearly (if not more often) new ventilators with an increasing number of ventilatory modes and settings. Our objective is to help physicians to improve the synchronization between the ventilator pressure cycles and patient breathing cycles, a key feature to optimize comfort and reduce the work of breathing [1]. The number of parameters available in a modern ventilator is indeed significantly greater than in those produced one or two decades ago. Unfortunately, the terminology used to designate these ventilator settings is still not unified and the same abbreviation can sometimes correspond to different quantities, as clearly pointed out a long time ago [2]. Such disparities make a comparison between the different ventilators difficult to establish. In spite of this, several bench studies were published for evaluating and comparing the performances of these devices. However, there is no standardized protocol for bench studies [3] and, for instance, some parameters used to set the lung model or the ventilators are not systematically reported: it is therefore not possible to reproduce most of these studies. Moreover, although bench studies seem to be the most reliable and efficient way to compare mechanical ventilators [4–7], it is difficult to know whether the devices were actually tested in equivalent working conditions, and different studies can sometimes lead to conflicting results.
Each parameter used to set the lung model and the ventilator to perform bench tests should therefore be explicitly specified and translated in the same units (by using specific external measurements) for each device to allow rigorous comparisons. Such a step is required to develop a procedure to compare mechanical ventilators for helping physicians to more objectively select an adequate ventilator for their patient respiratory profile since performances are known to depend on it [4]. Among available ventilatory parameters, the sensitivity to patient inspiratory effort for triggering the pressure rise (here designated as the pressure rise triggering sensitivity or the high pressure triggering sensitivity) appears to be the most significant setting in pressure support ventilation since it directly affects the synchronization between the pressure cycle and the respiratory demand [8]. A lack of synchronization such as the presence of non-triggered cycles reduces patient comfort and increases the work of breathing [1, 9]. It is thus relevant to assess the abilities of ventilators to correctly synchronize their pressure cycles with patient inspiratory demand. Assessing triggering sensitivity performances is a non trivial task [3, 10, 11], mainly because different algorithms are used by the manufacturers to detect inspiratory efforts. It is therefore difficult to propose a standard measure to evaluate their performances. We are not yet able to propose an external measure allowing to compare objectively the sensitivity of the pressure rise triggering rather than counting the different types of asynchrony events. Pressure rise duration is also a key parameter which affects patient's inspiratory work load [12], and must be taken into account for optimizing patient-ventilator interactions. 
A key parameter is the trigger to switch from the high to the low pressure delivered by the ventilator (here designated as the low pressure trigger or the pressure release trigger and often designated elsewhere as the "cycling"). These last two parameters are easier to compare because they are more or less based on similar algorithms and can be easily evaluated by the pressure rise duration and the percentage of the maximum airflow reached during the running cycle, respectively. In the present work, we developed a procedure to test the abilities of domiciliary ventilators to correctly synchronize the pressure cycles they deliver with patient breathing cycles in various conditions. We developed our procedure in trying to overcome some of the weaknesses pointed out in the critical analysis of bench studies evaluating devices for noninvasive ventilation recently published [3]. We therefore focused our attention in elaborating a protocol providing reliable and reproducible results. First, we based our tests by simulating patient breathing cycles with the help of a realistic muscular pressure as developed in [13] for driving our mechanical lung (an ASL 5000, Ingmar Medical, Pittsburgh, USA). In order to avoid the situation where two different ventilators are tested in two different operating conditions, each ventilator was parametrized in a systematic way and was tested with a large set of lung dynamics; from our point of view, this is a very key point since testing a ventilator with a single lung dynamics can lead to two opposite biased situations; i) the lung dynamics was an example chosen because the ventilator manages it in an optimal way and ii) the lung dynamics is unfortunately one of those the ventilator is not able to synchronize with. In the former case, the performances of the ventilator are overestimated while they are underestimated in the latter case. In both cases, the ventilator is incorrectly evaluated. 
We therefore introduce a parametric test, that is, each ventilator is tested with three types of lung mechanics (restrictive, obstructive and normal) whose dynamics is varied over a wide range to simulate a large cohort of patients. In the subsequent part of this paper, a time at which an event occurs is designated by T event, the time interval (the duration) during which a process happens by τ process, and the delay with which an event occurs compared to the expected time by δ event. This notation avoids mistaking times, durations and delays.

Lung model and simulated inspiratory effort

Among the different types of mechanical lungs available, we used a microprocessor-controlled piston, the ASL 5000. The choice of this device results from the fact that it is an active and flexible mechanical lung that has become extensively used for testing performances of domiciliary ventilators [14–19]. It consists of a computer-controlled piston-cylinder unit whose mechanics can be adjusted by realistic parameters such as the resistance of the airways and the lung compliance, and whose dynamics can be varied according to parameters such as the breathing frequency and the amplitude of the muscular pressure. The inter-patient variability observed clinically can therefore be easily simulated. In all cases, ventilators were connected to the mechanical lung via a single tube with intentional leaks. If not compatible with the ventilator, this interface was substituted by a single tube with an expiratory valve. Pressure, airflow and muscular pressure signals were measured with the ASL 5000 software at 512 Hz and stored for subsequent analysis. The data were then processed with the help of a code written by us and based on the definition of asynchrony events as detailed below.
There is not yet a consensus for designing the evolution of the muscular pressure for driving an ASL [3], and all parameter values required to reproduce it are very rarely fully reported, with the exception of Chatburn's studies [20]. Most often a semi-sinusoidal muscle pressure (as predefined in the ASL 5000) is used. Based on the (rare) physiological data available, we thus developed a more realistic evolution of the muscular pressure (Fig. 1) [13].

Fig. 1 Realistic muscular pressure P mus: muscular pressure model used for driving the ASL 5000, as developed by Fresnel et al. [13]

The respiratory muscle pressure was simulated by using two exponential functions, one for inspiration and one for expiration (see [13] for details). The whole breathing cycle thus simulated only depends on two clinical parameters, that is, i) the breathing frequency f b and ii) the mouth occlusion pressure P 0.1 at 0.1 s, representing the stiffness of the inspiratory effort. From these two parameters and the formula governing the exponential evolution of the muscular pressure, the maximum amplitude P max, the duration τ ins of the contraction (corresponding to the inspiratory phase) and the duration τ exp of the relaxation (corresponding to the expiratory phase) are automatically computed by using [13]
$$ \frac{\tau_{\text{ins}}}{\tau_{\text{tot}}} = 0.0125 \, f_{\mathrm{b}} + 0.125 \quad \text{where} \quad \tau_{\text{tot}} = \frac{60}{f_{\mathrm{b}}} \,. $$
According to clinical studies [21–24], most patho-physiological cases are covered when P 0.1 is varied from 0.5 to 10 cmH2O (with an increment equal to 0.5 cmH2O). Combined with the breathing frequency f b, which is varied from 10 to 30 cycles per minute (cpm) with an increment equal to 1 cpm, this allows us to model 420 different ventilatory dynamics, each pair (f b, P 0.1) corresponding to a given lung dynamics (or a "simulated" patient in a clinical equivalent).
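The timing relation above can be evaluated numerically; the following minimal sketch (the function name is ours) derives the cycle durations from the breathing frequency alone:

```python
def breathing_timings(f_b):
    """Cycle timings (in seconds) from the breathing frequency f_b
    (cycles per minute), using tau_ins / tau_tot = 0.0125 f_b + 0.125."""
    tau_tot = 60.0 / f_b                         # total cycle duration
    tau_ins = (0.0125 * f_b + 0.125) * tau_tot   # contraction (inspiration)
    tau_exp = tau_tot - tau_ins                  # relaxation (expiration)
    return tau_tot, tau_ins, tau_exp
```

For f b = 20 cpm, for instance, this gives τ tot = 3 s with an inspiratory fraction of 0.375, i.e. τ ins = 1.125 s.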
The maximum amplitude [13]
$$ P_{\text{max}} = \frac{P_{0.1}}{ 1 - e^{\frac{- 0.1 (f_{\mathrm{b}} + 4 \cdot P_{0.1})}{10}}} $$
of the muscular pressure is thus between 2 and 25 cmH2O, in agreement with the recommendations proposed by Olivieri et al., allowing one to simulate weak (2 cmH2O), normal (8 cmH2O), high (15 cmH2O) and strenuous (25 cmH2O) inspiratory efforts [3]. Three lung mechanics were chosen to simulate normal, obstructive and restrictive disorders in the lung. The corresponding values of the airway resistance and the lung compliance (Table 1) were chosen within the range commonly found in bench studies and are close to the values proposed by Olivieri et al. to model severe obstruction and restriction, respectively [3]. Each ventilator is therefore tested with a total of 1260 different ventilatory dynamics. In each case, 50 breathing cycles were simulated.

Table 1 Values of the airway resistance R and the lung compliance C used to simulate the three lung mechanics

Ventilator modes and the different types of cycles

In this work, we chose to focus on the pressure support mode since it is available in every device on the market and is the mode most often used at home [25]. This ventilation mode consists of a partial support provided by a positive pressure cycle ideally synchronized with the patient's breathing cycles. It offers numerous advantages, among which is the fact that it is often easily accepted by patients [26]. This mode is labeled in various ways in the different ventilators we tested. Thus, it is mode "S" in the Trilogy 100 (Philips Respironics), the BiPAP A40 (Philips Respironics), the S9 VPAP ST (ResMed), the Stellar 100 (ResMed), and the SOMNOvent autoST (Weinmann). It is mode "PS" in the Elisée 150 (ResMed), and mode "PSV" in the Vivo 50 (Breas), the Monnal T50 (Air Liquide Medical Systems) and the Smartair ST (Covidien).
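The amplitude relation for P max can be sketched in the same way; a minimal implementation (ours) of the formula above:

```python
import math

def p_max(p01, f_b):
    """Maximum muscular pressure amplitude (cmH2O) from the mouth
    occlusion pressure P_0.1 (cmH2O) and breathing frequency f_b (cpm)."""
    return p01 / (1.0 - math.exp(-0.1 * (f_b + 4.0 * p01) / 10.0))
```

Over the tested ranges (P 0.1 from 0.5 to 10 cmH2O, f b from 10 to 30 cpm) the denominator stays below 1, so P max always exceeds P 0.1; for P 0.1 = 10 cmH2O and f b = 10 cpm the formula gives about 25.4 cmH2O, the strenuous end of the recommended range.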
Typically, this mode is based on two phases, one with a high pressure during the inspiration and one with a low pressure during the expiration. One pressure cycle was therefore associated with a detection of the inspiratory effort that triggers the pressure rise from the low to the high pressure level, and a detection of the relaxation of the inspiratory demand for triggering the pressure release from the high to the low pressure level. This mode gives priority to the synchronization between the patient breathing cycles (here simulated by the mechanical lung) and the pressure cycles delivered by the ventilator. It is therefore necessary to introduce a terminology that clearly distinguishes the patient breathing cycles from the ventilator pressure cycles. The main characteristics of the ventilator pressure cycles are the delivery of a high pressure P h (corresponding to the so-called IPAP for "inspiratory positive airway pressure") and a low pressure P l (corresponding to the so-called EPAP for "expiratory positive airway pressure" or PEEP for "positive end-expiratory pressure"). The point that motivated this new terminology is that the pressure cycle delivered by the ventilator is not always synchronized with the patient breathing cycle: the high pressure P h is therefore not necessarily delivered during patient's inspiration, hence turning the term "positive inspiratory pressure" ambiguous [27]. To trigger the two pressure switches occurring during a pressure cycle, the ventilators use an algorithm interpreting measurements of the airflow Q v and the pressure P v within the ventilation circuit. First, there is a switch ensuring a transition from the low pressure to the high pressure level at time T h induced by the detection of patient's inspiratory effort; such a switch is commonly triggered from the airflow Q v, using a threshold or a variation of that airflow within an interval of a few milliseconds. 
Second, there is a switch ensuring the pressure release to the low level at time T l, also triggered by a condition on the airflow, commonly expressed as a fraction of the maximum airflow Q v, max achieved during the running cycle (Fig. 2). These switches depend on the here-called high pressure triggering sensitivity η h (or the pressure rise triggering sensitivity) and on the low pressure triggering sensitivity η l (or the pressure release triggering sensitivity), regardless of the ventilator strategy. We prefer to speak in terms of triggering because it corresponds explicitly to the processes actually used by the ventilator to drive the pressure cycles it delivers. During mechanical ventilation, the breathing cycle combines with the pressure cycle to provide the "ventilatory cycle", which is the one actually investigated from the measurements of the airflow and the pressure in the ventilatory circuit. When the pressure cycle is synchronized with the breathing cycle, the ventilatory cycle is also synchronized with the two "primary" cycles. When there are asynchrony events, by definition, the ventilatory cycle is not synchronized with the breathing cycle, thus justifying the distinction between these three different cycles.

Fig. 2 Ventilatory curves from the mechanical lung: the pressure P v and the mechanical lung airflow Q p measured in the mechanical lung (or Q v in the ventilatory circuit in clinical situations) connected to a ventilator in pressure support mode are plotted. P h is the high pressure, P l the low pressure, Q p, max the maximum airflow reached in the running cycle; η h corresponds to the high pressure triggering sensitivity, η l to the low pressure triggering sensitivity and τ pr to the pressure rise duration

Once the pressure rise is triggered, the transition between the low and high pressure levels, corresponding to the level of ventilatory support Δ P=P h−P l, can be reached more or less rapidly.
The duration of this "pressure rise" is computed from time T h at which the pressure rise is triggered by the ventilator and time T pr at which the high pressure P h is reached. This duration is expressed as $$ \tau_{\text{pr}}=T_{\text{pr}} - T_{h} \, $$ where τ pr designates the duration of the pressure rise whereas T pr designates the time at which the high pressure is reached. In order to simplify comparisons, this term τ pr will be used to express the pressure rise duration computed from the measured ventilatory pressure, even when the parameter is not expressed as a duration in the ventilator. The ventilatory cycle can therefore be characterized by the four durations as follows (Fig. 2). First, there is the high pressure phase duration τ h=T l−T h, which includes the pressure rise duration τ pr=T pr−T h. Then there is the low pressure phase duration τ l =T h,(n+1)−T l. The total cycle duration τ tot=T h,(n+1)−T h obviously corresponds to τ h+τ l. In these previous definitions, T h (T h,(n+1)) is the time at which the nth (n+1th) high pressure rise is triggered. Our objective is to assess the performances of ventilators that are connected to three lung models as previously defined. We investigated these performances in varying two of the most important parameters which influence the synchronization between patient's breathing cycles and ventilator pressure cycles. We thus varied the high pressure triggering sensitivity η h [8] and, the pressure rise duration τ pr [28]. Although it is known that the low pressure P l [29] and the pressure support level Δ P=P h−P l [9, 30] are particularly crucial to avoid deleterious asynchrony events such as non-triggered cycles or double-triggered cycles, Costa et al. also focused on η h and τ pr [19]. 
Since each parameter setting (five for each ventilator) is tested for a cohort of 1260 simulated lung dynamics, we limited the number of parameter values: the low pressure P l was therefore set to 5 cmH2O, and the high pressure P h to 20 cmH2O, corresponding to a pressure support of 15 cmH2O. According to Olivieri et al. [3], this low pressure is the most common value, and the retained high pressure is the largest used in bench test studies. In practice, each ventilator has its own process for triggering the pressure rise and may work in a very specific way. Some ventilators, such as the Vivo 50 for instance, use triggering conditions which differ rather significantly from those used by the other devices. Consequently, it is not always possible to have the same operating conditions for all the ventilators involved in a comparative test. We investigated triggering performances for three sensitivity values η h corresponding to the highest, the median and the lowest sensitivity in the range proposed in each ventilator (the highest sensitivity being associated with the smallest inspiratory effort provided by the lung model to trigger the pressure rise). We then chose to assess the sensitivity of the high pressure triggering by measuring the mechanical lung airflow Q p and the muscular pressure P mus when the pressure rise starts, thus allowing direct comparisons. The low pressure triggering sensitivity η l is more consensual among manufacturers. It is commonly defined as a fraction Q l of the maximum airflow Q p, max reached during the running cycle. Depending on the ventilator, there are two ways to define this threshold Q l, which may either correspond to Q l=η l Q p, max or to Q l=(1−η l) Q p, max. This is another source of confusion for clinicians. We will use the first way for reporting the values we selected for η l. In our protocol, in order to limit the number of parameters to vary, η l is set to a given value for each lung mechanics tested.
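Because the two manufacturer conventions for the release threshold are easily confused, a tiny helper (ours, purely illustrative) makes the conversion explicit:

```python
def release_threshold(eta_l, q_p_max, fraction_of_max=True):
    """Airflow threshold Q_l triggering the pressure release.
    fraction_of_max=True  : Q_l = eta_l * Q_p,max (convention used here)
    fraction_of_max=False : Q_l = (1 - eta_l) * Q_p,max (other convention)."""
    return eta_l * q_p_max if fraction_of_max else (1.0 - eta_l) * q_p_max
```

With η l = 0.25 and Q p, max = 1 l/s, the two conventions give thresholds of 0.25 and 0.75 l/s, respectively, hence the need to state which one is used.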
We did our best to set the low pressure triggering sensitivity η l to 25 % for the restrictive mechanics, 30 % for the normal model and 75 % for the obstructive model to reduce expiratory flow limitation which affects these latter patients [31]. In fact, it was shown that recent bilevel ventilators tend to trigger prematurely the pressure release when connected to normal lung mechanics, a tendency exacerbated with a restrictive lung. With our obstructive lung, this release is delayed, a feature which is aggravated in the presence of air leaks [32]. The pressure can rise according to a linear or an exponential function. Most often, there is no information concerning the exact duration of the pressure rise, and it is difficult to know which value is considered by the ventilator. Consequently, in order to allow objective comparison, as we did for the high pressure triggering sensitivity, the pressure rise duration was investigated for the shortest, median and longest settings available in each ventilator. The effective duration was then measured as the duration between time T h at which the high pressure rise is triggered and the time T 0.95P at which the pressure reaches 95 % of the maximum pressure observed during the running cycle. In modern ventilators, two additional parameters are available: they are the minimal (τ h, min) and maximal (τ h,max) durations for which the high pressure is delivered. These two settings may override the pressure release, that is, the ventilator ability to detect the end of patient's inspiratory effort. In the present work, we therefore choose τ h, min (τ h, max) equals to the minimum (maximum) value proposed by the ventilator to avoid any influence of these parameters on our results. Depending on the possibilities offered by the ventilator, we set off the backup frequency or set it at the minimum value. Selected parameter values for the tested ventilators are reported in Table 2. 
Table 2 Minimum, median and maximum values for three parameters which were varied in the domiciliary ventilators tested in our protocol

Dynamics of the lung model-ventilator system

In our simulations, the muscular pressure curve which drives the mechanical lung is noise free and the inspiratory effort initiation is therefore known precisely: the corresponding time is designated by T 0. Since the high pressure rise starts at time T h, the high pressure triggering delay δ h thus corresponds to
$$ \delta_{\mathrm{h}} = T_{\mathrm{h}} - T_{0} \,. $$
Note that in clinical practice the inspiratory effort initiation is determined when the esophageal pressure decreases by 1 cmH2O [30] (and not when it starts to decrease), and the measured delay is therefore shortened by 50 to 100 ms, depending on the breathing frequency. It was shown that dyspnea is reduced when the clinically measured δ h is less than 100 ms in ventilated patients [10]: when the delay is greater than 100 ms, there is an increased patient work of breathing and a lower efficiency of the ventilatory support [5]. To compensate for the underestimation of δ h in our simulations compared to clinical studies, we considered that a correct delay δ h must be less than 200 ms. In the present work, we considered that the inspiratory effort ends when the muscular pressure P mus reaches 99 % of the maximum amplitude P max: the corresponding time is designated by T max. The triggering delay δ l of the pressure release thus corresponds to
$$ \delta_{\mathrm{l}} = T_{\mathrm{l}} - T_{\text{max}} \, $$
where T l is the time at which the pressure release is triggered. The delay δ h is always positive, but the delay δ l can have a negative or a positive value, depending on whether the pressure release triggering is advanced or delayed with respect to the end of the inspiratory effort, respectively.
An ideal pressure cycle would therefore mean a perfect synchronization between the beginning of the inspiratory effort and the pressure rise triggering (δ h<200 ms), and between the end of the inspiratory effort and the pressure release triggering (|δ l|<300 ms). When there is a lack of synchronization, this is, by definition, an asynchrony event. It should be clear that, sometimes, an asynchrony event can be tolerated or even wanted: for instance, in the case of patients with acute COPD, backup cycles can be welcome. Our aim here is to assess the quality of synchronization between the breathing cycles and the pressure cycles delivered by the ventilator. We do not intend to investigate what the ventilator settings could be for optimal assistance of a given patient. This study was conducted to help clinicians to know when a good "mechanical" synchronization can be obtained.

Detecting cycles and asynchrony events

Simulations were performed for each ventilator included in the protocol and data were stored for an automatic analysis using our detection algorithm, working with the time series of the muscular pressure P mus, the pressure P v measured in the piston chamber (nearly corresponding to the pressure in the ventilatory circuit) and the mechanical lung airflow Q p (assimilated to the patient airflow). These three time series were provided by the ASL 5000 and were filtered with a Butterworth filter acting as a low-pass filter whose cutoff frequency was set at 10 Hz to reduce the measurement noise. The breathing cycles were determined by detecting times T 0 and T max from each oscillation of the muscular pressure. During each of them, the number of times the pressure P v becomes greater than a threshold value (equal to \(\frac {P_{\mathrm {h}}}{2}\)) allowed us to determine whether this is a non-triggered, single-triggered or double-triggered cycle (sometimes we may also observe multi-triggered cycles). All these triggerings must occur for T 0<T<T max.
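A minimal sketch of this counting rule (function and variable names are ours; the actual algorithm also checks backup cycles against the backup frequency). Self-triggered cycles, detected when the triggering occurs after T max, are included as well:

```python
def classify_triggering(crossing_times, t_0, t_max):
    """Classify one breathing cycle from the times at which the ventilator
    pressure crosses the P_h/2 threshold upward. Cycles triggered a single
    time still need the delays delta_h and delta_l to be declared
    synchronous or with advanced/delayed pressure release."""
    inside = [t for t in crossing_times if t_0 < t < t_max]
    if not inside:
        # no triggering during the effort: self-triggered or non-triggered
        return 'ST' if any(t >= t_max for t in crossing_times) else 'NT'
    if len(inside) == 1:
        return 'single'        # candidate synchronous cycle
    return 'DT'                # double- (or multi-) triggered
```

For example, a cycle with one threshold crossing at t = 1.0 s during an effort spanning 0.5 to 2.0 s is classified as 'single', while two crossings in that window give 'DT'.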
Self-triggered (or auto-triggered) cycles were detected when the high pressure triggering occurs at a time T such that T>T max. They were thus easily distinguished from the other types of triggered cycles. In order to detect any backup cycle (if the considered ventilator did not allow this option to be disabled), the duration between successive high pressure triggerings was calculated and compared to the backup frequency. Among the cycles for which the high pressure is triggered a single time during the inspiratory effort, we then computed the two delays δ h and δ l. When δ h≤200 ms (nearly always the case) and when |δ l |≤300 ms, the cycle was said to be synchronous (or normal). When δ l<−300 ms (δ l>+300 ms), the cycle was said to be a cycle with an advanced (delayed) pressure release. The characteristics of the different types of cycles are reported in Table 3.

Table 3 The different types of cycles and their corresponding characteristics

Once the cycles were detected and classified, various markers were computed to quantify ventilator performances. These markers address three main aspects. First, in order to characterize the high pressure triggerings, we measured the mechanical lung airflow Q p(T h) and the muscular pressure P mus(T h) at the time T h at which the pressure rise is triggered. We computed the triggering delay δ h of the pressure rise and the lung model inspiratory work of breathing
$$ W_{\mathrm{h}} = \int_{t= T_{0}}^{T_{\mathrm{h}}} (Q_{\mathrm{p}} (t) - Q_{\mathrm{p}} (0)) \cdot P_{\text{mus}}(t) \, \mathrm{d}t \, $$
which is expressed in millijoules (mJ). Second, the pressure rise and the pressure release were evaluated as follows. We measured the actual pressure rise duration
$$ \tilde{\tau}_{\text{pr}} = T_{0.95P} - T_{\mathrm{h}} \, $$
between the time T h and the time T 0.95P at which the measured pressure P v reaches 95 % of the maximum pressure of the running cycle.
We computed the mean high pressure $$ \tilde{P}_{\mathrm{h}} = \frac{1}{T'_{0.95P}-T_{0.95P}} \, \int_{t=T_{0.95P}}^{T'_{0.95P}} P_{\mathrm{v}} (t) \, \mathrm{d}t $$ delivered between time T 0.95P as previously defined and time \(T^{\prime }_{0.95P}\) at which the pressure returns below 95 % of the maximum pressure. This mean pressure \(\tilde {P}_{\mathrm {h}}\) allowed to check whether the high pressure P h set on the ventilator was actually provided. We also computed the triggering delay $$ \delta_{\mathrm{l}} = T'_{0.95P} - T_{\text{ie}} \, $$ of the pressure release where T ie is the time at which inspiration ends and expiration starts. Third, we computed the work W 0.95P delivered by the ventilator during the pressure rise, that is, between time T h and time T 0.95P . This marker defines the amount of ventilatory assistance provided to the lung model in response to its inspiratory effort; it characterizes the efficiency with which the high pressure level is reached. The work $$ W_{0.95P} = \int_{t=T_{\mathrm{h}}}^{T_{0.95P}} (P_{\mathrm{v}} (t) - P_{\mathrm{l}}) \cdot Q_{\mathrm{p}} (t) \, \mathrm{d}t \, $$ is expressed in Joules (J). The advantage of this work is that it characterizes the whole pressure rise and not only an arbitrary part of it. We also computed the power $$ \mathcal{P}_{0.95P} = \frac{1}{\tilde{\tau}_{\text{pr}}} \, W_{0.95P} \, $$ delivered by the ventilator; it is expressed in Watt (W). To complete these markers, we computed the minute volume $$ V_{\mathrm{m}} = V_{\text{max}} - V_{\mathrm{R}} \,, $$ insufflated into the mechanical lung. Here V max is the maximum volume reached during the running cycle and V R is the residual volume. It allows to check whether the delivered volume per minute is greater than 8 l/min, that is, than the volume commonly required by a patient at rest. 
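On sampled signals, the work and power markers above reduce to numerical quadrature; the following sketch (array and function names are ours) approximates W 0.95P and the power by the trapezoidal rule, assuming the 512 Hz sampling reported earlier. The result is in the units of the input signals; converting to joules requires the appropriate unit factors.

```python
import numpy as np

def rise_work_and_power(p_v, q_p, i_h, i_95, p_l, fs=512.0):
    """Work delivered between the triggering index i_h (time T_h) and the
    index i_95 (time T_0.95P), and the corresponding mean power, using
    trapezoidal integration of (P_v - P_l) * Q_p over the pressure rise."""
    integrand = (p_v[i_h:i_95 + 1] - p_l) * q_p[i_h:i_95 + 1]
    # trapezoidal rule with a uniform sampling step of 1/fs seconds
    work = 0.5 * np.sum(integrand[1:] + integrand[:-1]) / fs
    tau_pr = (i_95 - i_h) / fs        # measured pressure rise duration
    return work, work / tau_pr
```

As a sanity check, a constant overpressure of 1 with a constant airflow of 1 over a 1 s rise yields a work of 1 and a power of 1 in the corresponding units.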
Colored maps and synchronizability

In order to facilitate the interpretation of our measurements, we constructed maps spanned by the breathing frequency f b and the mouth occlusion pressure P 0.1 by transforming the rate of detected asynchrony events (computed over 50 breathing cycles) for each pair (f b, P 0.1) into a colored pixel located according to the pair (f b, P 0.1). The color is allocated according to the rates of the different types of asynchrony events, as reported in Table 4. For instance, an indigo pixel means that more than 85 % of the cycles are synchronous, less than 10 % are cycles with advanced pressure release, less than 10 % are cycles with delayed pressure release, less than 10 % are double-triggered and less than 10 % are non-triggered, self-triggered or backup cycles. This color thus corresponds to a very good synchronization between the breathing cycles and the ventilator pressure cycles. Contrary to this, a red pixel (compared to indigo, red is the color at the opposite end of the visible light spectrum) is associated with more than 85 % of non-triggered, self-triggered or backup cycles. We constructed our color scale (Fig. 3) assuming that cycles can be ranked according to their effect on patient's comfort as
$$\text{SC} \triangleright \text{Tsh} \triangleright \text{DT} \triangleright \text{NT} $$
where Tsh does not distinguish cycles with advanced or delayed pressure release, and NT does not distinguish non-triggered, self-triggered and backup cycles. A red pixel thus corresponds to a very poor synchronizability. Each map is made of 420 colored pixels, and shows the ability of the ventilator to synchronize its pressure cycles with the breathing cycles simulated by 420 different lung dynamics, that is, by 420 different simulated patients with a given lung mechanics.

Fig. 3 Color scale: this color scale is used to encode the rates of asynchrony events counted for each pair (f b, P 0.1).
Indigo (left end of the spectrum) represents an excellent synchronizability, which is gradually deteriorated as the color changes toward red (right end of the spectrum), corresponding to a very poor synchronizability

Table 4 Color legend for encoding the rates of asynchrony events computed over 50 breathing cycles for each pair (f b, P 0.1). Since self-triggered and backup cycles are not triggered by an inspiratory effort, they are all designated as "non-triggered" cycles

This color scale allows us to encode the gradual emergence of asynchrony events depending on the lung dynamics characterized by the pair (f b, P 0.1). A uniform domain corresponds to a stable behavior of the ventilator with given rates of asynchrony events. When various colors (not close to each other in the spectrum) co-exist in a neighborhood (more or less like in a patchwork), this means that there is an instability in the interactions between the lung model and the ventilator, that is, one may expect that, in such a situation with a real patient, the type of cycles could significantly change over a short duration (a real patient always breathes with a cycle-to-cycle dispersion in the breathing frequency and in the occlusion pressure). When the color changes in a smooth way (according to the color spectrum), this means that the domain for which the ventilator works well is rather well defined: it is thus possible to state without any ambiguity for which (sub-)cohort of simulated patients the ventilator can be recommended.
To quantify the global ventilator synchronizability over a map, that is, for a cohort of simulated patients with a given lung mechanics (obstructive, restrictive or normal) and for a given set of values for ventilator parameters, we computed a synchronizability \(\varepsilon _{\tilde {f}_{\mathrm {b}}, \tilde {P}_{0.1}}\) (where \(\tilde {f}_{\mathrm {b}}\) and \(\tilde {P}_{0.1}\) are the discrete values retained for varying the breathing frequency and the occlusion pressure, respectively) which takes into account the number of each type of cycles as $$ \varepsilon_{\tilde{f}_{\mathrm{b}}, \tilde{P}_{0.1}} = 1 - \frac{1}{N_{\mathrm{c}}} \sum_{n=1}^{N_{\mathrm{c}}} e_{n} \, $$ where N c is the number of breathing cycles simulated (N c=50 in our case) and $$\begin{array}{*{20}l} e_{n} = \left| \begin{array}{lll} 0 & \text{if the}\ n\text{th cycle is SC} \\[0.1cm] \frac{1}{4} & \text{if the}\ n\text{th cycle is Tapr or Tdpr} \\[0.3cm] \frac{1}{2} & \text{if the} n\text{th cycle is DT} \\[0.3cm] 1 & \text{if the} n\text{th cycle is NT.} \end{array} \right. \end{array} $$ Synchronizability \(\varepsilon _{\tilde {f}_{\mathrm {b}},\, \tilde {P}_{0.1}}\) is thus equal to 1 when the ventilator pressure cycles are well synchronized with the breathing cycles (100 % of synchronous cycles). This synchronizability is decreased as the number of asynchrony events increases. A mean synchronizability is then computed for each map according to $$ \varepsilon = \frac{1}{N_{\tilde{f}_{\mathrm{b}}} \times N_{\tilde{P}_{0.1}}} \sum_{i=1}^{N_{\tilde{f}_{\mathrm{b}}}} \sum_{j=1}^{N_{\tilde{P}_{0.1}}} \varepsilon_{ij} \,, $$ where (i,j) represent the pair of discrete values of (f b,P 0.1) characterizing the lung dynamics; \(N_{\tilde {f}_{\mathrm {b}}}\) and \(N_{\tilde {P}_{0.1}}\) are the numbers of values retained for the breathing frequency f b and the occlusion pressure P 0.1 for constructing the colored map, respectively. 
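The weighting scheme above translates directly into a short routine; a sketch (names are ours) computing ε for one (f b, P 0.1) pair from the list of classified cycles, and the mean ε over a map given as a flat list of per-pixel values:

```python
# weight e_n of each cycle type (SC synchronous, Tapr/Tdpr advanced/delayed
# pressure release, DT double-triggered, NT non-/self-triggered or backup)
WEIGHTS = {'SC': 0.0, 'Tapr': 0.25, 'Tdpr': 0.25, 'DT': 0.5, 'NT': 1.0}

def synchronizability(cycle_types):
    """Epsilon for one (f_b, P_0.1) pair: 1.0 when all N_c cycles are
    synchronous, decreasing as asynchrony events accumulate."""
    return 1.0 - sum(WEIGHTS[c] for c in cycle_types) / len(cycle_types)

def mean_synchronizability(eps_values):
    """Mean epsilon over the 21 x 20 = 420 pixels of a colored map."""
    return sum(eps_values) / len(eps_values)
```

For instance, 50 synchronous cycles give ε = 1, 50 non-triggered cycles give ε = 0, and a half-and-half mix of synchronous and double-triggered cycles gives ε = 0.75.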
In our case, \(N_{\tilde{f}_{\mathrm{b}}} = 21\) and \(N_{\tilde{P}_{0.1}} = 20\), yielding 21×20 = 420 pixels in each map. A synchronizability \(\varepsilon\) close to 1 (100 %) corresponds to an excellent synchronization of the ventilator pressure cycles with the breathing cycles provided by the mechanical lung; when \(\varepsilon\) is close to 0, there are many asynchrony events.

Comparative results

Our results were averaged over the five parameter settings tested for each ventilator (the maximum high pressure triggering sensitivity combined with the minimum, median and maximum pressure rise durations, and the minimum pressure rise duration combined with the minimum and median high pressure triggering sensitivities). To compare ventilator performances, which depend on the lung model to which the ventilator is connected, we performed a normality test (Shapiro-Wilk) to check whether the markers were normally distributed, with statistical significance set at p < 0.05. If normality was rejected, a Wilcoxon rank-sum test was performed to determine whether one distribution was stochastically greater than the other. If normality was not rejected, a Student t-test (if the variances of the two distributions were equal) or a Welch t-test (if they differed) was performed to determine whether the two sets of data were significantly different from each other. In order to assess the disparities in synchronizability and performance between the different ventilators, we performed a Kruskal-Wallis rank-sum test, the non-parametric equivalent of the ANOVA, to test whether the samples originate from the same distribution. This procedure was applied to eight recent domiciliary ventilators: the BiPAP A40 (Philips Respironics), Elisée 150 (ResMed), Monnal T50 (Air Liquide Medical Systems), S9 VPAP ST (ResMed), SomnoVENT autoST (Weinmann), Stellar 100 (ResMed), Trilogy 100 (Philips Respironics), Vivo 50 (Breas), and an older one, the Smartair ST (Covidien).
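The statistical decision tree above can be sketched as follows (a sketch assuming SciPy; the text does not name the equality-of-variance check, so Levene's test is our assumption):

```python
import numpy as np
from scipy import stats

ALPHA = 0.05  # statistical significance threshold used in the study

def compare_markers(a, b):
    """Compare two marker samples following the decision tree described above.

    Returns the name of the selected test and its p-value.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # Shapiro-Wilk normality check on both samples
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a < ALPHA or p_b < ALPHA:
        # Normality rejected: Wilcoxon rank-sum test
        _, p = stats.ranksums(a, b)
        return "wilcoxon-rank-sum", p
    # Equality of variances (assumed check: Levene's test)
    _, p_var = stats.levene(a, b)
    equal_var = p_var >= ALPHA
    _, p = stats.ttest_ind(a, b, equal_var=equal_var)
    return ("student-t" if equal_var else "welch-t"), p
```

The Kruskal-Wallis comparison across all nine ventilators would similarly be `stats.kruskal(*samples)`.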
Selected values of the settings for these ventilators are reported in Table 2. All the ventilators were tested in barometric regulation (the pressure is controlled), even when other regulations were available.

Synchronizability \(\varepsilon\)

The ability of a ventilator to synchronize the pressure cycles it delivers with the simulated breathing cycles depends on the lung model used: the mean synchronizability was significantly (p < 0.05, Wilcoxon rank-sum test) smaller with the restrictive model (\(\overline{\varepsilon} = 56.6 \pm 26.5~\%\)) than with the obstructive model (\(\overline{\varepsilon} = 68.6 \pm 24.7~\%\)). As shown in Fig. 4, performance differs from one ventilator to another, as confirmed by the Kruskal-Wallis rank-sum test (p < 0.01 for both the restrictive and obstructive lung models), which indicates that the synchronizabilities computed for the nine ventilators do not belong to a single distribution. With the restrictive model, the Vivo 50 and SomnoVENT autoST have a maximum synchronizability greater than 90 %, that is, close to an optimal synchronization with the pulmonary model. Contrary to this, the S9 VPAP ST and Stellar 100 have a minimum synchronizability equal to 0 % with the largest high pressure triggering sensitivity and the smallest pressure rise duration. With the obstructive model, the BiPAP A40, Elisée 150 and Trilogy 100 have the best performances, presenting a maximum synchronizability greater than 90 %. The Monnal T50 and Smartair ST present a very poor synchronizability for low sensitivities of the pressure rise triggering when connected to this lung mechanics.

Box plot representation of the synchronizabilities. Synchronizabilities computed for the ventilators connected to the restrictive (a) and obstructive (b) models for all the different settings.
The bottom and top of each box are the first and third quartiles, whereas the ends of the whiskers are the minimum and maximum values.

Asynchrony events

With the restrictive model, self-triggered cycles are the asynchrony events which most reduce the synchronizability. These cycles were encountered with more than 50 % of the ventilators tested, mainly for large sensitivities of the high pressure triggering. This was observed with the Monnal T50, S9 VPAP ST, Stellar 100, BiPAP A40, Trilogy 100 and SomnoVENT autoST (Fig. 5). Double-triggered cycles may also occur and reduce the synchronizability; they were observed with the S9 VPAP ST, the Stellar 100 and, for a few ventilatory dynamics, the Vivo 50. These two asynchrony events were due to a large pressure rise triggering sensitivity; consequently, more pressure cycles were delivered than required by the simulated breathing cycles.

Colored maps showing the incidence of two asynchrony events. Incidence of self-triggered cycles (lime, orange, red pixels) and double-triggered cycles (yellow pixels) for two ventilators connected to the restrictive lung model.

With the obstructive model, the time constant (equal to \(RC\)) of the lung model is large and induces slow variations in the airflow \(Q_{\mathrm{p}}\). Detecting the inspiratory effort may therefore be difficult, as evidenced by the numerous non-triggered cycles we detected, especially with the Monnal T50, the Smartair ST and, to a lesser extent, the Vivo 50. Most of the time, small sensitivities of the high pressure triggering led to ineffective inspiratory efforts from the lung model.

Triggering performances

Mean delays \(\overline{\delta}_{\mathrm{h}}\) (Fig. 6) were significantly (p < 0.01, Welch t-test) shorter with the restrictive model (\(\overline{\delta}_{\mathrm{h}} = 131 \pm 27\) ms) than with the obstructive model (\(\overline{\delta}_{\mathrm{h}} = 187 \pm 41\) ms).
The Vivo 50 and Trilogy 100 had the shortest triggering delays of the pressure rise (≤ 100 ms) with the restrictive model, whereas the Smartair ST presented the longest delay (> 170 ms). The Monnal T50 and SomnoVENT autoST had good triggering performances with the obstructive model, since the delays \(\overline{\delta}_{\mathrm{h}}\) were less than 140 ms with these two devices. The BiPAP A40, S9 VPAP ST, Stellar 100, Trilogy 100 and Vivo 50 presented the largest triggering delays (\(\delta_{\mathrm{h}} > 200\) ms).

High pressure triggering delays \(\overline{\delta}_{\mathrm{h}}\). Mean delays \(\overline{\delta}_{\mathrm{h}}\) for the nine ventilators (at their respective maximum high pressure triggering sensitivity) connected to the restrictive and obstructive models.

Triggering delays of the pressure rise measured with the restrictive lung mechanics are significantly (p = 0.015, Student t-test) different from those measured with the obstructive model (Fig. 7).

Mean triggering delays \(\overline{\delta}_{\mathrm{h}}\) ranked in ascending order. High pressure triggering delays measured with the restrictive lung mechanics (blue) ranked in ascending order, and the corresponding delays with the obstructive mechanics (red).

In other words, when a device has small (large) delays with the restrictive model, it has large (small) delays with the obstructive model. This indicates that when the triggering strategy developed by a manufacturer for a ventilator is efficient for one lung mechanics, it might not perform as well for the other. Some ventilators may therefore be more dedicated to one type of pulmonary disease than others. The mechanical lung work of breathing \(W_{\mathrm{h}}\) required to trigger the pressure rise was significantly (p < 0.05, Welch t-test) greater, in absolute value, with the obstructive model (\(\overline{W}_{\mathrm{h}} = -2.26 \pm 1.66\) mJ) than with the restrictive model (\(\overline{W}_{\mathrm{h}} = -0.72 \pm 0.62\) mJ).
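The work of breathing \(W_{\mathrm{h}}\) is not defined explicitly in this excerpt; a plausible sketch (our assumption, including the unit conversions) integrates the muscular pressure times the airflow up to the triggering time \(T_{\mathrm{h}}\):

```python
import numpy as np

CMH2O_TO_PA = 98.0665        # 1 cmH2O in pascals
LMIN_TO_M3S = 1.0 / 60000.0  # 1 l/min in m^3/s

def work_of_breathing(t, p_mus, q_p, t_h):
    """Assumed definition: W_h = integral of P_mus * Q_p from 0 to T_h (joules).

    t: sample times (s); p_mus: muscular pressure (cmH2O);
    q_p: airflow (l/min); t_h: high pressure triggering time (s).
    """
    t, p_mus, q_p = map(np.asarray, (t, p_mus, q_p))
    m = t <= t_h
    integrand = p_mus[m] * CMH2O_TO_PA * q_p[m] * LMIN_TO_M3S  # watts
    dt = np.diff(t[m])
    # Trapezoidal rule over the retained samples
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt))
```

With these conventions, a constant effort of −1 cmH2O and an airflow of 6 l/min sustained over 200 ms yield about −2 mJ, the order of magnitude reported above (the effort is negative, hence the negative work).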
The greater work of breathing with the obstructive model reveals that the longer delay required to detect the simulated inspiratory efforts for triggering the pressure rise may induce an increase of the work of breathing and, consequently, a less effective unloading of the respiratory muscles. The airflow \(Q_{\mathrm{p}}\) and the pressure \(P_{\mathrm{mus}}\) at the time \(T_{\mathrm{h}}\) at which the pressure rise is triggered were also computed. Nevertheless, no statistical test was possible, due to some configurations for which asynchrony events were so frequent that the synchronizability was close to 0. The measured airflows were consistent with the announced values for the BiPAP A40 and Trilogy 100, whose triggering sensitivities are provided in l/min. No direct correspondence was found for the other ventilators, and such an assessment was impossible when the triggering sensitivities were given without any unit, or qualitatively as in the S9 VPAP ST or the Stellar 100. This is the main reason why it is rather difficult to compare the performances of the different ventilators. In order to distinguish different strategies for triggering the pressure rise, we plotted the mechanical lung airflow and the muscular pressure at the time the pressure rise is triggered (versus an index designating each pair \((f_{\mathrm{b}}, P_{0.1})\) by an integer between 1 and 420) (Fig. 8). These curves provide patterns which allow us to classify the ventilators into three groups, suggesting a similar response from the ventilators of a given group over the cohort of simulated patients. These different patterns explain why it is difficult to reliably compare triggering performances across ventilators.

Mechanical lung airflow and muscular pressure at high pressure triggering. Mechanical lung airflow \(Q_{\mathrm{p}}\) and muscular pressure \(P_{\mathrm{mus}}\) measured at the time \(T_{\mathrm{h}}\) at which the pressure rise is triggered, for every breathing dynamics tested. Each segment represents the values measured for one given \(P_{0.1}\) (increasing from left to right) and all breathing frequencies.
All ventilators were set to the maximum pressure rise triggering sensitivity. With the restrictive model, the minimum airflow required to trigger the pressure rise at the lowest sensitivity is large for the Monnal T50 (\(Q_{\mathrm{p}} \geq 47.8\) l/min, corresponding to \(P_{\mathrm{mus}} \leq -8.5\) cmH2O) and the Smartair ST (\(Q_{\mathrm{p}} \geq 46.2\) l/min, \(P_{\mathrm{mus}} \leq -8.2\) cmH2O). This explains the numerous non-triggered cycles observed when these ventilators are connected to the restrictive model. To a lesser extent, the required airflow was also quite large with the Vivo 50 (\(Q_{\mathrm{p}} \geq 18.3\) l/min, \(P_{\mathrm{mus}} \leq -3.9\) cmH2O) for the minimal triggering sensitivity of the pressure rise. With the obstructive model, non-triggered cycles with the Monnal T50 and Smartair ST were so frequent that it was not possible to correctly compute the mechanical lung airflow \(Q_{\mathrm{p}}\) or the muscular pressure \(P_{\mathrm{mus}}\) at the time the pressure rise is triggered. In fact, the minimum triggering sensitivities for the pressure rise available on these two devices do not seem to be clinically pertinent. The required airflow was also quite large with the Vivo 50 (\(Q_{\mathrm{p}} \geq 20.8\) l/min, \(P_{\mathrm{mus}} \leq -13.3\) cmH2O) connected to the obstructive lung model.

Pressure rise and return pressure release

The effective pressure rise duration \(\tilde{\tau}_{\text{pr}}\) depends on the ventilator and, for a given device, on the lung model (Fig. 9). For the restrictive mechanics, the mean pressure rise duration was between 326 ± 99 ms and 598 ± 184 ms, corresponding to a mean range of 272 ms. Three ventilators presented a range of less than 100 ms, that is, reduced possibilities to adapt the pressure rise to the lung dynamics: the Monnal T50 (\(404 < \tilde{\tau}_{\text{pr}} < 420\) ms), the Smartair ST (\(426 < \tilde{\tau}_{\text{pr}} < 507\) ms) and the SomnoVENT autoST (\(486 < \tilde{\tau}_{\text{pr}} < 507\) ms).
The S9 VPAP ST and Stellar 100 had the widest ranges of pressure rise durations (\(222 < \tilde{\tau}_{\text{pr}} < 931\) ms and \(213 < \tilde{\tau}_{\text{pr}} < 860\) ms, respectively) and a linear scale expressed in seconds which corresponds to the measured values, and is therefore easy to use.

Mean pressure rise duration \(\tilde{\tau}_{\text{pr}}\). Range of the pressure rise duration \(\tilde{\tau}_{\text{pr}}\) measured for the nine ventilators connected to the restrictive (blue) and obstructive (red) lung models when the corresponding parameter is set to its smallest and largest values.

For the obstructive model, the mean pressure rise duration \(\tilde{\tau}_{\text{pr}}\) was between 274 ± 181 ms and 805 ± 247 ms, that is, a mean range of 531 ms, greater than the one measured with the restrictive model. The measured ranges are small with the Monnal T50, Smartair ST, SomnoVENT autoST and Vivo 50 (less than 400 ms), and wide with the S9 VPAP ST (\(105 < \tilde{\tau}_{\text{pr}} < 1012\) ms) and Stellar 100 (\(169 < \tilde{\tau}_{\text{pr}} < 1016\) ms), as well as with the BiPAP A40 (\(223 < \tilde{\tau}_{\text{pr}} < 966\) ms). The S9 VPAP ST and Stellar 100 (both from ResMed) are the only two ventilators for which the measured pressure rise durations are identical with the restrictive and obstructive lung models, and roughly correspond to the announced settings (given in ms). The triggering delay \(\delta_{\mathrm{l}}\) of the pressure release is most often negative with the restrictive model (\(\overline{\delta}_{\mathrm{l}} = -171\) ms over the nine ventilators and the five parameter settings) and positive with the obstructive model (\(\overline{\delta}_{\mathrm{l}} = +233\) ms), as reported in [33]. The synchronization with the end of the inspiratory effort mostly depends on the pressure rise duration and on the ventilator's ability to estimate the airflow actually delivered to the patient.
Pressure release triggerings were particularly advanced (\(\overline{\delta}_{\mathrm{l}} = -480\) ms) with the Elisée 150 connected to the restrictive model, meaning that cycles with advanced pressure release occurred frequently, even with the longest pressure rise duration (Fig. 10 a). Contrary to this, the delays \(\overline{\delta}_{\mathrm{l}}\) were large with the SomnoVENT autoST connected to the obstructive model (\(\overline{\delta}_{\mathrm{l}} = 646\) ms), leading to numerous cycles with delayed pressure release, even with the shortest pressure rise duration (Fig. 10 b).

Colored maps with pressure release asynchrony events. Two maps showing the incidence of cycles with advanced pressure release (azure domain in the bottom left part) with the Elisée 150 connected to the restrictive mechanics (a), and of cycles with delayed pressure release (blue domain in the top part) with the SomnoVENT autoST connected to the obstructive mechanics (b).

The mean pressure rise duration leading to the smallest triggering delay of the pressure release (\(\overline{\delta}_{\mathrm{l}} = -42 \approx 0\) ms), and thus corresponding to an optimal synchronization with the end of the inspiratory effort, was equal to 511 ms for the restrictive model, and to 297 ms (\(\overline{\delta}_{\mathrm{l}} = +182 < +300\) ms) for the obstructive model. These measured pressure rise durations were significantly different (p < 0.01, Welch t-test) between the two lung mechanics. When connected to the obstructive model, the mean high pressure actually delivered to the mechanical lung was close to the preset value for all the ventilators.
With the restrictive model, the measured high pressure \(\tilde{P}_{\mathrm{h}}\) may strongly depend on the selected pressure rise duration \(\tau_{\text{pr}}\), as exemplified by the BiPAP A40 and Trilogy 100: for these two ventilators with the most sensitive high pressure triggering and the longest pressure rise duration, the measured pressures \(\tilde{P}_{\mathrm{h}}\) are 14.8 cmH2O and 15.7 cmH2O, respectively. Compared to the preset value \(P_{\mathrm{h}} = 20\) cmH2O, this represents a lack of 5.2 and 4.3 cmH2O, respectively. This likely means that what is selected on these two devices (from the same manufacturer) is in fact the derivative of the pressure, \(\frac{\mathrm{d}P_{\mathrm{h}}}{\mathrm{d}t}\) (a slope, as suggested by the translation "pente" in the French version): when it is too small, the pressure release occurs before the preset value \(P_{\mathrm{h}}\) is reached. This is therefore a good example of the ambiguity which can arise from the heterogeneity in the variable settings and strategies. In principle, the physician would expect the preset high pressure to be reached before the pressure release is triggered, a feature which is rather difficult to check prior to any monitoring since neither the duration \(\tau_{\mathrm{h}}\) of the high pressure phase nor the pressure rise duration \(\tau_{\text{pr}}\) (in ms) is known. The BiPAP A40 has a measured pressure rise duration which varies between 306 and 538 ms when connected to the restrictive model and between 233 and 966 ms when connected to the obstructive model (rather similar values were measured with the Trilogy 100). The booklet corresponding to these two ventilators announces a pressure rise duration between 100 and 600 ms.
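This interpretation can be illustrated with simple arithmetic (the low pressure of 4 cmH2O and the insufflation duration of 0.8 s below are assumed for illustration, not measured values):

```python
def delivered_pressure(p_low, p_high, slope, tau_h):
    """Pressure (cmH2O) reached after an insufflation of duration tau_h (s),
    when the 'rise time' setting actually fixes a slope dP/dt (cmH2O/s)."""
    return min(p_high, p_low + slope * tau_h)

# Steep slope: the preset high pressure P_h = 20 cmH2O is reached in time.
steep = delivered_pressure(4.0, 20.0, 40.0, 0.8)    # 20.0
# Shallow slope: the release occurs short of the preset value, consistent
# with the 14.8 cmH2O measured with the BiPAP A40 (a 5.2 cmH2O shortfall).
shallow = delivered_pressure(4.0, 20.0, 13.5, 0.8)  # about 14.8
```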
In fact, in spite of the English name "rise time" used in these two ventilators, we cannot be sure that this parameter is actually a duration, since the manufacturer translated it as "pente" (slope) in French; this would explain why the pressure rise durations were measured over such a wide range, contrary to what was observed with the S9 VPAP ST and the Stellar 100. Further measurements should be performed to determine what exactly is set by this parameter. Contrary to what was observed with the obstructive model, the mean pressure delivered was greater than the preset value with the Elisée 150, Smartair ST and SomnoVENT autoST (for all parameter settings): we measured \(\overline{\tilde{P}}_{\mathrm{h}} = 22.7\) cmH2O, \(\overline{\tilde{P}}_{\mathrm{h}} = 23.0\) cmH2O and \(\overline{\tilde{P}}_{\mathrm{h}} = 24.1\) cmH2O, respectively.

Ventilator pressurization performances

The mean power \(\overline{\mathcal{P}}_{0.95P}\) delivered by the ventilators was significantly (p < 0.01, Wilcoxon rank-sum test) greater with the restrictive lung mechanics (\(\overline{\mathcal{P}}_{0.95P} = 1.09 \pm 0.24\) W) than with the obstructive lung mechanics (\(\overline{\mathcal{P}}_{0.95P} = 0.50 \pm 0.12\) W). This is in agreement with the fact that more work is required to inflate the lungs when the compliance is reduced. This power is strongly correlated with the delivered pressure \(\tilde{P}_{\mathrm{h}}\). In order to evidence such a relationship, we compared the difference \(\Delta\overline{P}_{\mathrm{h}}\) in the mean pressure delivered by the ventilators connected to the restrictive lung model when the pressure rise duration is set to its shortest and longest values (Fig. 11) with the mean relative power \(\Delta\overline{\mathcal{P}}_{\mathrm{h}}\) in the same conditions. These two quantities are strongly correlated (ρ = 0.87, p < 0.01, Pearson's product-moment correlation).
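Such a correlation test is straightforward to reproduce (a sketch assuming SciPy; the values below are illustrative, not the study's measurements):

```python
from scipy import stats

# Hypothetical per-ventilator differences between the shortest and longest
# pressure rise durations (restrictive model).
delta_pressure = [0.3, 1.1, 2.4, 0.8, 3.0, 1.9]     # |delta P_h| (cmH2O)
delta_power = [0.05, 0.21, 0.44, 0.13, 0.60, 0.35]  # delta power (W)

# Pearson's product-moment correlation, as used in the study.
rho, p_value = stats.pearsonr(delta_pressure, delta_power)
# rho is close to 1 here by construction; the study reports rho = 0.87, p < 0.01.
```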
Due to a very limited range for varying the pressure rise duration (as observed with the Monnal T50 and the SomnoVENT autoST), some ventilators do not offer a wide range of possibilities in the support delivered to the lung model when this parameter is varied.

Link between high pressure and power delivered. Differences (in absolute value) in the pressure delivered by the ventilators connected to the restrictive model when the pressure rise duration is the shortest and the longest (blue bars). Differences in the power delivered in the same conditions (red line) are also drawn.

Minute volume \(V_{\mathrm{m}}\)

The mean minute volume insufflated into the lung model was significantly (p < 0.01, Welch t-test) greater when the ventilators were connected to the restrictive lung model (\(V_{\mathrm{m}} = 12.5\) l/min) than when they were connected to the obstructive one (\(V_{\mathrm{m}} = 10.9\) l/min). No significant difference was observed among the mean minute volumes delivered by the different ventilators. The main limitation of the present study is that the ventilators were tested with a mechanical lung which does not fully behave like the human respiratory system. In particular, the pressure support delivered by the ventilator has no feedback on the lung dynamics. However, the patient's inspiratory efforts were modeled in a realistic way and should therefore be considered as representative of the physiological breathing dynamics. Moreover, the ASL 5000 is a mechanical lung which has already been used by other investigators for ventilator benchmarking [15–19]. Another limitation is that our ventilatory circuit was built with a constant and calibrated leak, whereas noninvasive ventilation is often associated with non-intentional leaks around the mask that may vary in time. Another series of tests will therefore be performed (and discussed elsewhere) with a device allowing non-intentional and variable leaks during pressure support, in order to better reflect ventilator behaviour in more realistic conditions.
This is particularly important because ventilator software packages have very different performances in accounting for a non-intentional leak and may induce additional asynchrony events in doing so. A last limitation is due to the disparities inherent to the different strategies developed by the manufacturers to drive their ventilators: this makes some comparisons difficult to establish, as explained previously. In this study, we defined a parametric procedure allowing us to test ventilators connected to a large cohort of simulated patients (and not a single one), therefore providing reliable evaluations of the markers we introduced. It should be clear that our aim is not to provide the best parameter settings for a given patient, but rather to assess how easy it is to obtain a good synchronization between the breathing cycles and the pressure cycles delivered by the ventilator. Indeed, a large "indigo" domain means that a patient with a similar lung mechanics, but whose lung dynamics (characterized by the ventilatory frequency and the occlusion pressure) slightly changes, will remain well synchronized with the pressure cycles delivered by the ventilator. This bench model study highlighted the disparities existing in the abilities of nine domiciliary ventilators to synchronize with various simulated patients. Our results suggest that some ventilators are better designed than others to answer the demand of certain lung mechanics. For instance, the Vivo 50 and SomnoVENT autoST give excellent results when connected to the restrictive model, and the BiPAP A40, Elisée 150 and Trilogy 100 when connected to the obstructive model (Fig. 12). These differences in the observed performances explain, at least partly, why clinicians may encounter difficulties in adequately setting a given ventilator for patients with one type of pulmonary disorder and not for patients with another type of pathology.
The asynchrony events encountered during the simulations were rather characteristic of a given lung mechanics, since the breathing dynamics strongly differs between restrictive and obstructive conditions. With the restrictive model, fast variations in the patient airflow (related to the short time constant of the respiratory system) can lead to self-triggered cycles when the triggering sensitivity of the pressure rise is too large. Particular attention should therefore be paid to setting a suitable triggering sensitivity of the pressure rise to avoid these asynchrony events. Contrary to this, the slow variations in the patient airflow under obstructive conditions may result in non-triggered cycles; large sensitivities of the pressure rise triggering should therefore be preferred. Nevertheless, in the latter case, the disparities existing between the ventilators can make such an adjustment difficult to obtain, since the triggering strategies, as well as the units (when they are provided), differ from one device to another. Even for pressure rise triggering sensitivities expressed in l/min, the correspondence with the actual patient airflow is not direct and can lead to misunderstandings: for instance, the airflow measured at the time \(T_{\mathrm{h}}\) when the pressure rise is triggered (28.0 l/min) is significantly larger than the preset airflow (\(\eta_{\mathrm{h}} = 5\) l/min) in the Monnal T50 connected to the restrictive lung mechanics. The synchronizability maps computed for the maximum, median and minimum pressure rise triggering sensitivities are therefore particularly relevant, since they allow an objective comparison of the triggering performances of the ventilators. When the mouth occlusion pressure \(P_{0.1}\) and the breathing frequency \(f_{\mathrm{b}}\) of a patient with restrictive or obstructive disorders are known, it should be possible to check whether a ventilator can correctly synchronize its pressure cycles with the breathing cycles of the lung model for such a pair \((f_{\mathrm{b}}, P_{0.1})\).
Colored maps with excellent synchronizabilities. Examples of maps computed for ventilators presenting an excellent synchronizability with the restrictive (a and b) and with the obstructive model (c, d and e).

By measuring the triggering delay of the pressure rise, we highlighted the important inter-ventilator variability in detecting the inspiratory efforts: the BiPAP A40, Elisée 150, Trilogy 100 and Vivo 50 are efficient with the restrictive lung mechanics, whereas the Monnal T50 and SomnoVENT autoST presented good performances with the obstructive lung mechanics. Most likely, different triggering strategies explain such disparities, some ventilators (for instance, the Vivo 50, Trilogy 100, Stellar 100, Elisée 150 and BiPAP A40) being excellent with the restrictive lung mechanics but significantly worse with the obstructive mechanics. Some others are good with both types of lung mechanics (the Monnal T50, SomnoVENT autoST and Smartair ST) (Fig. 9). Some triggering strategies are therefore quite specific to one type of pulmonary disease. From the measurements performed on the pressure rise duration, it was shown that long pressure rise durations led to more synchronous cycles with the restrictive lung mechanics, whereas short durations better suit the obstructive lung mechanics. Such a feature could explain why some ventilators do not synchronize properly; indeed, too tight a range of pressure rise durations (as observed with the Monnal T50 and the SomnoVENT autoST with the restrictive mechanics) does not allow the value required for the considered lung mechanics to be reached. Consequently, comparing the minimum, median and maximum pressure rise durations for each ventilator allows one to quickly assess the effect of this setting on the synchronization between the breathing cycles and the ventilator pressure cycles. We showed that all ventilators actually deliver the preset pressure value when connected to the obstructive mechanics.
Contrary to this, when connected to the restrictive mechanics, the pressure actually delivered can be significantly less than the preset value (BiPAP A40, Trilogy 100). Nevertheless, no major effect was observed on the delivered minute volume \(V_{\mathrm{m}}\). Our choices for the pressure release triggering sensitivity \(\eta_{\mathrm{l}}\) according to the lung mechanics (around 25 % for the restrictive model and around 75 % for the obstructive one) could have been optimized by increasing it for the restrictive lung mechanics and decreasing it for the obstructive mechanics, since we observed cycles with advanced pressure release in the former case and cycles with delayed pressure release in the latter. A further study could vary this parameter in order to rigorously investigate its influence on patient-ventilator synchronization. We provided all the parameters required to reproduce the tests performed in this study. We introduced a terminology to distinguish the patient (or mechanical lung) breathing cycles from the ventilator pressure cycles, a key point for clearly describing all the observed events. The present procedure was developed to compare different ventilators in pressure support mode; most of it can be straightforwardly extended to other ventilation modes or types of ventilators. This parametric procedure can also be used to investigate the evolution of the performances provided by various versions of a given device (improved from a mechanical and/or an algorithmic point of view). As an example, we compared the synchronizability \(\varepsilon\) computed for the Vivo 40 (Breas) and for two different software versions dedicated to the Vivo 50: initially poor (\(\varepsilon = 10\) %, see Fig. 13 a), the synchronizability progressively increased (\(\varepsilon = 84\) %, Fig. 13 c), mainly, in this case, because the rate of self-triggered cycles decreased, these cycles being replaced by synchronous cycles; an intermediate, unstable device was produced in between (Fig. 13 b).
Indeed, a heterogeneous synchronizability map such as the one observed with the Vivo 50 equipped with software version 1.74 (Fig. 13 b) is the signature of a control loop which can easily be in conflict with itself.

Evolution of the synchronizability. Synchronizability computed for some of the devices produced by Breas, that is, the Vivo 40 and the Vivo 50 with two different versions of its software. The three devices were connected to the restrictive model with \(\eta_{\mathrm{h}} = 1\) and \(\tau_{\text{pr}} = 1\). Self-triggered cycles (lime, orange, red pixels) were gradually replaced by synchronous cycles (indigo pixels) or cycles with advanced pressure release (azure pixels).

This work was first devoted to developing a new parametric procedure for testing the performances of devices used in mechanical ventilation. The main improvement is that the ventilators were tested over a large cohort of 420 simulated lung dynamics. This takes the inter-patient variability into account and, consequently, provides a rather reliable evaluation of the ventilator performances. By performances, we mainly mean the ability of the ventilators to synchronize their pressure cycles with the breathing cycles simulated by the mechanical lung. The synchronizability we computed thus assesses the ability of the ventilator to correctly answer the inspiratory demand by synchronizing the phase during which the high pressure is delivered with the inspiration (here simulated with a mechanical lung). The clinician who has to treat a patient whose lung mechanics and dynamics can change in time is helped by our results in the sense that he may check whether his patient lies in the middle of an indigo domain rather than in a red domain: a slow and/or limited change in the patient's physiological or pathological state would then not greatly affect the synchronization between the patient's breathing cycles and the pressure cycles delivered by the ventilator.
In order to adjust the parameter settings depending on the lung mechanics, it is widely accepted that the sensitivity for the pressure release must be around 25 % for a restrictive lung mechanics and around 75 % for an obstructive one; we used these "common" parameter values in our study. According to our results, the most critical parameter is the high pressure triggering sensitivity \(\eta_{\mathrm{h}}\). Indeed, for seven of the nine ventilators tested, the best synchronizabilities were obtained by choosing more sensitive triggers for the pressure rise for the obstructive lung mechanics than for the restrictive mechanics. Contrary to this, the most sensitive trigger must be used with the Elisée 150 and the Vivo 50. The pressure rise duration \(\tau_{\text{pr}}\) is influential for these two devices as well as for the Monnal T50. For the two former devices, this parameter most likely balances the lack of effect induced by changing the triggering sensitivity. For the latter (the Monnal T50), the logic is inverted compared to the other ventilators: a larger \(\tau_{\text{pr}}\) is obtained with a smaller parameter value, most likely because this parameter quantifies the slope of the pressure rise and not its duration. When a clinician has a restrictive patient to treat, a Vivo 50 or a SomnoVENT autoST should be very easy to synchronize; for an obstructive patient, a Respironics ventilator or an Elisée 150 could allow an easy synchronization. When a rather small synchronizability is obtained, the number of parameter values for which a good synchronization is observed is smaller and, consequently, more difficult to find. The parameter values maximizing the synchronizability \(\varepsilon\) according to the pulmonary mechanics for each tested ventilator are reported in Table 5. Consequently, it cannot be said that a given ventilator is better than another without specifying the lung mechanics and dynamics: the ventilator must therefore be carefully chosen for each patient.
It is hoped that the synchronizability maps provided by such a study could be useful for guiding the clinician in his choice.

Table 5 Values of the two parameters (high pressure triggering sensitivity η_h and pressure rise duration τ_pr) maximizing the synchronizability ε according to the pulmonary model considered. These values were obtained from the tests performed under the protocol we defined and do not imply that a better combination of settings cannot be found. The pressure release triggering sensitivity η_l is also reported.

E. Fresnel would like to thank ADIR Association for supporting her Ph.D. thesis.

CORIA UMR 6614 — Normandie Université, CNRS-Université et INSA de Rouen, Campus Universitaire du Madrillet, Saint-Etienne du Rouvray, F-76800, France (Emeline Fresnel & Christophe Letellier). ADIR Association, Hôpital de Bois-Guillaume, F-76031, Rouen, France; GRHV EA 3830, CHU Charles Nicolle, Rouen, F-76031, France (Jean-François Muir). Correspondence to Emeline Fresnel.

EF and CL developed the approach of the work and wrote the paper, while JFM provided medical expertise. All authors read and approved the final manuscript.

Licensee Springer on behalf of EPJ. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fresnel, E., Muir, J. & Letellier, C. Performances of domiciliary ventilators compared by using a parametric procedure.
EPJ Nonlinear Biomed Phys 4, 6 (2016). DOI: https://doi.org/10.1140/epjnbp/s40366-016-0033-9

Keywords: Lung model; Patient-ventilator interaction; Ventilator performances
Halal Ninja: Your guide to halal investing

How to Analyze Companies

What makes a company 'good'? Having found a bunch of leads to look into, the next step is to analyze the companies — to sort out the good from the bad. In this post, I describe how I do exactly that. The purpose of a company is to benefit its shareholders. As a result, the return that these shareholders receive on their investment is arguably the most important metric for a company. The de facto method for gauging shareholder returns is aptly called 'return on equity'. The return on equity (or RoE) is just the yearly profit divided by the company's equity:

$$\text{Return on Equity} = \frac{\text{Net Income}}{\text{Equity}}$$

If you're unfamiliar with the term 'equity', it's a measure of how much the outstanding shares of the company are currently worth. (Basically, if you add up all the money that's been ploughed into the company since it started — how much would you get?) To improve RoE, the formula suggests we simply need to increase net income. That's only slightly helpful, mainly because it's not very actionable. Breaking down the formula can get us a much better result:

$$\text{RoE} = \frac{\text{Net Income}}{\text{Sales}} \times \frac{\text{Sales}}{\text{Assets}} \times \frac{\text{Assets}}{\text{Equity}}$$

Notice that the formula hasn't changed; 'sales' and 'assets' cancel out to give us the same end result. The formula, in this form, is known as the 'DuPont model':

$$$\frac{\text{Net Income}}{\text{Sales}}$$$ represents "profitability", which tells us what portion of the revenue a company generates is actually profit.

$$$\frac{\text{Sales}}{\text{Assets}}$$$ represents "operating efficiency", a measure of how well the company employs the assets it has in generating sales.

$$$\frac{\text{Assets}}{\text{Equity}}$$$ represents "leverage". Also known as the $$$\text{Equity Multiplier}$$$, it can be increased by taking on debt (and using that to increase assets).
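To make the decomposition concrete, here's a minimal Python sketch; the figures for this hypothetical company are invented purely for illustration:

```python
def dupont_roe(net_income, sales, assets, equity):
    """Decompose return on equity into the three DuPont factors."""
    profitability = net_income / sales   # net profit margin
    efficiency = sales / assets          # asset turnover
    leverage = assets / equity           # equity multiplier
    roe = profitability * efficiency * leverage
    # By construction, the product equals net_income / equity.
    return roe, profitability, efficiency, leverage

# Hypothetical figures, in millions
roe, margin, turnover, multiplier = dupont_roe(
    net_income=12, sales=150, assets=200, equity=80)
print(f"RoE = {roe:.1%}")  # 12 / 80 = 15.0%
```

Notice that 'sales' and 'assets' cancel out numerically, just as they do algebraically.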
In summary, the 3 ways to improve RoE are through higher profitability, increased operating efficiency and leverage. When assessing whether to invest in a company, it's important to understand the overall RoE trend, as well as the trend of its parts. I'll start with leverage, since that's the most likely to kill a company. Additionally, I want to make sure that any company I invest in doesn't expressly rely on interest for operations — a strong sign that it may not be a halal investment to make. The downside of leverage as a source of capital is that it needs to be paid back no matter what — and that defaulting could result in the company going bankrupt.

Things to look out for when assessing the likelihood of a company defaulting on its debt payments:

- Does the company have debt? If so, assess: liquidity risk (ability to meet short-term needs) and solvency risk (ability to meet long-term obligations)
- How do we measure a company's reliance on debt? Debt (as reported in financial statements)
- How do we assess liquidity risk?
- How do we assess solvency risk? Corporate bond yield (published on FINRA, if available)

Beyond managing the insolvency risk inherent in leveraged companies, we need a way to measure what the shareholders are left with if the company does in fact go bankrupt...

Assessing Liquidation Value

Since debt holders are paid before shareholders in the event of bankruptcy, we'd need to determine what assets are left after the creditors are paid back — and then divide this remainder amongst all the shareholders.
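The default-risk checklist above boils down to a few standard ratios. A sketch with invented balance-sheet figures; the ratio definitions here are the usual textbook ones, not something prescribed by this post:

```python
def debt_ratios(current_assets, current_liabilities,
                total_debt, equity, ebit, interest_expense):
    """A few common screens for the extent of debt, liquidity and solvency risk."""
    current_ratio = current_assets / current_liabilities  # liquidity: short-term needs
    debt_to_equity = total_debt / equity                  # extent of leverage
    interest_coverage = ebit / interest_expense           # solvency: servicing long-term debt
    return current_ratio, debt_to_equity, interest_coverage

# Hypothetical company, figures in millions
cr, de, cov = debt_ratios(current_assets=90, current_liabilities=60,
                          total_debt=100, equity=80,
                          ebit=25, interest_expense=5)
print(cr, de, cov)  # 1.5 1.25 5.0
```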
$$\frac{\text{Tangible Assets} - \text{Liabilities}}{\text{Shares Outstanding}}$$

You can even discount tangible assets to arrive at a more conservative figure or, if you're being criminally conservative:1

$$\frac{\text{Cash} + \text{Marketable Securities} - \text{Liabilities}}{\text{Shares Outstanding}}$$

In summary, when assessing the impact of debt on a company, we'd need to know:

- Extent of debt (debt, equity multiplier)
- Solvency risk
- Liquidation value to shareholders

Beyond analyzing what the company is worth right now, we should consider the potential upside. Of course, we'd want to look at historical performance to get an idea of what we can expect in the near future. Let's talk about the two remaining factors: operating efficiency and profitability. Operating efficiency is an easy one, so we'll get it out of the way before talking about the monster that is profitability. Recall that:

$$\text{Operating Efficiency} = \frac{\text{Sales}}{\text{Assets}}$$

Essentially, it's a measure of how well the company employs the assets it owns to generate sales (i.e. how 'efficient' it is in its day-to-day operations). Another term for this is $$$\text{Asset Turnover}$$$ — the higher, the better. In theory, profitability is an easy one to assess:

$$\text{Profitability} = \frac{\text{Net Income}}{\text{Sales}}$$

Easy. Just divide the profit by the total sales. There's just one problem: the darn $$$\text{Net Income}$$$ has a habit of bouncing around like crazy! So at times, we need to move higher and higher up the income stream — excluding things that would otherwise drive net income below zero and into loss-making territory:

Revenue (Sales)
- COGS (Cost of Revenue)
= Gross Income (Gross Profit)
- Operating Expenses, excluding Depreciation & Amortization (excl. D&A)
= EBITDA
- Depreciation & Amortization
= Operating Income (Operating Profit, EBIT)
+ Other Income
- Interest
- Taxes
= Net Income ⭐ (Profit or Loss, Earnings)

That means looking at things like Operating Income, EBITDA or even Gross Income (you better believe things are desperate when you need to look that far up!) — which exclude things like tax, interest and depreciation & amortization. Compare to industry average ratios to get a sense of what the median performance is like, and how the target company compares to its peers. I like companies that are smaller in size (and therefore more likely to be overlooked/written off) and that are trading at a fraction of their tangible book value. Here, you just need the company to not go bankrupt (which most people already think it will) — and if it escapes bankruptcy, you're in for some massive gains. Also, I'd probably have to consider other 'softer' factors when trying to predict where the company may be in the future. This includes:

- Does management care? Recent board/executive changes
- Read posts from analysts with a successful track record on Seeking Alpha
- Charts (technical analysis), specifically around RSI, resistance levels, etc.
- Optimal portfolio allocation between industries
- How much emphasis I need to place on industry outlook (i.e. macro-indicators, industry research, etc.)

For the sort of companies I'm interested in, the 'deep value' companies, I'm not expecting stellar financials. I come in knowing that there's something wrong, and it's just a question of figuring out whether the risk is real, or perceived.

1. This does not take into consideration preferred shares and their liquidation preference, assuming there are any, of course. In case there are, we'd need to consider the share count, the dividend dues, as well as the liquidation preference. If a company goes bankrupt, for instance, preferred stockholders will get paid before common stockholders.
It's important to remember, however, that if a company is wound up due to bankruptcy, paying creditors takes priority over paying preferred and common stockholders. Another unique feature of preferred stocks is that they have a fixed dividend.
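The two liquidation formulas above translate directly into code. A sketch with invented figures, ignoring the preferred-share complications discussed in the footnote:

```python
def liquidation_value_per_share(tangible_assets, liabilities,
                                shares_outstanding, haircut=0.0):
    """Tangible book value per share, optionally discounting assets by a haircut."""
    residual = tangible_assets * (1 - haircut) - liabilities
    return residual / shares_outstanding

def net_net_per_share(cash, marketable_securities, liabilities, shares_outstanding):
    """The 'criminally conservative' variant: only cash-like assets count."""
    return (cash + marketable_securities - liabilities) / shares_outstanding

# Hypothetical figures (millions, 10M shares outstanding)
print(liquidation_value_per_share(300, 180, 10))                # 12.0
print(liquidation_value_per_share(300, 180, 10, haircut=0.25))  # 4.5
print(net_net_per_share(50, 40, 60, 10))                        # 3.0
```

Comparing these per-share figures against the market price gives a rough sense of the downside protection in a bankruptcy scenario.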
We've looked at how to apply the distributive law in a number of different ways. This is just a fancy way of saying expanding brackets, which we can do even if we have algebraic terms. In this chapter, we are going to look at questions that involve more than one set of brackets. This includes how to expand binomial products, as well as how to simplify algebraic expressions by expanding multiple sets of brackets and then collecting the like terms.

What's a binomial?

We've already come across binomial expressions when we looked at how to expand brackets. Expressions such as $2\left(x-3\right)$ are the product of a term (outside the brackets) and a binomial expression (the sum or difference of two terms). So a binomial is a mathematical expression in which two terms are added or subtracted. They are usually surrounded by brackets or parentheses, such as $\left(x+2\right)$. The most common way to multiply binomials of the form $\left(ax+b\right)\left(cx+d\right)$ is to use the FOIL method, which stands for First, Outer, Inner, Last. The picture below shows how the terms are multiplied when we use the FOIL method.

FOIL in Action

Let's see how this expansion works diagrammatically by finding the area of a rectangle.
Notice that the length of the rectangle is $x+5$ and the width is $x+2$. So one expression for the area would be $\left(x+5\right)\left(x+2\right)$. Another way to express the area would be to split the large rectangle into two smaller rectangles. This way, the area would be $x\left(x+2\right)+5\left(x+2\right)$. Finally, if we add up the individual parts of this rectangle, we get $x^2+5x+2x+10$, which simplifies down to $x^2+7x+10$, the same answer we got when we expanded with the FOIL method.

There are two special cases of binomial expansion that we can use to simplify our calculations. The first case occurs when the expanded form results in a difference of two squares. To see this, let's expand the expression $\left(A-B\right)\left(A+B\right)$ and collect the like terms:

$$\left(A-B\right)\left(A+B\right) = A^2+AB-BA-B^2 = A^2-B^2$$

The second case occurs when the two binomial expressions in the product are the same, which we call a perfect square. Let's start by expanding the expression $\left(A+B\right)^2$:

$$\left(A+B\right)^2 = \left(A+B\right)\left(A+B\right) = A^2+AB+BA+B^2 = A^2+2AB+B^2$$

So to summarise: when we identify a product in the form $\left(A-B\right)\left(A+B\right)$, we can expand this expression and get $A^2-B^2$. When we see an expression in the form $\left(A+B\right)^2$, we can expand and get $A^2+2AB+B^2$.

Expand and simplify the following: $\left(m-6\right)\left(m+3\right)$

Expand the following perfect square: $\left(6y+\frac{1}{2}\right)^2$

Expand and simplify: $5x\left(2x-9y\right)\left(2x+9y\right)$

Expand the following: $\left(2x+1\right)\left(5x+7\right)\left(2x-1\right)$
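For readers who like checking their work with a computer, an expansion such as $\left(x+5\right)\left(x+2\right)=x^2+7x+10$ can be verified by multiplying coefficient lists (constant term first). This short Python sketch illustrates the idea:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, c2, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b  # x^i * x^j contributes to the x^(i+j) term
    return out

# (x + 5)(x + 2): coefficient lists [5, 1] and [2, 1]
print(poly_mul([5, 1], [2, 1]))   # [10, 7, 1]  ->  x^2 + 7x + 10

# Difference of two squares: (x - 3)(x + 3)  ->  x^2 - 9
print(poly_mul([-3, 1], [3, 1]))  # [-9, 0, 1]

# Perfect square: (x + 4)^2  ->  x^2 + 8x + 16
print(poly_mul([4, 1], [4, 1]))   # [16, 8, 1]
```

Note how the middle coefficient vanishes in the difference-of-two-squares case, exactly as the $AB-BA$ cancellation predicts.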
28 June 2017 / Data Science

Neural networks: representation

This post aims to discuss what a neural network is and how we represent it in a machine learning model. Subsequent posts will cover more advanced topics such as training and optimizing a model, but I've found it's helpful to first have a solid understanding of what it is we're actually building and a comfort with respect to the matrix representation we'll use.

Prerequisites: read my post on logistic regression, and be comfortable multiplying matrices together.

Neural networks are a biologically-inspired class of algorithms that attempt to mimic the functions of neurons in the brain. Each neuron acts as a computational unit, accepting input from the dendrites and outputting signal through the axon terminals. Actions are triggered when a specific combination of neurons is activated. In essence, the cell acts as a function in which we provide input (via the dendrites) and the cell churns out an output (via the axon terminals). The whole idea behind neural networks is finding a way to 1) represent this function and 2) connect neurons together in a useful way. I found the following two graphics in a lecture on neural networks by Andrea Palazzi that quite nicely compare biological neurons with our computational model of neurons. To learn more about how neurons are connected and operate together in the brain, check out this video.

A computational model of a neuron

Have you read my post on logistic regression yet? If not, go do that now; I'll wait. In logistic regression, we composed a linear model ${z\left( x \right)}$ with the logistic function ${g\left( z \right)}$ to form our predictor. This linear model was a combination of feature inputs $x_i$ and weights $w_i$.

$$ z\left( x \right) = {w_1}{x_1} + {w_2}{x_2} + {w_3}{x_3} + {w_4}{x_4} + b = {w^{\rm{T}}}x + b $$

Let's try to visualize that. The first layer contains a node for each value in our input feature vector.
These values are scaled by their corresponding weight, $w_i$, and added together along with a bias term, $b$. The bias term allows us to build linear models that aren't fixed at the origin. The following image provides an example of why this is important. Notice how we can provide a much better decision boundary for logistic regression when our linear model isn't fixed at the origin. The input nodes in our network visualization are all connected to a single output node, which consists of a linear combination of all of the inputs. Each connection between nodes contains a parameter, $w$, which is what we'll tune to form an optimal model (tuning these parameters will be covered in a later post). The final output is the functional composition, $g\left( {z\left( x \right)} \right)$. When we pass the linear combination of inputs through the logistic (also known as sigmoid) function, the neural network community refers to this as activation. Namely, the sigmoid is an activation function which controls whether or not the end node "neuron" will fire. As you'll see later, there's a whole family of possible activation functions that we can use.

Comparison to a perceptron unit

Most tutorials will introduce the concept of a neural network with the perceptron, but I've found it's easier to introduce the concept of neural networks by latching onto something familiar (logistic regression). However, for the sake of completeness I'll go ahead and introduce the perceptron unit and note its similarities to the network representation of logistic regression. The perceptron is the simplest neural unit that we can build. It takes a series of inputs, $x_i$, combined with a series of weights, $w_i$, which are compared against a threshold value, $\theta$. If the linear combination of inputs and weights is higher than the threshold, the neuron fires, and if the combination is less than the threshold it doesn't fire.
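Both units are easy to sketch side by side in plain Python; the inputs, weights, bias and threshold below are invented for illustration:

```python
import math

def weighted_sum(x, w):
    """The shared linear part: w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def perceptron(x, w, theta):
    """Step activation: fires (returns 1) only when the sum reaches the threshold."""
    return 1 if weighted_sum(x, w) - theta >= 0 else 0

def logistic_neuron(x, w, b):
    """Sigmoid activation: a smooth output between 0 and 1."""
    return 1 / (1 + math.exp(-(weighted_sum(x, w) + b)))

x = [1.0, 0.0, 1.0]
w = [0.5, 0.5, 0.5]
print(perceptron(x, w, theta=0.8))    # sum = 1.0 >= 0.8, so it fires: 1
print(perceptron(x, w, theta=1.2))    # sum = 1.0 < 1.2, so it doesn't: 0
print(logistic_neuron(x, w, b=-0.8))  # same boundary as theta=0.8, but graded
```

The only difference between the two is the activation applied to the same weighted sum, which is exactly the point made in the comparison that follows.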
We can rewrite the perceptron function by moving the threshold to the left side, and we end up with the same linear model used in logistic regression. The weights $w_i$ in the perceptron algorithm are synonymous with the weights in logistic regression, and the threshold value $\theta$ in the perceptron algorithm is synonymous with the bias $b$.

$$ f_w\left( x \right) = \begin{cases} \sum_i w_i x_i - \theta \ge 0 & \to \text{neuron fires} \\ \sum_i w_i x_i - \theta < 0 & \to \text{neuron doesn't fire} \end{cases} $$

At a high level, they're practically identical - the main difference being the activation function, $g\left( z \right)$, used to control neuron firing. The perceptron activation is a step function from 0 (when the neuron doesn't fire) to 1 (when the neuron fires) while the logistic regression model has a smoother activation function with values ranging from 0 to 1.

Building a network of neurons

The previous model is only capable of binary classification; however, recall that we can perform multi-class classification by building a collection of logistic regression models. Let's extend our "network" to represent this. Note: While I didn't explicitly show the activation function here, we still use it on each linear combination of inputs. I mainly just wanted to show the connection between the visual representation and matrix form. Here, we've built three distinct logistic regression models, each with their own set of parameters. Take a moment to make sure you understand this matrix representation. (This is why matrix multiplication is listed as a prerequisite.) It's rather convenient that we can leverage matrix operations as it allows us to perform these calculations quickly and efficiently. The above example displays the case for multi-class classification on a single example, but we can also extend our input matrix to classify a collection of examples.
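The three-model idea can be sketched as one weight matrix with a row of weights per output neuron; every number below is invented:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer_forward(W, b, x):
    """For each row w_i of W, compute g(w_i . x + b_i): one logistic model per row."""
    return [sigmoid(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
            for wi, bi in zip(W, b)]

# Three logistic-regression "output neurons" sharing two input features
W = [[0.2, -0.5],
     [1.0, 0.3],
     [-0.7, 0.9]]
b = [0.1, -0.2, 0.0]
x = [1.5, 2.0]
print(layer_forward(W, b, x))  # three activations, each in (0, 1)
```

Stacking several input vectors as columns of a matrix extends this to a whole batch of examples in the same way.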
This is not only useful, but necessary for our optimization algorithm (in a later post) to learn from all of the examples in an efficient manner when finding the best parameters (more commonly referred to as weights in the neural network community). Again, go through the matrix multiplications to convince yourself of this. Although I color coded the weights here for clarity, we'll need to develop a more systematic notation. Notice how the first output neuron uses all of the blue weights, the second output neuron uses all of the green weights, and the third output neuron uses all of the orange weights. Moving forward, we'll describe our weights more succinctly as a vector $w_i$ where the subscript $i$ now represents the neuron which uses that set of weights. Thus, we can define a weight matrix, $W$, for a layer. Our bias may similarly be described as a vector $b$.

Hidden layers

Up until now, we've been dealing solely with one-layer networks; we feed information into the input layer and observe the results in the output layer. (The input layer often isn't counted as a layer in the neural network.) The real power of neural networks emerges as we add additional layers to the network. Any layer that is between the input and output layers is known as a hidden layer. Thus, the following example is a neural network with an input layer, one hidden layer, and an output layer. I'll use the superscript $\left[ l \right]$ to refer to the ${l^{th}}$ layer of the network and the subscript $i$ to refer to the ${i^{th}}$ neuron in a layer. For example, $a_2^{\left[ 1 \right]}$ represents the activation of the second neuron in the first hidden layer. We can calculate this value by first combining the proper weights and bias with the previous layer's values

$$ z_2^{\left[ 1 \right]} = w_2^{\left[ 1 \right]{\rm{T}}}{a^{\left[ 0 \right]}} + b_2^{\left[ 1 \right]} $$

and then passing this through our activation function, $g\left( z \right)$.
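As a quick sketch, the computation of $a_2^{\left[ 1 \right]}$ just described looks like this; all weights and inputs are made-up values:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Activations of the 0th layer (the input vector)
a0 = [1.0, 0.5, -1.0]

# Weights and bias of the second neuron in the first hidden layer (invented)
w2_1 = [0.2, -0.4, 0.6]
b2_1 = 0.1

# z_2^[1] = w_2^[1]T a^[0] + b_2^[1], then a_2^[1] = g(z_2^[1])
z2_1 = sum(w * a for w, a in zip(w2_1, a0)) + b2_1
a2_1 = sigmoid(z2_1)
print(z2_1, a2_1)  # z is -0.5 here, so the activation falls below 0.5
```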
Notice how each neuron combines every value from the previous layer as input. Note: Our input vector, $x$, can also be referred to as the activations of the $0^{th}$ layer. More generally, we can calculate the activation of neuron $i$ in layer $l$. $$ z_i^{\left[ l \right]} = w_i^{\left[ l \right]{\rm{T}}}{a^{\left[ {l - 1} \right]}} + b_i^{\left[ l \right]} $$ $$ a_i^{\left[ l \right]} = g\left( {z_i^{\left[ l \right]}} \right) $$ Similarly, we can calculate all of the activations for a given layer $l$ by using our weight matrix ${W^{\left[ l \right]}}$. $$ {Z^{\left[ l \right]}} = {W^{\left[ l \right]}}{A^{\left[ {l - 1} \right]}} + {b^{\left[ l \right]}} $$ $$ {A^{\left[ l \right]}} = g\left( {{Z^{\left[ l \right]}}} \right) $$ In a network, we take the output from one layer and feed it in as the input to our next layer. We can stack as many layers as we want on top of each other. The field of deep learning studies neural network architectures with many hidden layers. Matrix representation Let ${n^{\left[ l \right]}}$ represent the number of units in layer $l$. For a given layer, we'll have a weights matrix ${W^{\left[ l \right]}}$ of shape $\left( {{n^{\left[ l \right]}},{n^{\left[ {l - 1} \right]}}} \right)$ and a bias vector of shape $\left( {{n^{\left[ l \right]}},1} \right)$. The activations of a given layer will be a matrix of shape $\left( {{n^{\left[ l \right]}},m} \right)$ where $m$ represents the number of observations being fed through the network. Recall the earlier section where I demonstrated calculating the neural network output of multiple observations using an efficient matrix representation. When I was first learning about neural networks, the trickiest part for me was figuring out what my matrix dimensions needed to be and how to manipulate them to get them into the proper form. I'd recommend doing a couple practice problems to get more comfortable before we continue to talk about training a neural network in my next post. 
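Putting the layer rule together, a forward pass over a stack of layers can be sketched as follows; the 3-2-1 architecture and all weights are invented:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(layers, x):
    """layers is a list of (W, b) pairs; each W has shape (n_l, n_{l-1})."""
    a = x  # activations of the 0th layer
    for W, b in layers:
        # A^[l] = g(W^[l] A^[l-1] + b^[l]), one neuron per row of W
        a = [sigmoid(sum(w * ai for w, ai in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

layers = [
    # W[1] has shape (2, 3): 2 hidden units fed by 3 inputs
    ([[0.1, -0.4, 0.2],
      [0.7, 0.3, -0.1]], [0.0, 0.1]),
    # W[2] has shape (1, 2): 1 output unit fed by 2 hidden units
    ([[0.5, -0.6]], [0.2]),
]
output = forward(layers, [1.0, 0.5, -1.0])
print(output)  # a single activation in (0, 1)
```

Notice how each weight matrix's column count matches the previous layer's size, which is the $\left( {{n^{\left[ l \right]}},{n^{\left[ {l - 1} \right]}}} \right)$ shape rule discussed in the matrix representation section.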
Feeling like you've got a grasp? Check out this neural network cheat sheet of common architectures.
We're in love with division

If you're reading these blogs, you probably like math. And just like with music and movies, so it goes with math: everybody has their favorites. If you asked me what parts of math were my favorites, you might be perplexed if I told you that I really like subtraction. Sounds boring! Or what if I said "I am absolutely in LOVE with multiplication."
That would be a little strange, wouldn't it? Well, here's what I'm really in love with: division. Division rocks! No, I don't like doing long division, and I don't have a favorite fraction. What I love about division is how it opens up an incredibly interesting world of mathematics: probability. The lowly and boring operation of division is the backbone of probability, and probability is amazing. Probability can be amazing! For example, if you take a standard six-sided die and roll it, what's the probability of getting a 4? It's 1/6, right? And MAN that is totally boring, so that's not what I'm talking about AT ALL. Here's a way more interesting situation. Suppose you gather together a bunch of people in a room, and ask each of them what their birthday is (month and day). You might wonder about how likely it is that two of these people share the same birthday. That might lead you to wonder about the following simple question: how many people would you need to gather so that the probability that at least two of them have the same birthday is more than 1/2? By "need", we really mean this: what's the fewest number of people you could get away with to have the probability that some two of them share the same birthday be greater than 1/2? Probability is tough. It's dangerous to wave your hands around without calculating things very carefully. For example, since there's 365 birthdays, one for each day of the year, wouldn't we definitely need at least 183 people to ensure that the probability that two of them have the same birthday is greater than 1/2? Perhaps that's a decent first stab at an answer, but we haven't actually done any math yet, so we should be wary of this first guess. Actually, what's really interesting here is that you don't need that many people. Do you know how many people you need? 23. That's it. Think about that. Say that number out loud to yourself a few times. 
Once you have gathered 23 people, the probability that some two of them share a birthday is $$\frac{38093904702297390785243708291056390518886454060947061}{75091883268515350125426207425223147563269805908203125}$$ which turns out to be about 50.7%. Look at that lovely division, and the amazing thing it's telling you: the probability that two people have the same birthday in a group of 23 people is a little over 1/2!!! Compare that to the standard first guess of 183. WOW. We're not lying to you, and this isn't magic. 23!!! The strange probabilities of the bingo card Amazing probabilities turn up in the unlikeliest of places. For example, have you ever played bingo? Usually you go to a "bingo hall", and play with a whole bunch of people. You have a board in front of you that typically looks something like this: The column under the B can contain any of the numbers from 1 through 15, with no repeats. The next column will contain numbers in the range 16-30, the next 31-45, then 46-60, and lastly the O column has numbers in the range 61-75. Everyone has a different bingo card. Someone starts announcing letter/number combinations, like B13! N32! G44! When such a combination is called, you check and see if that number under that letter is on your card. In the above picture, B13 is not on the card, but G44 is. If what's called out is on your card, you can mark it with a pen or circular stamp. The goal of the game is to get 5 stamps in the same row, column, or long diagonal. The first one to get it wins a prize! It's a simple enough game. And even if you enjoy it, you might not think there's anything interesting to say about it, mathematically speaking. However, in the September 2017 issue of Math Horizons, a mathematics journal with an audience of undergraduate math majors, a fascinating article appeared detailing an incredibly amazing fact about the game of bingo. 
The authors found that if you gather a large enough group of people to play bingo, and they play many, many times, a very interesting trend will appear. The chance that the winning board has a completed row is much higher than the chance that it has a completed column! This is a really weird and unexpected conclusion. Over the long run, you might strongly suspect that there should be a roughly equal number of row wins as column wins, but this just isn't the case. What's true is that with a large enough group of players, the overall probability of a row win will be a little over 75%!!! Think about that… wow! If you can, check out that article! Don't you dare think that division is boring. In the right context, and with the right questions, division can be dazzling. Did we convince you to love division too?
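If you'd like to see the birthday numbers quoted above for yourself, here's a quick check, assuming the standard model of 365 equally likely birthdays. The exact rational arithmetic means the huge fraction above comes out exactly, not as a rounded decimal.

```python
from fractions import Fraction

def p_shared_birthday(n, days=365):
    # P(at least two of n people share a birthday)
    # = 1 - P(all n birthdays are distinct)
    p_distinct = Fraction(1)
    for k in range(n):
        p_distinct *= Fraction(days - k, days)
    return 1 - p_distinct

# 22 people is not enough; 23 people pushes the probability past 1/2
print(float(p_shared_birthday(22)))  # about 0.476
print(float(p_shared_birthday(23)))  # about 0.507
```

So 23 really is the fewest number of people for which the probability of a shared birthday exceeds 1/2.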
Journal of Nonlinear Science
April 2014, Volume 24, Issue 2, pp 359–382

Hamiltonian Dynamics of Several Rigid Bodies Interacting with Point Vortices

Steffen Weißmann
First Online: 26 February 2014

We derive the dynamics of several rigid bodies of arbitrary shape in a two-dimensional inviscid and incompressible fluid, whose vorticity is given by point vortices. We adopt the idea of Vankerschaver et al. (J. Geom. Mech. 1(2): 223–266, 2009) to derive the Hamiltonian formulation via symplectic reduction from a canonical Hamiltonian system. The reduced system is described by a noncanonical symplectic form, which has previously been derived for a single circular disk using heavy differential-geometric machinery in an infinite-dimensional setting. In contrast, our derivation makes use of the fact that the dynamics of the fluid, and thus the point vortex dynamics, is determined from first principles. Using this knowledge we can directly determine the dynamics on the reduced, finite-dimensional phase space, using only classical mechanics. Furthermore, our approach easily handles several bodies of arbitrary shape. From the Hamiltonian description we derive a Lagrangian formulation, which makes the system amenable to variational time integrators. We briefly describe how to implement such a numerical scheme and simulate different configurations for validation.

Keywords: Point vortices, Fluid-structure interaction, Variational integrator, Cotangent bundle reduction

Communicated by Eva Kanso.
Mathematics Subject Classification (2010): 76B47, 53D20

Acknowledgments: Ulrich Pinkall proposed the basic idea for deriving the symplectic form. It is my great pleasure to thank him for invaluable discussions and suggestions. Felix Knöppel and David Chubelaschwili helped to work out many of the details. Eva Kanso and the anonymous reviewers provided important feedback for improving the exposition. This work is supported by the DFG Research Center Matheon and the SFB/TR 109 "Discretization in Geometry and Dynamics."
Appendix 1: Cotangent Bundle Reduction of Fluid–Body Dynamics

In analogy to Arnold's geometric description of fluid dynamics (Arnold 1966), the dynamics of rigid bodies interacting with a surrounding incompressible fluid can be viewed as a geodesic problem on a Riemannian manifold. The kinetic energy defines a Riemannian metric on the configuration space, and geodesics satisfy Hamilton's equations on the cotangent bundle with kinetic energy as the Hamiltonian. This insight is due to Vankerschaver et al. (2009) (VKM). The authors use the framework of cotangent bundle reduction (Marsden et al. 2007) to obtain a reduced Hamiltonian system with magnetic symplectic form for the case of a single body in a fluid whose vorticity field is concentrated at point vortices. The Hamiltonian formulation by VKM is an extension of Arnold's original work (Arnold 1966), which describes the motion of an incompressible inviscid fluid in a fixed fluid domain \(\mathcal {F}\) as a geodesic on the group \(\mathop {\mathrm {Diff}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F})\) of volume-preserving diffeomorphisms on \(\mathcal {F}\). However, when the fluid interacts with rigid bodies, the fluid domain is no longer fixed. The idea of VKM is to consider the space \(\mathop {\mathrm {Emb}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F}^0, \mathbb {R}^2)\) of volume-preserving embeddings of an initial reference configuration \(\mathcal {F}^0\) into \(\mathbb {R}^2\) instead of \(\mathop {\mathrm {Diff}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F})\). Any incompressible fluid motion is then described by a curve in the subset \(\mathcal {Q}^\mathcal {F}\subset \mathop {\mathrm {Emb}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F}^0, \mathbb {R}^2)\) which is compatible with the body motion.
The configuration space of the coupled system is \(\mathcal {Q} = \mathrm {SE}(2)^n \times \mathcal {Q}^\mathcal {F}\), and the dynamics is a canonical Hamiltonian system on \(T^* \mathcal {Q}\) with kinetic energy as the Hamiltonian. The kinetic energy is invariant under volume-preserving diffeomorphisms of the initial fluid configuration \(\mathcal {F}^0\) (particle relabeling symmetry); i.e., the symmetry group \(\mathop {\mathrm {Diff}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F}^0)\) acts from the right on \(\mathcal {Q}^\mathcal {F}\), and thus on \(\mathcal {Q}\). This action turns \(\mathcal {Q}\) into a principal fiber bundle over \(\mathrm {SE}(2)^n\). This structure allows one to follow the famous Kaluza–Klein approach to determine the Hamiltonian dynamics. In order to factor out the \(\mathop {\mathrm {Diff}}\nolimits _{\mathop {\mathrm {vol}}\nolimits }(\mathcal {F}^0)\)-symmetry one needs to fix a value of the associated momentum map, which corresponds to choosing an initial vorticity field of the fluid. This is where the assumption is used that vorticity is concentrated at \(m\) point vortices. The reduced phase space is \(\mathcal {M} = T^* SE(2)^n \times \mathbb {R}^{2 m}\) (see VKM, Sect. 4.2), and the dynamics is given by a reduced symplectic form \(\sigma \) on \(\mathcal {M}\) with kinetic energy as the Hamiltonian. The following theorem formulates the starting point for the derivations made in this paper: The dynamics of \(n\) rigid bodies interacting with \(m\) isolated point vortices is a Hamiltonian system. The Hamiltonian is the kinetic energy (21), and the phase space is $$\begin{aligned} \mathcal {M} = T^* \mathrm {SE}(2)^n \times \mathbb {R}^{2 m}. \end{aligned}$$ The cotangent bundle \(T^* \mathrm {SE}(2)^n\) corresponds to the rigid body configuration, and \(\mathbb {R}^{2 m}\) is the phase space for \(m\) point vortices. 
The symplectic form is $$\begin{aligned} \sigma = \sigma _{\text {can}} + \mathrm {d}\alpha + \sigma _\gamma , \end{aligned}$$ where \(\sigma _{\text {can}}\) is the canonical symplectic form on the cotangent bundle \(T^*\mathrm {SE}(2)^n\), \(\sigma _\gamma \) is the Kirillov–Kostant–Souriau form on the coadjoint orbit \(\mathbb {R}^{2 m}\), and \(\mathrm {d}\alpha \) is a magnetic term, i.e., a two-form on \(SE(2)^n \times \mathbb {R}^{2 m}\). This is proven in VKM, Sect. 4. We emphasize here that the proofs do not rely on the fact that only a single rigid body was considered.

Appendix 2: The Cotangent Bundle of Euclidean Motions

In this appendix we consider the Lie group of Euclidean motions \(\mathrm {SE}(2)\) and denote the pairing between covectors and vectors by \(\left( \cdot , \cdot \right) \). For any covector \(\mu \in T_g^* \mathrm {SE}(2)\) we can find a body momentum \(M \in \mathbb {R}^3 \cong \mathfrak {se}^*(2)\) such that \(\left( \mu , \delta g\right) := \langle M, \Lambda \rangle \), for any \(\delta g = g \Lambda \). Note that \(\Theta (\delta \mu , \delta g) := \left( \mu , \delta g\right) \) is a one-form on the cotangent bundle \(T^*\mathrm {SE}(2)\), and \(\langle M, \Lambda \rangle \) is its push-forward to the left trivialization \(\mathbb {R}^3 \times \mathrm {SE}(2) \cong T^*\mathrm {SE}(2)\). It is the canonical one-form, and its exterior derivative gives the canonical symplectic form on \(T^*\mathrm {SE}(2)\). We now compute the symplectic form when pushed forward to the left trivialization \(\mathbb {R}^3 \times \mathrm {SE}(2)\), using the general formula for the exterior derivative of a one-form: $$\begin{aligned} \mathrm {d}\Theta (X, Y) = \nabla _X \Theta (Y) - \nabla _Y \Theta (X) - \Theta ([X, Y]). \end{aligned}$$ Here \(X\) and \(Y\) are vector fields and \([X, Y]\) is the Jacobi–Lie bracket of \(X\) and \(Y\).
Consider a two-parameter family \((M(s,t), g(s,t))\) in \(\mathbb {R}^3 \times \mathrm {SE}(2)\), whose partial derivatives (denoted by \(\delta \) and \('\), respectively) commute. The vector fields will be \(X = (\delta M, \delta g)\) and \(Y = (M', g')\), where \(\delta g = g \Gamma \) and \(g' = g \Xi \) with \(\Xi = (\varOmega , V)\). One can check that the partial derivatives of \(g\) commute if and only if $$\begin{aligned} \Gamma '&= \delta \Xi - \mathop {\mathrm {ad}_{\Xi }}\nolimits \Gamma , \quad \mathop {\mathrm {ad}_{\Xi }}\nolimits = \left( \begin{array}{c@{\quad }c} 0 &{} 0 \\ V \times &{} \varOmega \times \end{array} \right) . \end{aligned}$$ The commuting partial derivatives ensure that the Jacobi–Lie bracket in (39) vanishes. The covariant derivatives are usual directional derivatives here, so we obtain the canonical symplectic two-form \(\sigma = \mathrm {d}\Theta \) in the left trivialization as $$\begin{aligned} \sigma \left( (\delta M, \Gamma ), (M', \Xi )\right) = \delta {{\Big \langle }M, \Xi {\Big \rangle }} - {\Big \langle }M, \Gamma {\Big \rangle }' = {\Big \langle }\delta M, \Xi {\Big \rangle } - {\Big \langle }M' - \mathop {\mathrm {ad}^*_{\Xi }}\nolimits M, \Gamma {\Big \rangle }. \end{aligned}$$ Here \(\mathop {\mathrm {ad}^*_{\Xi }}\nolimits \) is the matrix transpose of \(\mathop {\mathrm {ad}_{\Xi }}\nolimits \): $$\begin{aligned} \mathop {\mathrm {ad}^*_{\Xi }}\nolimits = - \left( \begin{array}{c@{\quad }c} 0 &{} V \times \\ 0 &{} \varOmega \times \end{array} \right) . \end{aligned}$$

References

Aref, H.: Point vortex dynamics: a classical mathematics playground. J. Math. Phys. 48, 065401 (2007)
Arnold, V.I.: Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l'hydrodynamique des fluides parfaits.
Annales de l'institut Fourier 16(1), 319–361 (1966)
Borisov, A.V., Mamaev, I.S.: An integrability of the problem on motion of cylinder and vortex in the ideal fluid. Regul. Chaotic Dyn. 8, 163–166 (2003)
Borisov, A.V., Mamaev, I.S., Ramodanov, S.M.: Dynamic interaction of point vortices and a two-dimensional cylinder. J. Math. Phys. 48(6), 065403 (2007)
Chorin, A.: Numerical study of slightly viscous flow. J. Fluid Mech. 57, 785–796 (1973)
Helmholtz, H.: Über Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen. Reine Angew. Math. 55, 25–55 (1858)
Kirchhoff, G.R.: Über die Bewegung eines Rotationskörpers in einer Flüssigkeit. Reine Angew. Math. 71, 237–262 (1870)
Kobilarov, M., Crane, K., Desbrun, M.: Lie group integrators for animation and control of vehicles. ACM Trans. Graph. 28 (2009)
Lamb, H.: Hydrodynamics. Cambridge University Press, Cambridge (1895)
Lin, C.C.: On the motion of vortices in two dimensions—I and II. Proc. Natl. Acad. Sci. USA 27, 570–575 (1941)
Majda, A.J., Bertozzi, A.L.: Vorticity and Incompressible Flow. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge (2002)
Marsden, J., Misiolek, G., Ortega, J.P.: Hamiltonian Reduction by Stages. Lecture Notes in Mathematics. Springer, Berlin (2007)
Marsden, J.E., West, M.: Discrete mechanics and variational integrators. Acta Numer. 10, 357–514 (2001)
Milne-Thomson, L.M.: Theoretical Hydrodynamics, 5th edn. MacMillan and Co., Ltd., London (1968)
Nair, S., Kanso, E.: Hydrodynamically coupled rigid bodies. J. Fluid Mech.
592, 393–411 (2007)
Newton, P.K.: The N-Vortex Problem: Analytical Techniques, Vol. 145 of Applied Mathematical Sciences. Springer, Berlin (2001)
Rowley, C.W., Marsden, J.E.: Variational integrators for degenerate Lagrangians, with application to point vortices. Proceedings of 41st IEEE Conference on Decision and Control, vol. 2, pp. 1521–1527 (2002)
Saffman, P.G.: Vortex Dynamics. Cambridge University Press, Cambridge (1992)
Shashikanth, B.N.: Poisson brackets for the dynamically interacting system of a 2D rigid cylinder and N point vortices: the case of arbitrary smooth cylinder shapes. Regul. Chaotic Dyn. 10(1), 1–14 (2005)
Shashikanth, B.N., Marsden, J.E., Burdick, J.W., Kelly, S.D.: The Hamiltonian structure of a two-dimensional rigid circular cylinder interacting dynamically with N point vortices. Phys. Fluids 14(3), 1214–1227 (2002)
Shashikanth, B.N., Sheshmani, A., Kelly, S.D., Marsden, J.E.: Hamiltonian structure for a neutrally buoyant rigid body interacting with N vortex rings of arbitrary shape: the case of arbitrary smooth body shape. Theor. Comput. Fluid Dyn. 22, 37–64 (2008)
Vankerschaver, J., Leok, M.: A novel formulation of point vortex dynamics on the sphere: geometrical and numerical aspects. J. Nonlinear Sci. 24(1), 1–37 (2013)
Vankerschaver, J., Kanso, E., Marsden, J.E.: The geometry and dynamics of interacting rigid bodies and point vortices. J. Geom. Mech. 1(2), 223–266 (2009)
Weißmann, S., Pinkall, U.: Underwater rigid body dynamics. ACM Trans. Graph. 31(4) (2012)

1. Institut für Mathematik, TU Berlin, Berlin, Germany

Weißmann, S. J Nonlinear Sci (2014) 24: 359. https://doi.org/10.1007/s00332-014-9192-y
Received 26 March 2013; First Online 26 February 2014
Cute Determinant Question

I stumbled across the following problem and found it cute. Problem: We are given that $19$ divides $23028$, $31882$, $86469$, $6327$, and $61902$. Show that $19$ divides the following determinant: $$\left| \begin{matrix} 2 & 3&0&2&8 \\ 3 & 1&8&8&2\\ 8&6&4&6&9\\ 0&6&3&2&7\\ 6&1&9&0&2 \end{matrix}\right|$$

linear-algebra determinant

"This is nice! May I ask about the source of the problem - where did you see it?" – Martin Sleziak
"Golan's linear algebra book." – Potato

Multiply the first column by $10^4$, the second by $10^3$, third by $10^2$ and fourth by $10$ - this will scale the value of the determinant by $10^{4+3+2+1}=10^{10}$, which is coprime to $19$. Now add the last four columns to the first one - this will not change the value of the determinant. Finally notice the first column now reads $23028, 31882, 86469, 6327$, and $61902$: each is a multiple of $19$ so we can factor a nineteen cleanly out of the determinant.

"Perhaps you should add this example to en.wikipedia.org/wiki/Coprime to show an example for a usage for coprimes" – mplungjan

If the determinant is $0$ it is obvious that $19|0$. Now suppose that the determinant is not $0$.
$$\begin{align*} 2\cdot10^4+3\cdot10^3+0\cdot10^2+2\cdot10+8\cdot1&=23028\\ 3\cdot10^4+1\cdot10^3+8\cdot10^2+8\cdot10+2\cdot1&=31882\\ 8\cdot10^4+6\cdot10^3+4\cdot10^2+6\cdot10+9\cdot1&=86469\\ 0\cdot10^4+6\cdot10^3+3\cdot10^2+2\cdot10+7\cdot1&=06327\\ 6\cdot10^4+1\cdot10^3+9\cdot10^2+0\cdot10+2\cdot1&=61902 \end{align*}$$ By Cramer's rule $$1=\frac{\left|\begin{matrix} 2 & 3 & 0 & 2 & 23028 \\ 3 & 1 & 8 & 8 & 31882 \\ 8 & 6 & 4 & 6 & 86469 \\ 0 & 6 & 3 & 2 & 06327 \\ 6 & 1 & 9 & 0 & 61902 \end{matrix}\right|}{\left|\begin{matrix} 2 & 3 & 0 & 2 & 8 \\ 3 & 1 & 8 & 8 & 2 \\ 8 & 6 & 4 & 6 & 9 \\ 0 & 6 & 3 & 2 & 7 \\ 6 & 1 & 9 & 0 & 2\end{matrix}\right|}$$ $$\left|\begin{matrix} 2 & 3 & 0 & 2 & 8 \\ 3 & 1 & 8 & 8 & 2 \\ 8 & 6 & 4 & 6 & 9 \\ 0 & 6 & 3 & 2 & 7 \\ 6 & 1 & 9 & 0 & 2\end{matrix}\right|=\left|\begin{matrix} 2 & 3 & 0 & 2 & 23028 \\ 3 & 1 & 8 & 8 & 31882 \\ 8 & 6 & 4 & 6 & 86469 \\ 0 & 6 & 3 & 2 & 06327 \\ 6 & 1 & 9 & 0 & 61902 \end{matrix}\right|$$ But the last determinant is obviously divisible by $19$. – Gaston Burrull

"I think that I must delete this answer because there is another answer which is accepted and has many up votes." – Gaston Burrull
"No, yours is more detailed. You are fine." – Potato
"Your answer is good. Multiple approaches can shed new light on a problem." – Potato
"Your answer is good. I would not have thought to use Cramer's rule. Please do not delete it." – MJD
"Just saw this question since it was recently modified. Simpler, at least to me, is to remember that one can always add a multiple of one column to another and not change the determinant. So add $10^4$ times the first column plus $10^3$ times the second column plus $10^2$ times the third column plus $10^1$ times the fourth column to the fifth column.
These column operations do not change the determinant and yield the last equation above." – robjohn♦

Integer proof

Perform the column operation $C_5\leftarrow 10^4C_1+10^3C_2+10^2C_3+10C_4+C_5$: the coefficient of $C_5$ is $1$ so this doesn't change the determinant. All elements of $C_5$ ($23028$, $31882$, $86469$, $6327$, and $61902$) are now divisible by $19$, so we can factor out $19$: hence the determinant is divisible by $19$.

Modular proof

In $\mathbb Z/19\mathbb Z$, the columns $10^4C_1+10^3C_2+10^2C_3+10C_4+C_5$ sum to $0$: hence the matrix is not invertible and has determinant $0$. So in $\mathbb Z$, the determinant is a multiple of $19$. – Generic Human

"I like your modular proof!" – Potato
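As a sanity check on all of the arguments above, one can compute the determinant exactly and confirm the divisibility. The hand-rolled elimination below is purely illustrative, not from the thread; it uses exact rational arithmetic so there is no floating-point doubt.

```python
from fractions import Fraction

def det(rows):
    # Exact determinant via Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(M), 1
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)        # a zero column => determinant 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            sign = -sign              # row swap flips the sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    d = Fraction(sign)
    for i in range(n):
        d *= M[i][i]                  # product of pivots
    return d

A = [[2, 3, 0, 2, 8],
     [3, 1, 8, 8, 2],
     [8, 6, 4, 6, 9],
     [0, 6, 3, 2, 7],
     [6, 1, 9, 0, 2]]

# The givens really are multiples of 19, and so is the determinant
assert all(x % 19 == 0 for x in [23028, 31882, 86469, 6327, 61902])
print(int(det(A)) % 19)  # 0
```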
Post-Quantum Signatures

This week's Study Group was led by Peter Scholl who spoke about hash-based signatures and in particular a recent paper by Bernstein, Hopwood, Hülsing, Lange, Niederhagen, Papachristodoulou, Schwabe and Wilcox-O'Hearn, called SPHINCS: practical stateless hash-based signatures [1]. Most real-world signature schemes like RSA and DSA can be broken by a quantum computer, due to Shor's algorithm [2]. One could use lattice-based signatures but their security is not well understood: the authors of [1] note that "their quantitative security levels are highly unclear" even in the pre-quantum setting. On the other hand, hash-based signature schemes using 'quantum-resistant' one-way hash functions are secure in the quantum setting [3], although the symmetric security parameter needs to be doubled for quantum resistance due to Grover's algorithm [4]. The trouble is that such schemes tend to be inefficient or stateful, where the latter means one cannot have a signing key shared across multiple devices and the scheme may be vulnerable to 'restart attacks' where the scheme is forced to re-use a secret key, compromising security. The paper of Peter's talk seeks to address this by constructing a (relatively) efficient, stateless, hash-based signature scheme called SPHINCS. The construction is proved secure "based on weak standard-model assumptions, avoiding collision resistance and the random-oracle model".

One Time Signature Schemes

First, we revisited the Lamport One Time Signature (OTS) scheme. (This isn't the OTS used in the paper but it will serve as a reasonable approximation.) Here your secret key is a sequence of pairs of bitstrings $(x_0, y_0), ..., (x_n, y_n)$ and the public key is the sequence of pairs of hashes $(H(x_0), H(y_0)), ..., (H(x_n), H(y_n))$. To sign a message $m$ one takes each bit $m_i$ of $m$ and selects $x_i$ or $y_i$ according to whether $m_i$ is 0 or 1.
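A toy version of the Lamport scheme fits in a few lines. SHA-256 stands in for the one-way hash, and the 8-bit message and 32-byte secrets are illustrative choices, not the paper's parameters.

```python
import hashlib
import secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen(n=8):
    # One (x_i, y_i) pair of random bitstrings per message bit
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n)]
    pk = [(H(x), H(y)) for x, y in sk]
    return sk, pk

def sign(sk, bits):
    # Reveal x_i for a 0 bit, y_i for a 1 bit
    return [x if b == 0 else y for (x, y), b in zip(sk, bits)]

def verify(pk, bits, sig):
    # Hash each revealed value and check it against the public key
    return all(H(s) == (hx if b == 0 else hy)
               for s, b, (hx, hy) in zip(sig, bits, pk))

sk, pk = keygen()
bits = [0, 1, 1, 0, 1, 0, 0, 1]                # an 8-bit message
sig = sign(sk, bits)
print(verify(pk, bits, sig))                   # True
print(verify(pk, [1 - b for b in bits], sig))  # False: wrong message
```

Note that the signature reveals one element of each pair, which is exactly why the key is one-time only.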
To verify a signature one hashes each $x_i$ or $y_i$ in the signature and checks the output against the public key. This is called "One Time" since the secret key can only sign one message before security is seriously compromised: your signature reveals half of the elements of the secret key.

From 'One Time' to 'Many Times'

A natural way to build "many time" signature schemes is to iterate OTS schemes. The trouble is that doing this in a naive way means the verification algorithm needs a huge number of public key components, enough for all the bits of all the messages you ever want to sign! Instead, we compress each OTS public key using a "Merkle tree" where the (original) OTS public keys are the leaf nodes and parents are constructed by hashing the concatenation of the children (together with a bitmask for added security). We want the (new) public key to be the root of this tree and so our signature must supply enough information (in as brief a way as possible) for the verifier to recover the root. This is how we do it: Given the path from a leaf to the root, let the sequence of siblings of each node on the path be called the authentication path of the leaf. In a signature, supply the index of the leaf node used and its authentication path. With the leaf and its sibling, the verifier can construct the parent. Then with the sibling of the parent, the verifier can construct the grandparent. The verifier continues like this, using the siblings given in the authentication path to recover the root node i.e. the public key. So we now have a scheme with a practical-sized public key. We can also shrink the secret key by just storing a seed to a Pseudo-Random Generator which will then output the many OTS secret keys which, when hashed, give the leaves of the Merkle tree. However, we haven't "eliminated the state", to use the authors' phrase: one must store a counter recording which OTS secret keys have been used so far.
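The authentication-path check can be sketched as follows, hashing plain concatenations of siblings (and omitting the bitmasks the paper mixes in for added security); the 8 toy leaves stand in for hashed OTS public keys.

```python
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Build the tree bottom-up; the number of leaves is a power of two
    level = leaves[:]
    while len(level) > 1:
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    # Siblings of each node on the path from leaf `index` to the root
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        path.append(level[i ^ 1])  # i ^ 1 is the sibling's index
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def root_from_path(leaf, index, path):
    # What the verifier does: rebuild the root from the leaf and its siblings
    node, i = leaf, index
    for sib in path:
        node = H(node + sib) if i % 2 == 0 else H(sib + node)
        i //= 2
    return node

leaves = [H(bytes([b])) for b in range(8)]  # 8 toy OTS public keys
idx = 5
ok = root_from_path(leaves[idx], idx, auth_path(leaves, idx)) == merkle_root(leaves)
print(ok)  # True
```

With $2^h$ leaves the authentication path has only $h$ hashes, which is what keeps signatures short.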
"Eliminating the State" Goldreich's [5] answer to this problem was to deterministically choose which OTS key to use next, rather than just use them in order, i.e. some hash of the message determines the index of the OTS secret key to use in signing. Of course, now one needs a much bigger tree in order to avoid accidentally using a "one time" key more than once. In fact, for security, there needs to be exponentially many OTS key pairs, so we can't just have one leaf of the tree for each OTS secret key or the tree would be far too big for efficient signing. Instead, we associate every node with an OTS key pair and at each step from a leaf to the root, sign the hash of the public keys of the child nodes with the secret key of the parent node. The new (longer) signature contains the index of the leaf node used and the OTS signatures of all the nodes on the path to the root. The trouble with this scheme is that the length of the signature is cubic in the security parameter. This is where the authors of [1] come in. From 'One Time' to 'A Few Times' The main idea of the paper is to use a 'few times signature' scheme (instead of an OTS) at the bottom of the tree to reduce the number of leaves needed for security and hence the overall height of the tree, thus shortening the signature. Their choice of scheme for the bottom of the tree is called HORS: Hash to Obtain Random Subset. In HORS, the secret key is the tuple $(s_1, ..., s_t)$, the public key is the hashes $(H(s_1), ..., H(s_t))$ and a message $m$ determines a ("random") subset $S$ of $\lbrace1, 2, ..., t\rbrace$ with fixed size $k$ (much smaller than $t$). Then the signature for $m$ is the set of secret key components corresponding to $S$, i.e. $\lbrace s_i | i \in S\rbrace$. Now we can use the secret key 'a few times' before security is compromised as only a small number of the components $s_i$ of the secret key are revealed in each signature. 
But again we have the problem that the public key needs to be very large, and so a Merkle tree is once again employed: the new public key becomes the root node, recoverable from the index of a leaf node and the corresponding authentication path. Notice that this means the bottom of the tree in SPHINCS, the construction proposed in [1], is itself a tree. So SPHINCS consists of a hyper-tree (a tree of trees).

The SPHINCS Tree of Trees

The SPHINCS hyper-tree has total height $h$ and consists of $d$ layers, each of height $h/d$. The index from the hash of the message determines a HORS tree on the bottom layer of the hyper-tree and a leaf of this HORS tree, from which we compute the HORS signature of the message. This signature is then signed according to the OTS scheme on the next layer (where the initial index in some way determines the tree and the leaf to use). We repeat this OTS signing on each layer and finally output the SPHINCS signature, consisting of the index, the HORS signature (which contains all the information needed to recover the HORS public key), and each OTS signature and each authentication path needed to recover the root at each layer.

Real World Considerations (in the Quantum Computing World!)

After proving its security, the authors demonstrate the practicality of SPHINCS with certain choices of parameters: the hyper-tree has total height 60 consisting of 12 layers (each of height 5), the number of HORS secret key elements is $2^{16}$ and 32 of these are revealed in each HORS signature (i.e. $t = 2^{16}$, $k = 32$). There is also a parameter that affects the OTS scheme, but we haven't detailed it here. With these choices, the signatures have size 41KB, the keys have size around 1KB, and one can sign hundreds of messages per second on a modern quad-core computer.

By most accounts, quantum computers are something of a pipe-dream at the moment.
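To make the layering concrete, here is a small sketch of how a 60-bit leaf index can be split into a leaf position within a height-5 subtree on each of 12 layers. The addressing scheme in the actual SPHINCS specification differs in its details; this only illustrates the arithmetic of "12 layers, each of height 5".

```python
H_TOTAL, LAYERS = 60, 12
H_LAYER = H_TOTAL // LAYERS          # each subtree has height 5, i.e. 32 leaves

def layer_addresses(index: int):
    """Split a 60-bit index into a leaf position for the subtree on each layer."""
    addresses = []
    for layer in range(LAYERS):      # layer 0 = bottom (the HORS trees)
        leaf = index % (1 << H_LAYER)   # which of the 32 leaves in this subtree
        index >>= H_LAYER               # remaining bits address the layers above
        addresses.append((layer, leaf))
    return addresses

addrs = layer_addresses(0b10111_00000_00001)  # low bits select the bottom leaf
assert addrs[0] == (0, 1)    # bottom layer: leaf 1
assert addrs[1] == (1, 0)
assert addrs[2] == (2, 23)   # 0b10111 = 23
```

The point of the split is that each layer's subtree has only $2^5 = 32$ leaves, so each of the 12 authentication paths stays short even though the whole hyper-tree addresses $2^{60}$ bottom leaves.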
Nevertheless, it's reassuring to know that security is still achievable - and indeed practical - against adversaries who can exploit the enormous power of quantum computers, whenever that day comes.

[1] - http://eprint.iacr.org/2014/795.pdf
[2] - http://en.wikipedia.org/wiki/Shor%27s_algorithm
[4] - http://en.wikipedia.org/wiki/Grover%27s_algorithm
[5] - Foundations of Cryptography: Volume 2, Basic Applications (2004)

Posted by Anonymous at 3:53 PM
Grating equation

A grating lobe occurs when you steer too far with a phased array and the main beam reappears on the wrong side. For an optical grating, the analogous question is: what is the diffracted angle $\theta_m$ in terms of the incident angle, the wavelength of the light, the period of the grating, and the grating order? A diffraction grating has many slits, rather than two, and the slits are very closely spaced.

At normal incidence the diffraction equation is $m\lambda = a\sin\theta_m$, where $m = 1, 2, 3, \ldots$ is the order number of the diffracted image, $\theta_m$ is the diffraction angle, and $a$ is the grating spacing (the period of the grating). More generally, when the incident and observation media differ, $n_{\mathrm{obs}}\sin\theta_m = n_{\mathrm{inc}}\sin\theta_{\mathrm{inc}} + m\lambda/a$, where $n_{\mathrm{obs}}$ is the refractive index of the medium in which diffraction is being observed. The grating equation gives the diffraction angles, which are the same for transmissive or reflective gratings. For use with monochromators, the grating equation can be expressed as $M\lambda = 2a\cos\varphi\sin\theta$. A diffraction grating is an optical element that diffracts energy into its constituent wavelengths.

For a volume grating, let the grating have thickness $d$ and refractive index $n$ at the wavelength of interest. Power terms such as $P_{wg}$ and $P_t$ take the unit of power per unit length along the x-direction. One general solution technique is the integral method, which reduces the grating problem to an integral equation or a set of two coupled integral equations (Maystre, 1984; DeSanto, 1981).

In the spectrometer measurement, turn the telescope to both sides to obtain the green lines.
Light of wavelength $\lambda$ which passes through a diffraction grating of spacing $d$ creates bright spots at angles given by the grating equation, and it is straightforward to show that the same relation holds if the plane wave is incident at any arbitrary angle $\theta_i$.

If we say that all angles are measured counter-clockwise, starting at the grating normal, then in the usual drawings $\alpha$ is positive and $\beta$ is negative. Here $\alpha$ is the angle between the incident light and the normal to the grating (the incident angle) and $\beta$ is the angle between the diffracted light and the normal to the grating (the diffraction angle), and they satisfy the grating equation.

We have seen that diffraction patterns can be produced by a single slit or by two slits. Fig. 2(1): Projection of Grating Spectra. A diffraction grating can be manufactured by carving glass with a sharp tool in a large number of precisely positioned parallel lines, with untouched regions acting like slits (Figure \(\PageIndex{2}\)).

A vector method has been developed for tracing rays through gratings formed by (i) the intersection of a surface with parallel planes, (ii) the intersection of a surface with concentric cylinders, and (iii) holographic means.

Question 1: A diffraction grating is of width 5 cm and produces a deviation of 30° in the second order with light of wavelength 580 nm.
If this condition is met, the wavenumber of the grating matches the difference of the wavenumbers of the incident and reflected waves. The main advantage of this method is that it can solve almost any grating problem, regardless of the grating material, the range of wavelength (from X-rays to microwaves) or the shape of the grating. That means that $n\lambda$ is a constant for fixed geometry. Diffraction gratings are used in spectrometers.

A diffraction grating is a component of optical devices consisting of a surface ruled with close, equidistant, and parallel lines for the purpose of resolving light into spectra. The well-known diffraction grating equation can be represented by a simple and useful graph that makes the analysis of the diffraction orders produced by a grating easier.

We observe the diffracted beam at a particular angle, $\beta$. The directions of these beams depend on the wavelength and direction of the incident beam, and on the groove frequency of the grating. The grating equation [1,2] says that if light of wavelength $\lambda$ is incident on a grating with grating constant $d$ at an angle $\theta_0$ (relative to the grating normal), then the diffracted light leaves the grating at a wavelength-dependent angle $\theta$ satisfying $d(\sin\theta - \sin\theta_0) = m\lambda$, with $m$ an integer indicating the diffraction order [3].

Colorful displays can also be created with diffraction gratings. Reflection gratings are further classified as plane or concave, the latter being a spherical surface ruled with…

Blaze grating, Littrow condition: when the angle of incidence equals the angle of diffraction, the configuration is called the Littrow condition. By designing the period $a$ and the wedge angle, we can design a grating optimized for a particular wavelength.
Relevant equation: $d = m\lambda/\sin\theta_m$.

Grating pattern: widely spaced bright fringes on a dark background. Common trick: in a question on diffraction gratings, it is common that you are told the grating has, for example, 600 lines per millimetre. Grating lobes have approximately the size of the main lobe and are distributed grid-like in the diagram.

A grating is said to be a transmission or reflection grating according to whether it is transparent or mirrored, that is, whether it is ruled on glass or on a thin metal film deposited on a glass blank. Polychromatic light diffracted from a grating spreads into its component wavelengths, i.e. the grating is dispersive. As you can see, the diffraction grating equation is satisfied even for negative values of the diffraction angle, since its sine can be negative.

The behavior of a reflection grating can be analyzed by solving the following four equations in four unknowns: $a_1 = d_1\cos\theta$ (1), $a_2 = d_2\cos i$ (2), $a = a_1 + a_2$ (3), $d_1^2 - a_1^2 = d_2^2 - a_2^2$ (4), where $i$ and $\theta$ are not necessarily equal (for example, the angles are not equal in a so-called blazed grating).

A diffraction grating is a material that contains a large number of parallel slits; this periodic separation of slits is usually on the order of micrometers. What does grating mean? A grill or network of bars set in a window or door or used as a partition; a grate. A prime example is an optical element called a diffraction grating.

Angles of diffraction: the grating equation. Figure 4: A collimated beam incident from the left on a reflection grating and the outgoing diffracted beams (red and blue).

The masking grating is always present, and the question is to what degree it masks the test grating, making it harder to detect.
By controlling the cross-sectional profile of the grooves, it is possible to concentrate most of the diffracted energy in a particular order for a given wavelength.

How a grating works (without equations): when we learn about light we usually start by talking about the colors of the rainbow and the fact that white light can be broken up into a spread of colors. It is straightforward to show that if a plane wave is incident at any arbitrary angle $\theta_i$, the grating equation keeps the same form.

Bar grating (the structural product) provides a reliable load-bearing paneled surface that is commonly used in plants, warehouses, and other manufacturing facilities.

The grating you will be using has a line or groove density of approximately 300 lines/mm, thus the line spacing $d$ is on the order of $3 \times 10^{-6}$ m. The analysis of a diffraction grating is very similar to that for a double slit.

For a blazed grating, the incoming wave is normal to the groove face. Harrison's 260 mm wide gratings with a blaze angle of 75° achieve resolving power $R > 10^6$ at $\lambda = 500$ nm.

The grating equation comes from the concept of a diffraction grating, where the light is split by slits or bands and diffracted into several beams travelling in different directions. In particular, the dimensions of the grating structure are defined by the equations ##EQU1##, where $n_{TE}$ and $n_{TM}$ are integers. In certain directions the waves interfere constructively.

Diffraction grating problems and solutions: the third-order maximum of intensity is at an angle of $22^\circ$ to the…
The equations are in a form that prevent indeterminancy due to unusual ray–surface intersection points, and arbitrary orientation of the ruling direction is permitted a grating on the first surface of the prism, the grating being adapted to disperse radiation having multiple spectral bands wherein the resolving power of the dispersive optical element is defined by the expression: ##EQU4## where λ is the wavelength of light, λ is the average wavelength of a spectral band of light, B is the prism base width, A i 's are the constants used to describe the Experiment 15: The Diffraction Grating . For example, if the In section 3. To standardise the grating: Turn the telescope to obtain the image of the slit. n(x, y, z) is the grating index profile. Let a beam of light having two wavelengths λ1 and λ2 is falling normally on a grating AB which has (a + b) grating element and N number of slits as shown in Figure·5. 82 m Sinusoidal phase grating s incident plane wave Λ glass refractive index n q=+5 q=+4 q=+3 q=+2 q=+1 q=0 q= –1 q= –2 q= –3 q= –4 q= –5 Field after grating is expressed as: amplitude plane waves (diffraction orders), propagation angle The q-th exponential physically is a plane wave propagating at angle θq i. Write the grating equation and determine what is known. In OptiGrating, the coupled mode equations are based on non-orthogonal coupled mode theory. This depends on the spacing of the grating and the wavelength of the incident light. , no significant power can be coupled from the . 
The Calculations box will calculate the angle of incidence and the angle of diffraction for […] Grating equation – transmission grating with normal incidence • Θd is angle of diffracted ray • λis wavelength • l is spacing between slits • p is order of diffraction Θd input Diffracted light l p d λ sinθ= Except for not making a small angle approximation, this is identical to formula for location of maxima in multiple slit a sinusoidal grating profile h of the form h(x) =ssin(Kx), K = 2p L, (1) where s is the grating amplitude, K is the grating wave number, and L is the period of the grating. In each slit the light bends in every direction. Because this distance is on the order of the wavelength of light, when light waves pass through the grating the slits are able to cause constructive and SPIE Digital Library eBooks. Dust, scratches & pinholes on the surface of the grating 157 10. The angular dispersion of a grating is determined by differentiating the diffraction angle with respect to wavelength and is equal to order*(ruling density) /cos(angle of diffraction), where the ruling density is the number of grating lines per unit length. Specifically, light duty steel or aluminum grating covered floors, platforms, walkways, trenches, etc. Young's double slit problem solving Resolvance of Grating. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles θ m which are given by the grating equation This physics video tutorial explains how to solve diffracting grating problems. The duplication process is described below for replicated gratings. Diffraction grating definition, a band of equidistant, parallel lines, usually more than 5000 per inch (2000 per centimeter), ruled on a glass or polished metal surface for diffracting light to produce optical spectra. 
Figure 2: Output angles of a reflective diffraction grating with 800 lines per millimeter as functions of the Diffraction Limited Grating Resolving Power The angular width of a diffraction peak is limited by the physical size, W, of the grating (seen tilted in projection at the diffracted angle). 4 we discuss the transient grating, which is the time-domain analogue of degenerate four-wave mixing. The grating equation shows that the angles of the diffracted orders only depend on the grooves' period, and not on their shape. Spurious fringe patterns due to the recording system 159 10. 4 shows a first order spectrum from 200 to 1000 nm spread over a focal field in spectrograph configuration. When light passes through a diffraction grating, it is dispersed into a spectrum. 11. These rays are then diffracted at an angle −θ r. The condition for strong intensities is, of course, that ϕ should be a multiple of 2π. Implications of the grating equation: order overlap. The incident and diffracted angles and are governed by the grating equation and depend on wavelength and the lines/mm of the grating. On the transmission side (the right side of the grating in the Figure), the equation is similar: 𝑡)sin(𝜃𝑡= 𝑖 (sin𝜃𝑖 𝑐)+ 𝜆 Λ (2) where m is an integer 0, ±1, ±2, … For m = 0, again, the second term in the right hand side of Equation A parallel bundle of rays falls perpendicular to the grating. There are few escapes from such a simple expression. (ii) techniques and procedures used to determine the wavelength of light using a diffraction grating and using the equation A transmission diffraction grating is similar to the Young's slit experiment except instead of having two slits there are many equally spaced lines ruled on a glass slide. 
Aug 29, 2017 · Measuring the distance between the grating and the screen and measuring the position of the maxima is immediate to obtain the angles θ m and from these we can calculate the grating pitch, using the equation previously described and knowing that λ is 632. Note the reading of both the verniers. Corresponding equations can be derived for the π-polarisation by consideration of the appropriate Fresnel reflection formulae [e. L. Diffraction gratings. THE DIFFRACTION GRATING 12 1. e. Overlapping of Diffracted Spectra 22 2. A light source of unknown frequency is incident on a grating. Recalling Resolution depends on only the total number of grooves and Based on equation 2-3, it is clear that the maximum spectral range of a spectrometer is determined by the detector length (L D), the groove density (1/d) and the focal length (F). The effect of wavelength can be demonstrated by illuminating the line grating with white light (containing a mixture of all configuration. A grating is a set of equally spaced, narrow, parallel sources. Resolvance of Grating. A diffraction grating is an optical component with a regular pattern. Also, d is the distance between slits. Assume reflection grating. A BRIEF HISTORY OF GRATING DEVELOPMENT 12 1. The gratings also need to be free-standing (i. Once a value for the diffraction grating's slit spacing d has been determined, the angles for the sharp lines can be found using the equation d sin θ = m λ for m = 0, ± 1, ± 2,. 6 micrometers, corresponding to about 625 tracks per millimeter. You can find the complete series here: Part 1. The diffraction grating splits up the light into a spectra. d = grating spacing in metres (m) Diffraction Grating Simulation with OmniSim FDTD and FETD software. 
The geometry of the diffraction pattern from a grating is governed by the grating equation
$$a(\sin\theta_i + \sin\varphi_m) = m\lambda,$$
where $a$ is the groove spacing (pitch), $\theta_i$ is the incident angle, $\varphi_m$ is the diffracted angle of the $m$'th order, and $m$ is the order of diffraction. Each grating is fabricated from a highly accurate master grating that is copied many times.

A light source is incident on a grating at a particular angle, $\alpha$, and a series of bright spots are seen on a screen placed some distance from the grating. Substituting $d$ for QP and $n\lambda$ for QY and rearranging gives the grating equation $n\lambda = d\sin\theta$.

A typical diffraction grating consists of an optical material substrate, with a large number of parallel grooves ruled or replicated in its surface and overcoated with a reflecting material such as aluminum. Both the waveguide nature coupling and grating coupling are considered.

Worked example: we know $d\sin 26^\circ = m \times 420$ nm and $d\sin 28^\circ = m\lambda$. Dividing these two, the unknowns $d$ and $m$ vanish, and we get $\sin 26^\circ/\sin 28^\circ = 420\ \mathrm{nm}/\lambda$, so $\lambda = 420\ \mathrm{nm} \times \sin 28^\circ/\sin 26^\circ \approx 449.79$ nm.

A diffraction grating is an optical component having a periodic structure which can split and diffract light into several beams travelling in different directions. The visuals become even more stunning when you turn one diffraction grating… CalcTool allows you to enter grating density in standard units, or as a period.
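The grating equation can be checked numerically. The sketch below is ours (the function name and parameter choices are illustrative); it solves $a(\sin\theta_i + \sin\varphi_m) = m\lambda$ for the diffracted angle, with angles in degrees and wavelength and pitch in the same length unit.

```python
import math

def diffraction_angle(wavelength_nm, pitch_nm, order, incident_deg=0.0):
    """Solve a(sin(theta_i) + sin(phi_m)) = m*lambda for the diffracted angle phi_m.

    Returns None when |sin(phi_m)| > 1, i.e. the order does not propagate.
    """
    s = order * wavelength_nm / pitch_nm - math.sin(math.radians(incident_deg))
    if abs(s) > 1.0:
        return None                      # evanescent: no such diffraction order
    return math.degrees(math.asin(s))

# 600 lines/mm -> pitch of 1e6/600 ~ 1666.7 nm; 633 nm HeNe light, normal incidence
pitch = 1e6 / 600
for m in range(4):
    print(m, diffraction_angle(633, pitch, m))
```

For this grating the m = 1 and m = 2 orders propagate (at roughly 22° and 49°), while m = 3 would require $\sin\varphi_m > 1$ and so does not exist, matching the statement above that the equation only has solutions for some orders.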
If, for example, a light source emits a continuum of wavelengths from 20 nm to 1000 nm, then for any order $m \neq 0$ the grating equation depends on the wavelength $\lambda$, so the various colored images of the source, corresponding to slightly different values of $\beta$, spread out into a continuous spectrum. In this example we use OmniSim's FDTD and FETD engines to model a diffraction grating at oblique incidence, and we compare the results with analytical data obtained from the grating equation.

The grating equation: the period $\Lambda$ of the grating determines the diffraction angles of the resulting diffraction orders, as given by
$$m\lambda = \Lambda(\sin\theta_m - \sin\theta_i).$$
Likewise, longer wavelengths give rise to larger diffraction angles at constant line grating spacings. Red laser beam split by a diffraction grating (figure).

Homework statement: a diffraction grating gives a second-order maximum at an angle of 31° for violet light (λ = 4.…).

Setting $n_1$ to 1 and running $n_2$ from 2 to infinity yields the Lyman series. This calculator finds the period of a Bragg grating needed for a predetermined wavelength and index of refraction. To use this information in the diffraction grating equation, we need to convert it into a slit spacing, $d$. A diffraction grating is a material that contains a large number of parallel slits separated by a distance $d$, and can be manufactured by carving glass with a sharp tool in a large number of precisely positioned parallel lines, with untouched regions acting like slits (Figure 4.…).
In the case where no light is incident upon the waveguide, there will only be downwards…

The grating equation for light diffraction states that, for the diffraction pattern that forms when light is passed through an array of regularly spaced openings, the diffraction order number ($n$) times the wavelength of the light ($\lambda$) is equal to the distance between slits ($d$) times the sine of the diffraction angle ($\theta$): $n\lambda = d\sin\theta$. This is known as the diffraction grating equation.

Holographic diffraction gratings are optically generated by recording an interference pattern on a photoresist-coated substrate.

The grating equation only predicts the directions of the modes, not how much power is in them.

A final "simple but useful" short equation is the grating lobe equation, which tells you where a "repeat major lobe" (a duplicate of the main lobe with the same amplitude, as opposed to a sidelobe) will show up if one does not spatially sample the array at the Nyquist criterion of half a wavelength or smaller.

The unique property of having light only in certain orders makes the diffraction grating a key component in many optical systems. A diffraction grating is a device with many, many parallel slits very close together: a large number of parallel, closely-spaced slits with spacings of the order of the wavelength of light. Each wavelength of the input beam spectrum is sent into a different direction, producing a rainbow of colors under white light illumination. These angles are measured from the grating normal, which is shown as the dashed line perpendicular to the grating surface at its center. This allows for the analysis of the elemental makeup of the objects in question based on the resulting spectra. A coupler analysis also involves the characteristic-length term $L_c\cos(\theta_{in})$, where $L_c$ is the grating characteristic length.
A close-up of the center of the grating can be seen at the bottom right. The device (Fig. 1) operates based on multiple reflections of an input diverging source. For the reflected orders, $n_m = n_i$, and the grating equation becomes $m\lambda = d(\sin\theta_i + \sin\theta_m)$. The grating size can be expressed in terms of the number of grooves, $N$, and their spacing $d$. Diffraction grating: a thin film of clear glass or plastic that has a large number of lines per mm drawn on it.

For fixed grating spectrometers, it can be shown that the angular dispersion from the grating is described by Equation (1), where $d$ is the groove period (equal to the inverse of the groove density), $\beta$ is the diffraction angle, $m$ is the diffraction order, and $\lambda$ is the wavelength of light, as can be seen in Figure 1. Resolution is given as the reciprocal linear dispersion. (Grating Coupler for Slab Waveguide, Kevin Randolph Harper, Brigham Young University - Provo, Figure 3.)

Diffraction Grating Equation with Example Problems. In Figure 1, parallel rays of monochromatic radiation, from a single beam in the form of rays 1 and 2, are incident on a (blazed) diffraction grating at an angle $\theta_i$ relative to the grating normal. For a given angle of incidence $\theta$, the grating equation gives the angle of diffraction $\theta_m$ for each order $m$ for which a solution to (1) exists. Let us try to find out where we get strong intensity in these circumstances. In general, the grating equation for constructive maxima is $(a + b)\sin\theta = n\lambda$, where $(a + b)$ is the grating element.

Questions: 1. For the zeroth order ($m = 0$), $\alpha$ and $\beta_0$ are equal and opposite, resulting in the light simply being reflected, i.e., no diffraction. Can one use a Compact Disc like a reflection grating? Absolutely! One can use a compact disc to determine the wavelength of light with the standard grating equation $n\lambda = d(\sin\alpha + \sin\beta)$, where $n$ is the echelle order, $\lambda$ the wavelength, $d$ the grating ruling separation, and $\alpha$ and $\beta$ the incident and dispersion angles relative to the grating normal. This means the grating bar aspect ratio $d/b$ is 100 or greater.
For constructive interference to occur, we know that the phase shift in scattering from adjacent echellettes must be an integer number of wavelengths (so the crests of the scattered waves from the two facets are in-phase). This is only done within the two-particle description, as no transport exists on the single-particle level. To be more complete, if a grating is at an interface between two media with indexes ni and nt, the Grating Equation (1) is written as, (4) where nm is the index of refraction of the region into which the diffracted light travels. The grating (as an optical instrument) was invented by Joseph von Fraunhofer in 1823, when the wave theory was already available but not universally recognized. In the simulation, red light has a wavelength of 650 nm, green light has a wavelength of 550 nm, and blue light has a wavelength of 450 nm. 6 um. The grating equation is a good starting point when describing the properties of gratings. When the masking grating is similar in spatial frequency to the test grating, masking is strong, and when it is dissimilar, masking is weak. "GRATING" is a spreadsheet program written in MS-Excel for the purpose of analysis and design of welded steel grating and fabricated aluminum bar grating. If we use white light, what color do you expect for the zero order, '=0 at the central point o? Note that if we use a grating of larger lines/mm, we get a smaller !. A triangular profile is commonly used. Typically, the FWHM is 0. [4] Equation (1) is known as the grating equation. (b) In an experiment, laser light of wavelength 633 nm is incident on a grating. d(sinθ i + sin θ m) When solved for the diffracted angle maxima, the equation is Diffraction gratings allow optical spectroscopy. From a computational point of view, it makes sense to simplify our search space to only focus on positive values of . 5. How to use grating in a sentence. 2. And using equation 1 we can calculate the value of each wave length. 
your plot with a best-fit line and have Kaleidagraph display the equation of the line along with the uncertainties in the slope and intercept. Using a spectrometer, it is possible to measure the in-cident angle as well as the angle of the diffracted max-ima. Grating equation (2)' can be updated as the following relationship: Fig. One application of diffraction gratings is in spectroscopy, derivative of the grating equation. Irregularities in the depth of the grooves 158 10. 79nm Define grating. A grating can separate individual wavelengths since the grating equation is satisfied at different points in the imaging plane for different wavelengths. The problem is that a diffraction grating can be used in many ways and the equation can be simplified in different ways depending on the situation. Grating Lobes. When parallel bundle of rays falls on the grating, these rays and their associated wave fronts form an orthogonal set so the wave fronts are perpendicular to the rays and parallel to the grating (as shown in Fig. where is the periodic refractive index perturbation of the grating, and n 0 (x, y) is the index profile of waveguide. A parallel bundle of rays falls perpendicular to the grating. Interference patterns produced by a diffraction grating are projected on a screen. According to the superposition principle, the net displacement is simply given by the May 24, 2016 · Volume grating as its name suggest is a grating that occupies the volume of a medium. DISPERSION 23 2. 1. 6 x 10-6°C. The wavelength dependence in the grating equation shows that the grating separates an incident polychromaticbeam into its constituent wavelength components, i. This is at a path difference of rays from successive slits of 0 λ, 1 λ, 2 λ, etc . and β0 are equal and opposite, resulting in the light simply being reflected, i. 6. Grating lobes sometimes occur with phased array antennas (and also with ultrasound probes used in sonography). 
where $m$ is called the order of the spectrum, $\lambda$ is the wavelength, $d$ is the spacing between grating lines, and $\theta$ is the diffraction angle measured with respect to the direction of the light incident on the grating.

Wavelength scale: for a constant deviation mounting with a fixed angular deviation, the grating equation can be written assuming the −1 order. According to the equation, the size of the diffraction angle decreases as the line grating space intervals increase. For red light of wavelength 600 nm, this would give… Equations (76) and (77) are the PSM equations for an unslanted reflection grating at oblique incidence for the σ-polarisation. At normal incidence the equation takes the form (5), where $N$ is the number of lines per unit length of the grating.

The equation is useful for calculating the change in wavelength of a monochromatic laser beam in various media. However, the angular separation of the maxima is generally much greater for a diffraction grating because the slit spacing is so small. The minimum wavelength difference that can be resolved by the diffraction grating is $\Delta\lambda = \lambda/(mN)$, where $N$ is the total number of illuminated grooves; equivalently, the resolvance is $R = \lambda/\Delta\lambda = mN$.

Calculate the percent deviation for each wavelength using % deviation = (data − theory)/theory × 100%, where "theory" is the tabulated wavelength from the last experiment. Remember that $10^8$ angstrom = 1 cm.

Since the grating equation at normal incidence is $d\sin\theta = m\lambda$, it can be easily generalized to the case where the incident light is not at normal incidence: $\Delta = \Delta_1 + \Delta_2 = a\sin\theta_i + a\sin\theta_m = m\lambda$, i.e. $a(\sin\theta_i + \sin\theta_m) = m\lambda$ for $m = 0, \pm 1, \pm 2, \ldots$ The angles of the diffracted modes are related to the wavelength and grating period through this grating equation. Therefore the diffraction grating can resolve light composed of different wavelengths.

The dispersion is $D = \Delta\theta/\Delta\lambda$. For lines of nearly equal wavelengths to appear as widely separated as possible, we would like our grating to have the largest possible dispersion.
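The resolving-power relation $R = \lambda/\Delta\lambda = mN$ (a standard result) can be checked numerically. The sketch below is ours; the sodium doublet spacing used as input, about 0.6 nm at 589 nm, is a common textbook example.

```python
def resolvance(wavelength_nm, delta_lambda_nm):
    """R = lambda / delta_lambda, the resolving power needed to separate two lines."""
    return wavelength_nm / delta_lambda_nm

def grooves_needed(wavelength_nm, delta_lambda_nm, order):
    """Minimum number of illuminated grooves N, from R = m * N."""
    return resolvance(wavelength_nm, delta_lambda_nm) / order

R = resolvance(589.0, 0.6)                    # sodium D lines, 589.0 nm and 589.6 nm
print(round(R))                               # prints 982
print(round(grooves_needed(589.0, 0.6, 2)))   # prints 491 (second order)
```

So separating the sodium doublet in second order needs only about 500 illuminated grooves, which even a coarse grating provides easily; the document's point that resolution depends on the total number of grooves and the order, not on the groove density itself, follows directly.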
We can use this to calculate where our first grating lobe (m = ±1) would appear. 3 nm in most sensor applications. dsinθ m = mλ. Oct 25, 2020 · Note that this is exactly the same equation as for double slits separated by \(d\). convention for the planar diffraction grating equation was emphasized, (3) the advantages of discussing " conical " dif- fraction grating behavior in terms of the direction cosines of Grating definition, a fixed frame of bars or the like covering an opening to exclude persons, animals, coarse material, or objects while admitting light, air, or fine material. Using 0 as obtained above and the value of the grating spacing obtained from the number of nrlings per mm stated on the grating (it should be about 600 rulings/mm), use equation (l) to determine the wavelengths of each of the lines observed in the rnercury spectnrm. One obtains this 3) Grating lobe equation . A grating is said to be a transmission or reflection grating according to whether it is transparent or mirrored—that is, The "Grating Equation" satisfied for a parallel beam of monochromatic light. As you know from the discussion of double slits in Young's double-slit experiment, light is diffracted by, and spreads out after passing through, each described by Equation (1). Email: sales@gratingpacific. Assume air. codirectional grating coupler, in the central region of the figure, has a grating layer of length The rectangular-tooth grating layer is composed of semiconductor material in one region and glass in the other. Equation (2), also known as the Bragg reflection wavelength, is the peak wavelength of the narrowband spectral component reflected by the FBG. Nov 05, 2018 · What is a Diffraction Grating? Lastly, such a symmetric pattern is produced when the light is monochromatic and coherent. Irregularities in the position of the grooves 157 10. The value of θ m is given by the grating equation shown above, so that θ m = arcsin mλ d FIG. 
From the data of procedure (4), again determine the wavelength of the bright green 10. Calculate the difference in the reading to obtain the diffraction angle. A diffraction grating is an optical component whose effect is similar to a prism: it splits white light into its component colors. What is the wavelength of the light used? (1 Å = 10^-10 m) Known : Diffraction grating equation. A wave front is incident on a periodic array of scatterers (separated by distance a) depicted in the figure. 3. This page supports the multimedia tutorial Diffraction. If the diffraction grating is 1. where N is the total number of grooves on the diffraction grating. Dec 12, 2007 · The well-known diffraction grating equation can be represented by a simple and useful graph that makes the analysis of the diffraction orders produced by a grating easier. Grating lobes are a Diffraction Grating Kaleidoscope: Kaleidoscopes create impressive visual displays by simply turning a knob. The diffraction pattern produced by the grating is described by the equation mλ = d sin θ, where m is the order number, λ is a selected wavelength, d is the spacing of the grooves, and θ is the angle Interference and Diffraction 14. You should be able to derive this. This is called the Littrow condition. Enter the desired values in the white boxes. Since the angular position of the diffraction spot changes with wavelength (according to the grating equation), a white light source will be split into its spectral components at different angles. 2. It is characterized by a translation coordinate system which allows us to write a system of linear partial differential equations with constant coefficients, after using the Maxwell equations in curvilinear coordinates. The Rydberg formula may be applied to hydrogen to obtain its spectral lines. A special case occurs when the angle of incidence i is equal to the angle of diffraction i'.
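The Rydberg formula mentioned above supplies the hydrogen wavelengths that a grating spectrometer is typically used to measure. A short sketch for the visible (Balmer) series; R_H is the standard Rydberg constant for hydrogen:

```python
# Balmer lines of hydrogen from the Rydberg formula:
# 1/lambda = R_H * (1/n1**2 - 1/n2**2) with n1 = 2 for the visible series.
R_H = 1.097e7  # Rydberg constant, in 1/m

def balmer_wavelength_nm(n2):
    """Vacuum wavelength (nm) of the Balmer transition n2 -> 2."""
    inverse_lambda = R_H * (1.0 / 2**2 - 1.0 / n2**2)
    return 1e9 / inverse_lambda

for n2 in (3, 4, 5):
    print(n2, round(balmer_wavelength_nm(n2), 1))
# n2 = 3 -> ~656.3 nm (red), n2 = 4 -> ~486.2 nm (blue-green), n2 = 5 -> ~434.1 nm (violet)
```

These are the "3 hydrogen lines" that lab instructions of this kind usually ask students to look up.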
Do not ignore the sign; it contains information. described by Equation (1). The new equation that we will use is: Conclusion. The number of spectra that are visible in a given grating can be easily calculated with the help. Sketch the pattern you observed when the laser light passed through a diffraction grating (i.e. attach the piece of paper from your screen). These high-quality instrument-grade ruled gratings will satisfy almost all of your diffraction requirements, especially when high efficiency is the primary concern. The spectral resolution of the resulting spectra is determined by their resolving power equation. In section 3. This type of grating can be photographically mass produced rather cheaply. From Equation (1) with a grating of given groove density and for a given value of α and β: (9) kλ = constant. A grating disperses light of different wavelengths to give, for any wavelength, a narrow fringe. Bragg and his son Sir W. Diffraction gratings are used to make very accurate measurements of the . Under this condition (in 1st order), the grating equation reduces to λ = 2d sin i. We are interested in the potential diffracted wave front depicted on the right side of the scatterer array. study of diffraction by a grating, quite different from those used nowadays. Using more expensive laser techniques, it is possible to create line densities of 3000 lines/mm or higher. When light encounters an entire array of identical, equally-spaced slits, called a diffraction grating, the bright fringes, which come from constructive interference of the light waves from different slits, are found at the same angles they are found if there are only two slits. Weiner, and Christopher Lin The virtually-imaged phased array VIPA is a side-entrance etalon with potential application as a The grating thus works with a constant angular deviation between the incident and diffracted light.
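The Littrow condition stated above (λ = 2d sin i in first order) is straightforward to evaluate; the 1200 lines/mm grating and 30° angle below are illustrative assumptions, not values from the text:

```python
import math

def littrow_wavelength(period, angle_deg, order=1):
    """Wavelength retro-diffracted in the Littrow condition: m*lambda = 2*d*sin(i)."""
    return 2.0 * period * math.sin(math.radians(angle_deg)) / order

# Illustrative: a 1200 lines/mm grating used in Littrow at 30 degrees
d = 1e-3 / 1200   # grating spacing in metres
print(littrow_wavelength(d, 30) * 1e9)   # ~833 nm retroreflected in first order
```

The same geometry returns half that wavelength in second order, which is why Littrow-mounted gratings are often paired with order-sorting filters.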
Fiber Bragg gratings One embodiment is to create a fiber Bragg grating (FBG) in an optical fiber. The equation given is nλ = d(X/L), where n is the order of fringe, λ is the wavelength of light, d is the distance between the slits, and X is the distance from the The distance between lines in the grating is 5.0 × 10^-6 m. This grating calculator is intended for reference and educational purposes only. A diffraction grating is a piece of glass with lots of closely spaced parallel lines on it, each of which allows light to pass through it; this is a transmission diffraction grating. 2: Experimental set-up for measuring wavelengths with a diffraction grating The image created at θm is called the mth-order image. For particles, the forms of energy conservation written in momentum terms would be different, leading to different diffraction equations as shown in the following. The Grating Equations As shown in Fig. Commercial surface-relief gratings are produced using an epoxy casting replication process developed in the mid-1900s. The Diffraction Grating A diffraction grating is essentially a large number of equally spaced sources, and thus the equation applies. When white light — a medley of wavelengths exhibiting tremendous incoherence — is diffracted, the pattern generated is profoundly variegated. This paper explains the construction of the graph and how to use it with an example. If the wavelength of the incident light is known, then equation (1) may be solved for d, the average distance between lines on the grating. Surface irregularities in the grating coating 157 10. The FWHM (full-width-half-maximum) or bandwidth of this reflection depends on several parameters, particularly the grating length.
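The Bragg reflection wavelength of the FBG discussed here is λ_B = 2·n_eff·Λ at normal incidence. A sketch — the effective index and grating period below are illustrative assumptions chosen to land in the telecom band, not values given in the text:

```python
def bragg_wavelength(n_eff, period):
    """Peak wavelength reflected by a fiber Bragg grating: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * period

# Illustrative values for a silica-fiber FBG (assumed, not from the text)
n_eff = 1.447        # effective refractive index of the fiber core mode
period = 535.6e-9    # grating period in metres
lam_B = bragg_wavelength(n_eff, period)
print(round(lam_B * 1e9, 1))   # ~1550.0 nm, in the telecom C-band
```

Because λ_B depends on both n_eff and Λ, anything that perturbs them (strain, temperature) shifts the reflected peak — which is exactly what makes FBGs useful as sensors.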
Linear If the grating acts as a mirror, n = 0. where λ is the vacuum wavelength of light, n the average refractive index of the medium, θ the propagation angle in the medium relative to the direction normal to the grating, and Λ the grating period. Dispersion, different wavelengths giving constructive interference at different angles, requires that n be a non-zero integer. 5 we show how the two-particle equations of the matter can be related to the Boltzmann and the diffusion That is the grating structure is designed to match the propagation constant for both the TE and TM modes at a specific wavelength λ0 at the centre of the stopband. However, it is possible that the array will have equally strong radiation in other directions. There are typically two different types of diffraction grating – the ruled grating and the holographic grating. Due to the Cauchy–Schwarz inequality applied to ∫dz |A_z(z)|² and ∫dz |A_tz(z)|², the grating equation [1,2], which says that if light of wavelength λ is incident on a grating with grating constant d at an angle (relative to the grating normal), then the diffracted light Bragg's Law refers to the simple equation: (eq 1) nλ = 2d sin θ, derived by the English physicists Sir W. 1) where λ is the wavelength of the incident light, θi is the angle of the incident light wave, and θm is the diffraction angle of the mth diffraction order. 1). d sin θ = nλ (1) The path length of light is related to the wavelength of light and the structure of the grating. However, in recent years the grating has replaced the prism for this purpose. Reflection region: n_trn sin θm = n_inc sin θinc − mλ0/Λx. In the grating equation, m is the order of diffraction, which is an integer. when the orbital speed found in Equation (14) is inserted. They occur in uniformly spaced arrays (arrays with an equal distance between adjacent elements
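Bragg's law nλ = 2d sin θ from the passage above can be solved for the reflection angle. The X-ray wavelength and plane spacing below are illustrative values, not numbers given in the text:

```python
import math

def bragg_angle_deg(wavelength, spacing, order=1):
    """Bragg's law n*lambda = 2*d*sin(theta), solved for theta in degrees."""
    s = order * wavelength / (2.0 * spacing)
    if s > 1.0:
        raise ValueError("no Bragg reflection exists for this order and spacing")
    return math.degrees(math.asin(s))

# Illustrative: Cu K-alpha X-rays (0.154 nm) on crystal planes spaced 0.2 nm apart
print(round(bragg_angle_deg(0.154e-9, 0.2e-9), 2))   # ~22.65 degrees
```

As with the grating equation, high orders stop existing once n·λ/(2d) exceeds 1, which the function makes explicit.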
This approach to the resolvance of a grating has made use of the fact that the phase is a continuous variable which can be represented analytically, and that the differential of this variable is also well-defined. TO light into c Newton a prism. Ex component of the nearfield of the a beam of finite width Bar Grating: Bar grating provides a reliable load-bearing paneled surface that is commonly used in plants, warehouses, and other manufacturing facilities. Jun 25, 2020 · The dispersion D of the grating is defined as: "The angular separation Δθ per unit wavelength Δλ is called the dispersion D of the grating. Fiber Bragg Grating Tutorial figure: When a Bragg grating exists in an optical fiber, it will reflect a specific wavelength dependent on the period of the Bragg grating and the index of refraction of the optical fiber. Calculate the angle between the central maximum and the second order maxi mum. attach the piece of paper from your screen). On the transmission side (the right side of the grating in the Figure), the equation is similar: 𝑡)sin(𝜃𝑡= 𝑖 (sin𝜃𝑖 𝑐)+ 𝜆 Λ (2) where m is an integer 0, ±1, ±2, … For m = 0, again, the second term in the right hand side of Equation Jul 09, 2020 · A diffraction grating is a multi-slit surface with a periodic structure that splits and diffracts light into several beams traveling in different directions. The location of the different diffraction orders from a diffraction grating: A diffraction grating has a very large number of slits spaced closely together, such that the light from each of these slits interferes with the light from the others. In this formula a is the angle of the incident ray respective to a line normal to the surface of the grating, b est the angle of emergence after diffraction, m is the number of ruled grooves per millimiter (in general, m is comprised between 50 and 1200 grooves/mm), k is an integer number for the spectrum order and l is the wavelength in millimeter. 
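The resolvance discussed above is conventionally quantified as R = λ/Δλ = mN, where N is the total number of illuminated grooves. A sketch with illustrative grating dimensions (the 600 grooves/mm and 25 mm width are assumptions):

```python
# Chromatic resolving power of a grating: R = lambda / dlambda = m * N.
def min_resolvable_dlambda(wavelength, order, n_grooves):
    """Smallest wavelength difference resolvable near `wavelength` in a given order."""
    return wavelength / (order * n_grooves)

N = 600 * 25   # 600 grooves/mm illuminated over a 25 mm wide grating -> 15000 grooves
dlam = min_resolvable_dlambda(589.0e-9, 1, N)
print(dlam * 1e9)   # ~0.039 nm in first order
```

Even this modest grating resolves far better than the 0.6 nm separation of the sodium doublet near 589 nm, which is why sodium lines are a standard resolving-power test.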
As light of wavelength λ from the discharge tube passes through the grating, it is diffracted through an angle, θ, given by the grating equation. The grating has \ (2. 22, α = 0. In theory, they function much the same as two slit apertures (see Experiment 9). A typical grating has density of 250 lines/mm. Here (a+ b) is the grating element and is equal to 1/N = 2. 8 nm. Extension to the Diffraction Grating zThe first multiplier describes the Fraunhofer diffraction on one slit and the second describes the interference from N point sources zd·sinϕis the path length difference ∆between the rays emitted by the slits zTherefore, we can write the equation for the main maximums of Strategy Once a value for the diffraction grating's slit spacing d has been determined, the angles for the sharp lines can be found using the equation Since there are 10,000 lines per centimeter, each line is separated by 1/10,000 of a centimeter. Generally, a diffraction grating consists of a peri-odically varying boundary of period L between two dissimilar materials with refractive indices n 1 and n 2. But that does NOT mean that a fixed geometry gives us just one wavelength at a time. Obviously, d = \(\frac {1} { N }\), where N is the grating constant, and it is the number of lines per unit length. Lc calculated in the paper is 13 +/-1um, which gives us optimal beam waist radius between 16 and 18. Making this change, and using Equation 1 from Part 1 for ∆Φ, gives: Which simplifies to. (1) This is the well-known Grating Equation. The equation for the position of the maxima for a diffraction grating is exactly the same as the equation for double-slits because it's derived in the same way, using the same geometry. Your two equations are very different in fact: Snell's law can be derived using geometric optics, while the grating equation requires wave theory of light. from the nonlinear grating formed by the interference of 𝐸 and ℇ𝑖. 
Grating (top), prism (bottom), from Diffraction grating Some differences between gratings and prisms * Gratings are Equation (3) is equivalent to the coupling efficiency of a one-dimensional grating coupler system, where both the grating coupler and the target mode extend uniformly along x-direction. Related End-of-Chapter Exercises: 7, 16 – 18, 38, 39, 48. You can use the slider to control the grating spacing, which is the distance between neighboring openings in the grating. 55 x 10-6/°C and η = 8. In this formula, \(\theta\) is the angle of emergence at which a wavelength will be bright. The double-pass optical design of ARIES' equation for destructive interference. Bragg in 1913 to explain why the cleavage faces of crystals appear to reflect X-ray beams at certain angles of incidence (θ). The electrical potential energy is given by PEn = −(1/(4πε0)) e²/rn (18), which gives PEn = −m e⁴/(4 ε0² n² h²) (19) when the Bohr radius given by Equation (13) is substituted. Where, n is the order of grating, d is the distance between two fringes or spectra; λ is the wavelength of light; θ is the angle to maxima; Solved Examples. If βm is on the opposite side of the grating normal from α, its sign is opposite. Typically, for such gratings the term volume Bragg gratings (VBGs) is used in relation to Sir William Bragg who in 1915 used diffraction of light propagating through a crystal to determine the crystal's lattice structure [ 1 ]. 5. Then solving for d max. Mar 13, 2017 · Understanding diffraction grating behavior including conical and Rayleigh anomalies from transmission gratings Generalized grating equation for virtually-imaged phased-array spectral dispersers Albert Vega, Andrew M.
According to the Huygens–Fresnel principle, each point on the wavefront of a propagating wave can be considered to act as a point source, and the wavefront at any subsequent point can be found by adding together the contributions from each of these. Use the grating equation with d = (1/6000) cm to find the wavelength for each color. In deriving the above form of the grating equation, we assumed an incident beam perpendicular to the grating. 1 Light diffraction by a thin moving grating For light diffraction we use the dispersion relation for photons ω = kc as well as the grating dispersion relation Ω = Kv to set the energy balance in the form: Feb 06, 2020 · The equation is too simplistic to compensate for the differences. 1 Superposition of Waves Consider a region in space where two or more waves pass through at the same time. Two, three, four and many slits; Diffraction Mostly this confusion comes from how the so-called "grating equation" is used. Grating lobes is the term for secondary main lobes (very strong sidelobes) in the antenna diagram. (a + b) = 2.54/N cm, N being the number of lines per inch in the grating. Therefore we have the formula for a grating in which light both comes in and goes out at an angle: ϕ = 2πd sin θout/λ − 2πd sin θin/λ. Typical straw man parameters are: Grating period p = 200 nm, grating depth d = 4 micron, grating bar thickness b = 40 nm, and sidewall roughness < 1 nm. not supported by a membrane) to minimize x-ray absorption. This paper explains the c A diffraction grating is the tool of choice for separating the colors in incident light.
By making certain approximations Nov 04, 2011 · Some of these show a cross section of the geometry of a diffraction grating, a common illustration in textbooks of optics, spectroscopy, and analytical chemistry. The groove density, depth and profile dictate the spectral range, efficiency, resolution and performance of the diffraction grating. What is the wavelength of the light used? (1 Å = 10-10 m) Known : This physics video tutorial explains how to solve diffracting grating problems. You will need to look up the wavelengths of the 3 hydrogen lines to plug into the equation. Transmission diffraction gratings consist of many thin lines of either absorptive material or thin 3) Grating lobe equation . The distance between these spots and the central spot is Jun 14, 2018 · This equation is one of the most fundamental to optical microscopy and demonstrates that an objective's ability to resolve fine details in a specimen, such as a periodic grating, is dependent upon both the wavelength of illuminating light rays and the numerical aperture. Diffraction grating equation: Grating Equation . The equation for maximum spacing is a function of wavelength of operation and maximum look angle: I know what capital K is, it's two pi over lambda divide out the common terms and this is the grating equation. Mar 05, 2012 · the grating equation is: d*sin(θ) = mλ where d is the spacing, θ is the angle m is the order and λ is the wavelength. This is evident on CDs as a vague, hazy rainbow. The relationship between the grating spacing and the angles of the incident and diffracted beams of light is known as the grating equation. After passing through grating rays forms the diffraction patterns which can be seen through telescope. CD as Diffraction Grating: Interference • The tracks of a compact disc act as a diffraction grating • Nominal track separation on a CD is 1. The perfect grating 159 10. so that if the diffraction order k is doubled, λ is halved, etc. 
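The maximum-spacing condition alluded to above is commonly written d_max = λ/(1 + sin θ_max) for a phased array scanned out to θ_max. A sketch under that assumption (the 3 cm wavelength and scan angles are illustrative):

```python
import math

def max_element_spacing(wavelength, max_scan_deg):
    """Largest array element spacing with no grating lobes while scanning to max_scan_deg,
    using the common phased-array condition d_max = lambda / (1 + sin(theta_max))."""
    return wavelength / (1.0 + math.sin(math.radians(max_scan_deg)))

lam = 0.03   # e.g. a 10 GHz radar, 3 cm wavelength (illustrative)
print(max_element_spacing(lam, 0) / lam)    # broadside only: 1.0 wavelength
print(max_element_spacing(lam, 45) / lam)   # scanning to 45 deg: ~0.586 wavelength
```

The wider the scan range, the tighter the spacing must be — at θ_max = 90° the condition collapses to the familiar half-wavelength rule.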
Figure 1 shows an example, where the diffraction orders −1 to +3 are possible. Figure 3. of the equation. THERMO RGL 14 2. However, the slits are usually closer in diffraction gratings than in double slits, producing fewer maxima at larger angles. Label each of the interference maxima. Using more expensive laser techniques, it is possible to create line densities of (3000 lines)/mm or higher. Behind the grating is interference. A diffraction grating is used to spatially separate light of different wavelengths. The beam O. must be satisfied for any integer m. DIFFRACTION ORDERS 21 2. Feb 29, 2012 · Diffraction grating formula: a sin θ = nλ, where a is the separation of the lines on the grating (1/lines per metre) and θ is the angle diffracted. Nov 10, 2017 · The following equation, known as the classical Bragg grating equation (1), teaches that these types of optical sensors are influenced by temperature and strain variations: where λB is the Bragg wavelength; while the parameters for a silica fiber with a germanium-doped core are ρe = -0. Figure 2 Grating resolution/dispersion calculator for determining the grating resolution for each Princeton Instruments camera and spectrometer combination. A typical grating with a poor line density is (250 lines)/mm. Since there are 10,000 lines per centimeter, each line is separated by 1/10,000 of a centimeter. Figure \(\PageIndex{4}\): Diffraction grating showing light rays from each slit traveling in the same direction. In the grating equation, m is the order of diffraction, which is an integer. You can pretty easily identify where the light constructively interferes by using the following equation: The equations above may lead to values of sin θout with a modulus larger than 1; in that case, the corresponding diffraction order is not possible. The VIPA, sketched in Fig. Elements must be spaced properly in order to avoid grating lobes.
When a source shines on the grating, images of the light will appear at a number of angles—θ1, θ2, θ3 and so on. Outside the grating region the two subwaveguides have negligible interaction, i.e. Since the sine of an angle is always of magnitude less than 1, the largest value of λ for which there is a solution for the grating equation is for n = 1, sin α = 1, sin β = 1 The Grating Equation A beam of light which falls on a grating will be diffracted into one or several beams. 05 to 0. Problem 4: The diffraction grating equation. It's the same equation that we had before, where d sin θ = mλ gives you the constructive points for a diffraction grating interference pattern on the wall. A grating containing 4000 slits per centimeter is illuminated with a monochromatic light and produces the second-order bright line at a 30° angle. which is the diffraction wavelength limit for any grating. The condition for maximum intensity is the same as that for a double slit. For incident angle θi, the diffracted angle θm of the mth diffracted order is determined by a grating equation expressed as n1 sin θi + n0 sin θm + mλ/Λ = 0, (1) where λ is the vacuum wavelength. In equation (1), k is the wave number and d is the spacing between adjacent array elements. 2.50 × 10^5 lines per metre. Differentiating the grating equation a sin θm = mλ gives a cos θm dθm = m dλ, so the angular dispersion is dθm/dλ = m/(a cos θm). Angular dispersion can be converted to linear dispersion at the exit slit by using the diagram below: dl ≈ f dθ, so dl/dλ = f dθ/dλ, where f is the monochromator focal length. This is true for most types of monochromators, such as the Czerny-Turner, Ebert and Littrow types. It uses Kogelnik's equations and therefore doesn't account for losses in multiple orders nor Fresnel reflections. Positive orders have been omitted for clarity.
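The 4000-slits-per-centimeter problem above can be solved directly from d sin θ = mλ:

```python
import math

# Worked example from the text: 4000 slits per centimeter, with the
# second-order (m = 2) bright line observed at 30 degrees.
d_cm = 1.0 / 4000                                   # slit spacing in cm
lam_cm = d_cm * math.sin(math.radians(30)) / 2      # lambda = d*sin(theta)/m
print(lam_cm * 1e8)   # in angstrom (1 angstrom = 1e-8 cm): ~6250 A, i.e. 625 nm
```

The answer, about 6250 Å (625 nm), sits in the orange-red part of the visible spectrum.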
The optimal beam waist diameter can be calculated by the following equation: $$\omega_{0}=1. Plugging these values into the grating equation yields λ = 2d. To do this: Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Then from the equation, number of lines per unit length of the grating can be calculated. 16]. 𝑛 m ` q𝑛 g l a L1. The simulation uses our implementation [36] of the unidirectional pulse propagation equation (UPPE) method [37], where all 3 beams intersect in the 500µm thick fused silica plate (with 𝐸 and ℇ𝑖 crossing at angle 𝜃 and 𝐸 𝑖 normal to the surface). 1 and Fig. 5 Reflective Grating When the relationship between the incident light and the m th-order diffracted light describes mirror reflection with respect to the facet surface of the grooves, most of the energy is concentrated into the mth-order diffracted light. The transmission-type diffraction grating flints on a stand just in front of a HeNe laser, and is brightly projected on the lecture room screen. n=(a+b) sin θ/λ . the q-th diffraction order Here, for the first time to our knowledge, we present a generalized grating equation, which constitutes an approximate analytical dispersion law for the VIPA and provide experimental data that confirm our result. 0 × 10^2 nm). Previous studies considered grating diffraction or Bragg diffraction separately to explain this iridescence. It turns out that the results lead to the same model. 
Where: k – k factor of the Bragg grating, p_e – photoelastic constant (variation of index of refraction with axial tension). The p_e for the optical fiber is Meaning that the strain sensitivity of a FBG is given by the expression: Equation 3. For a FBG @ 1550 nm (typical) we can simplify its strain sensitivity to: Equation 4. They then say that the equation for the diffraction intensity pattern is gi The dispersion of a grating is governed by the grating equation, usually written as: where: n is the order of diffraction, λ is the diffracted wavelength, d is the grating constant (the distance between successive grooves), θi is the angle of incidence measured from the normal and θd is the angle This d max is the condition for no grating lobes in the reduced scan angle (θmax), where θmax is less than π/2 (90°).
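The simplified strain sensitivity of a 1550 nm FBG referred to above (Equation 4) works out to roughly 1.2 pm per microstrain when the photoelastic constant is taken as p_e ≈ 0.22 — a typical value for silica fiber, used here as an assumption since the text's numeric values are incomplete:

```python
def fbg_strain_shift_pm(bragg_nm, microstrain, p_e=0.22):
    """Bragg wavelength shift in picometres: dlambda = lambda_B * (1 - p_e) * strain.

    p_e = 0.22 is a typical photoelastic constant for silica fiber (assumed value).
    """
    dlam_nm = bragg_nm * (1.0 - p_e) * (microstrain * 1e-6)
    return dlam_nm * 1e3   # convert nm to pm

# A 1550 nm FBG stretched by 1 microstrain shifts by about 1.2 pm
print(round(fbg_strain_shift_pm(1550.0, 1.0), 2))
```

Interrogators therefore need picometre-level wavelength resolution to read out microstrain-level mechanical signals.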
Works by Benjamin H. Feintzeig On the Choice of Algebra for Quantization. Benjamin H. Feintzeig - 2018 - Philosophy of Science 85 (1):102-125. In this article, I examine the relationship between physical quantities and physical states in quantum theories. I argue against the claim made by Arageorgis that the approach to interpreting quantum theories known as Algebraic Imperialism allows for "too many states." I prove a result establishing that the Algebraic Imperialist has very general resources that she can employ to change her abstract algebra of quantities in order to rule out unphysical states. Deduction and definability in infinite statistical systems. Benjamin H. Feintzeig - 2017 - Synthese 196 (5):1-31. Classical accounts of intertheoretic reduction involve two pieces: first, the new terms of the higher-level theory must be definable from the terms of the lower-level theory, and second, the claims of the higher-level theory must be deducible from the lower-level theory along with these definitions. The status of each of these pieces becomes controversial when the alleged reduction involves an infinite limit, as in statistical mechanics. Can one define features of or deduce the behavior of an infinite idealized system from a theory describing only finite systems? In this paper, I change the subject in order to consider the motivations behind the definability and deducibility requirements. The classical accounts of intertheoretic reduction are appealing because when the definability and deducibility requirements are satisfied there is a sense in which the reduced theory is forced upon us by the reducing theory and the reduced theory contains no more information or structure than the reducing theory.
I will show that, likewise, there is a precise sense in which in statistical mechanics the properties of infinite limiting systems are forced upon us by the properties of finite systems, and the properties of infinite systems contain no information beyond the properties of finite systems. The Classical Limit as an Approximation. Benjamin H. Feintzeig - 2020 - Philosophy of Science 87 (4):612-639. I argue that it is possible to give an interpretation of the classical ℏ→0 limit of quantum mechanics that results in a partial explanation of the success of classical mechanics. The interpretation... On Noncontextual, Non-Kolmogorovian Hidden Variable Theories. Benjamin H. Feintzeig & Samuel C. Fletcher - 2017 - Foundations of Physics 47 (2):294-315. One implication of Bell's theorem is that there cannot in general be hidden variable models for quantum mechanics that both are noncontextual and retain the structure of a classical probability space. Thus, some hidden variable programs aim to retain noncontextuality at the cost of using a generalization of the Kolmogorov probability axioms. We generalize a theorem of Feintzeig to show that such programs are committed to the existence of a finite null cover for some quantum mechanical experiments, i.e., a finite collection of probability zero events whose disjunction exhausts the space of experimental possibilities. The classical limit of a state on the Weyl algebra. Benjamin H. Feintzeig - unknown. This paper considers states on the Weyl algebra of the canonical commutation relations over the phase space R^{2n}. We show that a state is regular iff its classical limit is a countably additive Borel probability measure on R^{2n}.
It follows that one can "reduce" the state space of the Weyl algebra by altering the collection of quantum mechanical observables so that all states are ones whose classical limit is physical. Is the classical limit "singular"? Jer Steeger & Benjamin H. Feintzeig - 2021 - Studies in History and Philosophy of Science Part A 88 (C):263-279. We argue against claims that the classical ℏ → 0 limit is "singular" in a way that frustrates an eliminative reduction of classical to quantum physics. We show one precise sense in which quantum mechanics and scaling behavior can be used to recover classical mechanics exactly, without making prior reference to the classical theory. To do so, we use the tools of strict deformation quantization, which provides a rigorous way to capture the ℏ → 0 limit. We then use the tools of category theory to demonstrate one way that this reduction is explanatory: it illustrates a sense in which the structure of quantum mechanics determines that of classical mechanics. Reductive Explanation and the Construction of Quantum Theories. Benjamin H. Feintzeig - 2022 - British Journal for the Philosophy of Science 73 (2):457-486. I argue that philosophical issues concerning reductive explanations help constrain the construction of quantum theories with appropriate state spaces. I illustrate this general proposal with two examples of restricting attention to physical states in quantum theories: regular states and symmetry-invariant states. 1 Introduction 2 Background 2.1 Physical states 2.2 Reductive explanations 3 The Proposed 'Correspondence Principle' 4 Example: Regularity 5 Example: Symmetry-Invariance 6 Conclusion: Heuristics and Discovery. Review of Jeffrey A. Barrett's The Conceptual Foundations of Quantum Mechanics - Jeffrey A. Barrett, The Conceptual Foundations of Quantum Mechanics. Oxford: Oxford University Press (2020), 272 pp., $88.00. [REVIEW] Benjamin H.
Feintzeig - 2022 - Philosophy of Science 89 (1):202-205. Localizable Particles in the Classical Limit of Quantum Field Theory. Rory Soiffer, Jonah Librande & Benjamin H. Feintzeig - 2021 - Foundations of Physics 51 (2):1-31. A number of arguments purport to show that quantum field theory cannot be given an interpretation in terms of localizable particles. We show, in light of such arguments, that the classical ħ → 0 limit can aid our understanding of the particle content of quantum field theories. In particular, we demonstrate that for the massive Klein–Gordon field, the classical limits of number operators can be understood to encode local information about particles in the corresponding classical field theory. Correction to: Deduction and definability in infinite statistical systems. Benjamin H. Feintzeig - 2020 - Synthese 197 (12):5539-5540. Prop. 1 on p. 10 is false as stated. The proof implicitly assumes that all Cauchy sequences.
Biofibres from biofuel industrial byproduct—Pongamia pinnata seed hull Puttaswamy Manjula1, Govindan Srinikethan1 & K. Vidya Shetty1 Biodiesel production using Pongamia pinnata (P. pinnata) seeds results in a large amount of unused seed hull. These seed hulls serve as a potential source for cellulose fibres which can be exploited as reinforcement in composites. These seed hulls were processed using a chlorination and alkaline extraction process in order to isolate cellulose fibres. Scanning electron microscopy (SEM), dynamic light scattering (DLS), thermogravimetric analysis (TGA), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and nuclear magnetic resonance spectroscopy (NMR) analysis demonstrated the morphological changes in the fibre structure. Cellulose microfibres of diameter 6–8 µm, hydrodynamic diameter of 58.4 nm and length of 535 nm were isolated. Thermal stability was enhanced by 70 °C and crystallinity index (CI) by 19.8%, ensuring isolation of crystalline cellulose fibres. The sequential chlorination and alkaline treatment led to the isolation of cellulose fibres from P. pinnata seed hull. The isolated cellulose fibres possessed enhanced morphological, thermal, and crystalline properties in comparison with P. pinnata seed hull. These cellulose microfibres may potentially find application as biofillers in biodegradable composites by augmenting their properties. Cellulose is nature's most lavishly available polymer. Highly purified cellulose fibre has been isolated from several plant sources, such as branch barks of mulberry (Li et al. 2009), pineapple leaf fibres (Cherian et al. 2010; Mangal et al. 2003), pea hull fibre (Chen et al. 2009), coconut husk fibres (Rosa et al. 2010), banana rachis (Zuluaga et al. 2009), sugar beet (Dinand et al. 1999; Dufresne et al. 1997), wheat straw (Kaushik and Singh 2011), palm leaf sheath (Maheswari et al. 2012), Arundo donax L stem (Fiore et al. 2014), cotton stalk (Hou et al. 2014).
Over the past two decades these biofibres have been used as filler material in the preparation of composites and have gained considerable attention (Hubbe et al. 2008). In view of better utilization of renewable resources, there is a need to explore other renewable, greener sources that can be utilized in developing high-strength, lightweight biocomposites for high-end applications. Pongamia pinnata seed hull was chosen for the present work to exploit its potential as a source of cellulose fibres that could be utilized as reinforcement in biocomposites. In India and South East Asia, Pongamia pinnata (Karanja) seed is used for biodiesel production (Demirbas 2009). It is also a traditional medicinal plant, with all parts having certain medicinal value (Yadav et al. 2004). Biofuel production using P. pinnata seeds has resulted in large-scale cultivation of these trees (Shwetha et al. 2014). Biofuel processing leaves a significant amount of residual P. pinnata seed hull, in which the cellulose content approximates 40%, similar to that of shelly wood (Nadeem et al. 2009). Thus these underused seed hulls can find potential application as a source of cellulose fibres. Isolation of cellulose fibres is customarily carried out by mechanical treatments such as homogenisation (Du et al. 2016; Julie et al. 2016), sonication (Sheltami et al. 2012; Saurabh et al. 2016) and steam explosion (Saelee et al. 2014); chemical treatments such as acid hydrolysis (Abidin et al. 2015), TEMPO oxidation (Du et al. 2016) and chlorination and alkaline treatments (Sheltami et al. 2012; Johar et al. 2012; Maheswari et al. 2012); enzymatic treatments (Saelee et al. 2014); and combinations of two or more of the aforementioned processes. Chemical treatments usually act upon the binding material of the fibril structure, enabling the fibres to individualize (Johar et al. 2012).
Chlorination is a well-established chemical treatment that assists the isolation of high-quality pure cellulose fibres by bleaching and delignifying the cellulosic material, while alkali treatment dissolves the wax, pectin and hemicellulose, ensuring efficient isolation of cellulose microfibres. These chemical methods are used in combination to isolate cellulose fibres from different sources (Espino et al. 2014; Johar et al. 2012; Sheltami et al. 2012; Mandal and Chakrabarty 2011; Moran et al. 2008) and are also found to be efficient and economical when compared to high-energy-consuming mechanical methods (Motaung and Mtibe 2015). In the present research work, cellulose fibres were isolated from the P. pinnata seed hull using the chlorination and alkaline process. The isolated cellulose microfibres were characterized using scanning electron microscopy (SEM), dynamic light scattering (DLS), thermogravimetric analysis (TGA), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR) and nuclear magnetic resonance spectroscopy (NMR) analyses for their morphological, thermal and crystalline properties. Pongamia pinnata seed hulls were collected from "SEEDS" Research Centre, University of Agricultural Sciences, Bengaluru, India. All the chemicals used were of analytical grade. Fibre processing Pongamia pinnata seed hulls were separated from stones and other plant materials by hand picking. The dust and mud particles sticking to the seed hulls were removed by washing them extensively in tap water and finally with distilled water. They were then dried under sunlight for two days and stored in sealed polythene bags for further use. Cleaned seed hulls were ground, screened (0.25 mm sieves) and oven dried at 105 °C for 8 h. Isolation of cellulose microfibres Cellulose microfibres were isolated from P. pinnata seed hull by a chlorination and alkaline extraction process (Maheswari et al. 2012).
Cleaned seed hull fibres were dewaxed using a toluene–ethanol mixture (2:1) for 6 h. Excess solvent was removed from the fibres by suction, and the fibres were then dried in a hot air oven. Fibres were bleached with 7% NaClO2 at a fibre-to-liquor ratio of 1:50 (pH in the vicinity of 4–4.2 was maintained using an acetic acid and sodium acetate buffer) for 2 h at 100 °C and were washed successively with 2% sodium bisulphate, distilled water and ethanol. Further, the extraction of holocellulose from the fibres was carried out by treating with 17.5% NaOH solution at 20 °C for 45 min, followed by washing with 10% acetic acid. The fibres were then treated with 0.8% acetic acid and 0.7% nitric acid in the ratio 15:1 at 120 °C for 15 min. The mixture was cooled, filtered and washed sequentially with 95% ethanol and distilled water. The resulting cellulose fibres were oven dried at 105 °C until constant weight was achieved. Scanning electron microscopy (SEM) The morphological structures of gold-sputtered P. pinnata seed hull fibres and isolated cellulose fibres were observed under SEM (JSM-6380LA, JEOL). The micrographs were recorded at an acceleration voltage of 5–8 kV. Dynamic light scattering (DLS) The fibre dimensions of the isolated cellulose fibres dispersed in distilled water were measured by a dynamic light scattering instrument (DLS, nanoparticle analyser, HORIBA Scientific, nano partica SZ-100, Japan). Fourier transform infrared spectroscopy (FTIR) Pongamia pinnata seed hull fibres and isolated cellulose fibres mixed with KBr were pressed to form transparent thin pellets. FTIR spectra of the fibres were recorded in the range of 400–4000/cm with 4/cm resolution using an FTIR instrument (Jasco 4200, Jasco analytical instruments, USA). X-ray analysis (XRD) XRD measurements for P.
pinnata seed hull fibres and isolated cellulose fibres were obtained by X-ray diffractometer (X'Pert3 Powder, PANalytical, The Netherlands) using Cu Kα radiation (1.5406 Å) with an Ni filter at 40 kV, 15 mA. Scattered radiation was recorded in the range of 2θ = 10°–30° at a scan rate of 4°/min. The Segal method [Eq. (1)] was used to calculate the crystallinity index (CI), considering the intensity of the \(\left( {2 0 0} \right)\) peak (I 200, 2θ = 22.6°) and the intensity minimum between the \((2 0 0)\) and \((1 1 0)\) peaks (I am, 2θ = 18°), where I 200 represents the intensity of both crystalline and amorphous material and I am that of the amorphous material alone. $${\text{CI}} \% = \left( {1 - \frac{{I_{am} }}{{I_{200} }}} \right) \times 100$$ Thermogravimetric analysis (TGA) Thermograms for P. pinnata seed hull fibres and isolated cellulose fibres were determined using a thermogravimetric analyser (TGA Q50, TA instruments, USA) at a 10 °C/min heating rate in a nitrogen atmosphere. 13C NMR (CP-MAS) spectroscopy Spectra of P. pinnata seed hull fibres and isolated cellulose fibres were recorded on a solid-state NMR spectrometer (Bruker DSX 300 MHz). The operating frequency was fixed at 75.46 MHz for 13C nuclei. Fibres were spun at a 7.5 kHz spinning rate in a filled 5 mm rotor at room temperature. Delignification of the seed hull using acidified sodium chlorite was accomplished as an initial step in the isolation of cellulose, oxidizing the lignin and hemicellulose; the subsequent alkaline treatment solubilizes the residual lignin and hemicellulose, resulting in the isolation of cellulose fibres. These cellulose fibres were characterized for their morphological features and thermal stability, and also to ensure removal of matrix components such as lignin and hemicellulose. SEM analysis The scanning electron microscope images of P. pinnata seed hull fibre after different stages of chemical treatment are presented in Fig. 1a–c. Dewaxed seed hull fibre presented in Fig.
1a shows an irregular appearance due to cellulose fibres embedded between waxes and cementing materials such as lignin and hemicellulose (Reddy and Yang 2005; Haafiza et al. 2013). The fibres after sodium chlorite bleaching show cellulose fibres emerging out of the matrix, as shown in Fig. 1b. This can be attributed to oxidation and solubilisation of the matrix components, viz. lignin and hemicellulose. The cementing components (lignin and hemicellulose) released from the fibres are dissolved by mild alkali treatment (Elanthikkal et al. 2010). As a result, the SEM image of the isolated cellulose fibres presented in Fig. 1c illustrates individualized single strands of cellulose fibre of diameter 6–8 µm, each of which is in turn a bundle of cellulose microfibres (Chen et al. 2011) having diameters of 270–370 nm. The cellulose fibres isolated in this work were of smaller diameter than cellulose fibres obtained from other sources, such as soybean straw (Reddy and Yang 2009), yielding fibres of diameter 15.6 µm, and coconut palm sheath (Maheswari et al. 2012), yielding fibres of 10–15 µm diameter. Scanning electron microscope images: a dewaxed Pongamia pinnata seed hull, b sodium chlorite-treated fibres and c isolated cellulose fibres. DLS analysis The aqueous dispersion of cellulose fibres was analysed by the dynamic light scattering technique in order to find their size distribution. DLS analysis results are summarized in Table 1. The histogram presented in Fig. 2 shows two distributions, indicating the presence of two dimensions (Kavitha et al. 2013; Srinivas et al. 2012), owing to the fibrous structure of cellulose representing both length and diameter. de Carvalho Mendes et al. (2015) also reported such two peaks in the DLS histogram of an aqueous dispersion of a cellulose fibrous structure.
Dimensions determined by DLS represent the hydrodynamic size (equivalent sphere size) having the same diffusion coefficient as the fibres being measured (Horiba knowledgebase 2017). The mean hydrodynamic size of the isolated cellulose fibres for the shorter dimension (diameter) was observed to be 58.4 nm, whereas the longer dimension (length) was observed to be 536.3 nm, with standard deviations of 3.3 and 44.1 nm, respectively. The diameter of the fibre obtained by SEM analysis differs from that obtained by the DLS technique. The sizes of the fibre obtained by SEM and DLS are not comparable, as the diameter obtained by SEM presents the dry fibre size, whereas that obtained by DLS signifies the hydrodynamic diameter in aqueous dispersion. The difference in size estimated by the two methods is generally higher for nonspherical particles. Table 1 Particle size distribution values of isolated cellulose fibre. DLS analysis spectra of isolated cellulose fibres. FTIR spectroscopy monitors the functional groups present in the fibres. Figure 3a and b present the spectra obtained for P. pinnata seed hull fibres and isolated cellulose fibres. The band around 3600–3000/cm, assigned to stretching vibrations of O–H and C–H, is observed in both P. pinnata seed hull fibres and isolated cellulose fibres, indicating the presence of cellulose-related functional groups (Qiao et al. 2016; Shin et al. 2012; Kalita et al. 2015; Sun et al. 2004a, b, c; Kaushik and Singh 2011). Peaks at 2894.63 and 2919.7/cm are generally assigned to C–H stretching vibration in lignin and polysaccharides (cellulose and hemicellulose) (Shin et al. 2012; Sun et al. 2004a, b, c, d; Kaushik and Singh 2011; Zhong et al. 2013). The peak at 1735.62/cm is assigned to C=O stretching vibration of the carbonyl, acetyl and uronic ester groups of the ferulic and p-coumaric acids of lignin and/or the xylan component of hemicellulose.
The disappearance of these peaks in the cellulose fibre spectrum confirms the removal of lignin and hemicellulose (Kalita et al. 2015; Kaushik and Singh 2011; Sun et al. 2004a, c, d; Elanthikkal et al. 2010; Rosa et al. 2012; Oun and Rhim 2016). Peaks at 1646.91 and 1648.84/cm are attributed to O–H bending of absorbed water and are observed in both spectra; the presence of water could be related to the hydrophilic nature of the cellulose component even though the samples analysed were dry (Qiao et al. 2016; Sun et al. 2004b, c; Kaushik and Singh 2011; Zhong et al. 2013; Rosa et al. 2012; Oun and Rhim 2016; Haafiza et al. 2013). Peaks at 1457.92 and 1423.21/cm are usually attributed to the aromatic C=C stretch of lignin, and the reduction of the peak at 1423.21/cm in the cellulose fibre spectrum indicates partial delignification after the treatments (Sun et al. 2004a, b, c, d; Kaushik and Singh 2011; Elanthikkal et al. 2010; Haafiza et al. 2013). Peaks around 1373.07 and 1168.65/cm observed in P. pinnata seed hull fibres are assigned to C–H asymmetric deformation and C–O antisymmetric bridge stretching, respectively (Kalita et al. 2015; Sun et al. 2004a, b, c, d; Kaushik and Singh 2011; Zhong et al. 2013; Rosa et al. 2012). Finally, the increase in the peak at 1033.66/cm observed in the isolated cellulose fibre spectrum is attributed to –C–O–C– pyranose ring skeletal vibration, which indicates an increase in cellulose content (Sun et al. 2004a, b; Elanthikkal et al. 2010). FTIR spectra of a Pongamia pinnata seed hull fibres, b isolated cellulose fibres. The 13C NMR spectra of untreated P. pinnata seed hull fibres and isolated cellulose are shown in Fig. 4a, b. The P. pinnata seed hull fibre spectrum in Fig. 4a illustrates the presence of the corresponding signals for cellulose, hemicellulose and lignin, whereas in the isolated cellulose fibre spectrum shown in Fig. 4b only peaks of cellulose carbon atoms appear.
Peaks between 107 and 60 ppm, corresponding to the six carbon atoms of the cellulose molecule, are observed in both spectra. The cellulose carbon atom peak at 107.6 ppm is associated with C1 (Halonen et al. 2013), peaks at 77–67 ppm are assigned to the C2, C3 and C5 carbon atoms (Sun et al. 2004a, d), peaks at 91.454–84.447 ppm are assigned to C4 (Bhattacharya et al. 2008) and, finally, 65.305–58 ppm is associated with the C6 carbon atom (Sun et al. 2004b, c, d). Similar observations were reported by Halonen et al. (2013), where the peaks around 109–101 ppm were associated with the C1 atom, 80–68 ppm with C2, C3 and C5, 91–80 ppm with C4 and 68–58 ppm with C6 (Bhattacharya et al. 2008). In the case of the cellulose spectrum, the absence of peaks at 20–33 and 110–140 ppm, associated with methylenes in lignin, and at 58.896 ppm, associated with –OCH3 groups in lignin and hemicellulose, confirms the removal of the matrix components hemicellulose and lignin (Sun et al. 2004d; Bhattacharya et al. 2008). The 13C NMR spectra of a untreated Pongamia pinnata seed hull fibres, b isolated cellulose fibres. Thus the removal of hemicellulose and lignin from the P. pinnata seed hull fibres is supported by both NMR and FTIR spectral data. The thermograms of untreated P. pinnata seed hull fibres and isolated cellulose fibres, shown in Fig. 5, have onset degradation temperatures of 200 and 270 °C, respectively. The major degradation peak at around 250–350 °C observed for the isolated cellulose fibres is mainly due to pyrolysis of cellulose and thermal depolymerisation of hemicellulose (Abraham et al. 2011; Li et al. 2015; Chen et al. 2011; Luduena et al. 2011), showing 75% degradation of cellulose. The increase in the decomposition temperature of the isolated cellulose fibres is related to the crystallinity of cellulose resulting from the removal of lignin and amorphous hemicelluloses (Abe and Yano 2009). Residual presence in both P.
pinnata seed hull fibres and isolated cellulose fibres at 800 °C was observed to be 25 and 7%, respectively, indicating a reduction in carbonaceous material under the nitrogen atmosphere, which is associated with the removal of hemicellulose (Li et al. 2015). Thus the high thermal stability observed for the isolated cellulose microfibres may broaden the fields of application of cellulose fibres at temperatures above 200 °C, especially for biocomposite processing. Thermogram of untreated Pongamia pinnata seed hull fibres and isolated cellulose fibres. X-ray diffraction (XRD) X-ray diffractograms of P. pinnata seed hull fibres and isolated cellulose fibres are presented in Fig. 6. Two peaks are observed at 2θ = 16° and 22.6° for both samples, which are characteristic of the crystal polymorphs of cellulose I and cellulose II, respectively (Bondeson et al. 2006; Novo et al. 2015). The peak at 2θ = 16° corresponds to the \((1 1 0)\) plane and that at 2θ = 22.6° to the \((2 0 0)\) plane. The crystallinity indices (CI) obtained using Eq. (1) for P. pinnata seed hull fibres and isolated cellulose fibres were 27.2 and 47%, respectively. The crystallinity of the isolated cellulose microfibres thus increased by 72.79% relative to the seed hull. This could be due to the presence of a large amount of crystalline cellulose and the removal of amorphous hemicellulose and lignin (Rosa et al. 2010) from the isolated cellulose fibres by the chlorination and alkaline treatment. X-ray diffraction patterns of untreated Pongamia pinnata seed hull and isolated cellulose fibres. Thus, from the above results it can be observed that the cellulose microfibres isolated from P. pinnata seed hull exhibited enhanced morphological, thermal and crystalline properties after chlorination and alkaline treatment. The sizes and increases in crystallinity of cellulose fibres obtained from different sources and isolation methods are summarized in Table 2.
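As a quick check of the crystallinity figures above, the Segal relation of Eq. (1) and the reported relative increase can be reproduced in a short script. The raw peak intensities below are hypothetical placeholders chosen only to give the reported CI values; they are not measured data from this work:

```python
# Segal crystallinity index (Eq. 1): CI% = (1 - I_am / I_200) * 100,
# where I_200 is the (200) peak intensity (2θ = 22.6°) and I_am is the
# amorphous minimum between the (110) and (200) peaks (2θ = 18°).

def segal_ci(i_200, i_am):
    """Crystallinity index (%) from the (200) peak and amorphous-minimum intensities."""
    return (1.0 - i_am / i_200) * 100.0

# Hypothetical intensities chosen to reproduce the CI values reported in the text.
ci_hull = segal_ci(i_200=1000.0, i_am=728.0)       # untreated seed hull: 27.2%
ci_cellulose = segal_ci(i_200=1000.0, i_am=530.0)  # isolated fibres: 47.0%

# Relative increase in crystallinity after treatment: (47 - 27.2) / 27.2 * 100.
increase = (ci_cellulose - ci_hull) / ci_hull * 100.0
print(round(ci_hull, 1), round(ci_cellulose, 1), round(increase, 2))
```

This also makes explicit why the text can state both a 19.8% enhancement (the absolute difference in CI points) and a 72.79% increase (the difference relative to the untreated hull's CI).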
The size of the fibres obtained in the present work is comparable with that obtained from other sources by different isolation methods. However, the percentage increase in crystallinity for the fibres isolated from P. pinnata seed hull after chlorination and alkaline treatment is higher than that for fibres isolated from other sources by the chemical treatment methods of other researchers. As observed in Table 2, the increase in crystallinity is lower in most of the cases in spite of additional mechanical treatments. Julie et al. (2016) obtained an increase of around 97% in the crystallinity of fibres isolated from arecanut husk fibres; however, they adopted homogenization, a mechanical process, after chemical treatment. Isolation of cellulose microfibres by chlorination and alkaline treatment is economical compared to such methods, as an enormous amount of energy is consumed in the mechanical treatments. The chlorination and alkaline treatment of P. pinnata seed hull resulted in the isolation of crystalline cellulose fibres of 6–8 μm diameter. It is observed that the cellulose fibres isolated from P. pinnata seed hull show a higher percentage increase in crystallinity when compared to cellulose fibres obtained from other resources by chemical treatments. The higher crystallinity of cellulose fibres accounts for the higher tensile strength of the fibres (Alemdar and Sain 2008), which in turn is expected to enhance the mechanical properties of cellulose fibre-reinforced composites. Table 2 Comparison of fibre size and crystallinity index (CI) of cellulose fibre isolated from different sources and isolation treatments. Cellulose fibres were isolated from P. pinnata seed hull by a sequential chlorination and alkaline process and the resultant microfibres were characterized by SEM, DLS, FTIR, NMR, TGA and XRD analyses. The cellulose microfibres were of diameter ranging from 6 to 8 μm and mean hydrodynamic diameter of 58.4 nm.
NMR and FTIR analyses confirmed the removal of hemicellulose and lignin. The crystallinity of the fibres increased by 72.79% after the treatment, with a CI of 47% for the isolated cellulose fibres. The thermal behaviour of the fibres improved, as evidenced by an increase of the degradation temperature by 70 °C. The most notable observation is that the degradation temperature of the isolated cellulose fibres is higher than 200 °C, which could broaden their application potential in the field of biocomposite processing. A notable increase in crystallinity, with dimensions similar to those of cellulose fibres isolated from other resources by various chemical treatments, is a significant feature of the resource and the isolation method adopted in the present study. Thus the present work substantiates the success of the sequential chlorination and alkaline extraction process, which alone suffices to obtain smaller-diameter, crystalline cellulose microfibres from P. pinnata seed hull. These biofibres have potential application as fillers in biodegradable composites to enhance their properties. Abe K, Yano H (2009) Comparison of the characteristics of cellulose microfibril aggregates of wood, rice straw and potato tuber. Cellulose 16:1017–1023 Abidin NAMZ, Aziz FA, Radiman S, Ismail A, Yunus WMZW, Nor NM, Sohaimi RM, Sulaiman AZ, Halim NA, Darius DDI (2015) Isolation of microfibrillated cellulose (MFC) from local hardwood waste, Resak (Vatica spp.). Mater Sci Forum 846:679–682. doi:10.4028/www.scientific.net/MSF.846.679 Abraham E, Deepa B, Pothan LA, Jacob M, Thomas S, Cvelbar U, Anandjiwala R (2011) Extraction of nanocellulose fibrils from lignocellulosic fibers: a novel approach. Carbohydr Polym 86(4):1468–1475 Alemdar A, Sain M (2008) Isolation and characterization of nanofibers from agricultural residues—wheat straw and soyhulls.
Bioresour Technol 99(6):1664–1671 Bhattacharya D, Germinario LT, Winter WT (2008) Isolation, preparation and characterization of cellulose microfibers obtained from bagasse. Carbohydr Polym 73:371–377 Bondeson D, Mathew A, Oksman K (2006) Optimization of the isolation of nanocrystals from microcrystalline cellulose by acid hydrolysis. Cellulose 13:171–180 Chen Y, Liu C, Chang PR, Anderson DP, Huneault MA (2009) Pea starch-based composite films with pea hull fibers and pea hull fiber-derived nanowhiskers. Polym Eng Sci 49(2):369–378. doi:10.1002/pen.21290 Chen W, Yu H, Liu Y, Chen P, Zhang M, Hai Y (2011) Individualization of cellulose nanofibers from wood using high-intensity ultrasonication combined with chemical pretreatments. Carbohydr Polym 83:1804–1811 Cherian BM, Leão AL, de Souza SF, Thomas S, Pothan LA, Kottaisamy M (2010) Isolation of nanocellulose from pineapple leaf fibers by steam explosion. Carbohydr Polym 81:720–725 de Carvalho Mendes CA, Ferreira MS, Furtado CRG, de Sousa AMF (2015) Isolation and characterization of nanocrystalline cellulose from corn husk. Mater Lett 148:26–29 Demirbas A (2009) Heavy metal adsorption onto agro based waste materials: a review. J Hazard Mater 157(2–3):220–229 Dinand E, Chanzy H, Vignon RM (1999) Suspensions of cellulose microfibrils from sugar beet pulp. Food Hydrocolloid 13(3):275–283 Du C, Li H, Li B, Liu M, Zhan H (2016) Characteristics and properties of cellulose nanofibers prepared by TEMPO oxidation of corn husk. BioResources 11(2):5276–5284 Dufresne A, Cavaille JY, Vignon MR (1997) Mechanical behavior of sheets prepared from sugar beet cellulose microfibrils. J Appl Polym Sci 64:1185–1194 Elanthikkal S, Gopalakrishnapanicker U, Varghese S, Guthrie JT (2010) Cellulose microfibers produced from banana plant wastes: isolation and characterization. 
Carbohydr Polym 80:852–859 Espino E, Cakir M, Domenek S, Román-Gutiérrez AD, Belgacem N, Bras J (2014) Isolation and characterization of cellulose nanocrystals from industrial by-products of Agave tequilana and barley. Ind Crops Prod 62:552–559. doi:10.1016/j.indcrop.2014.09.017 Fiore V, Scalici T, Valenza A (2014) Characterization of a new natural fiber from Arundo donax L. as potential reinforcement of polymer composites. Carbohydr Polym 106:77–83 Haafiza MKM, Eichhornc SJ, Hassana A, Jawaid M (2013) Isolation and characterization of microcrystalline cellulose from oil palm biomass residue. Carbohydr Polym 93(2):628–634 Halonen H, Larsson PT, Iversen T (2013) Mercerized cellulose biocomposites: a study of influence of mercerization on cellulose supramolecular structure, water retention value and tensile properties. Cellulose 20:57–65 Horiba knowledgebase (2017) Horiba Instruments, Inc. https://www.horiba.com/fileadmin/uploads/Scientific/eMag/PSA/Guidebook/pdf/PSA_Guidebook.pdf. Accessed 05 Nov 2016 Hou X, Sun F, Zhang L, Luo J, Lu D, Yang Y (2014) Chemical-free extraction of cotton stalk fibers by steam flash explosion. BioResources 9(4):6950–6967 Hubbe MA, Rojas OJ, Lucia LA, Sain M (2008) Cellulosic nanocomposites: a review. BioResources 3:929–980 Johar N, Ahmad I, Dufresnec A (2012) Extraction, preparation and characterization of cellulose fibres and nanocrystals from rice husk. Ind Crops Prod 37:93–99 Julie CCS, George N, Narayanankutty SK (2016) Isolation and characterization of cellulose nanofibrils from arecanut husk fibre. Carbohydr Polym. doi:10.1016/j.carbpol.2016.01.015 Kalita E, Nath BK, Deb P, Agan F, Islam MR, Saikia K (2015) High quality fluorescent cellulose nanofibers from endemic rice husk: isolation and characterization. Carbohydr Polym 122:308–313 Kaushik A, Singh M (2011) Isolation and characterization of cellulose nanofibrils from wheat straw using steam explosion coupled with high shear homogenization. Carbohydr Res 346(1):76–85. 
Kavitha B, Kumar KS, Narsimlu N (2013) Synthesis and characterization of polyaniline nano-fibers. Indian J Pure Appl Phys 51:207–209 Li R, Fei J, Cai Y, Li Y, Fengand J, Yao J (2009) Cellulose whiskers extracted from mulberry: a novel biomass production. Carbohydr Polym 76(1):94–99 Li W, Zhang Y, Li J, Zhou Y, Li R, Zhou W (2015) Characterization of cellulose from banana pseudo-stem by heterogeneous liquefaction. Carbohydr Polym 132:513–519 Luduena L, Fasce D, Alverez VA, Stefani PM (2011) Nanocellulose from rice husk following alkaline treatment to remove silica. BioResources 6(2):1440–1453 Maheswari UC, Obi Reddy K, Muzenda E, Guduri BR, Varada Rajulu A (2012) Extraction and characterization of cellulose microfibrils from agricultural residue—Cocos nucifera L. Biomass Bioenerg 46:555–563. doi:10.1016/j.biombioe.2012.06.039 Mandal A, Chakrabarty D (2011) Isolation of nanocellulose from waste sugarcane bagasse (SCB) and its characterization. Carbohydr Polym 86:1291–1299 Mangal R, Saxena NS, Sreekala MS, Thomas S, Singh K (2003) Thermal properties of pineapple leaf fiber reinforced composites. Mater Sci Eng 339:281–285 Moran JI, Alvarez VA, Cyras VP (2008) Extraction of cellulose and preparation of nanocellulose from sisal fibers. Cellulose 15:149–159. doi:10.1007/s10570-007-9145-9 Motaung TE, Mtibe A (2015) Alkali treatment and cellulose nanowhiskers extracted from maize stalk residues. Mater Sci Appl 6:1022–1032. doi:10.4236/msa.2015.611102 Nadeem R, Ansari TM, Akhtar K, Khalid AM (2009) Pb (II) sorption by pyrolysed Pongamia pinnata pods carbon (PPPC). Chem Eng J 152:54–63. doi:10.1016/j.cej.2009.03.030 Novo LP, Bras J, García A, Belgacem N, Curvelo AA (2015) Subcritical water: a method for green production of cellulose nanocrystals.
ACS Sustain Chem Eng 3:2839–2846 Oun AA, Rhim JW (2016) Characterization of nanocelluloses isolated from Ushar (Calotropis procera) seed fiber: effect of isolation method. Mater Lett 168:146–150 Qiao C, Chen G, Zhang J, Yao J (2016) Structure and rheological properties of cellulose nanocrystals suspension. Food Hydrocolloid 55:19–25 Reddy N, Yang Y (2005) Biofibers from agricultural byproducts for industrial applications. Trends Biotechnol 23(1):22–27 Reddy N, Yang Y (2009) Natural cellulose fibers from soybean straw. Bioresour Technol 100:3593–3598 Rosa MF, Medeiros ES, Malmonge JA, Gregorski KS, Wood DF, Mattoso LHC (2010) Cellulose nanowhiskers from coconut husk fibers: effect of preparation conditions on their thermal and morphological behavior. Carbohydr Polym 81(1):83–92 Rosa SM, Rehman N, de Miranda MIG, Nachtigall SM, Bica CI (2012) Chlorine-free extraction of cellulose from rice husk and whisker isolation. Carbohydr Polym 87:1131–1138 Saelee K, Yingkamhaeng N, Nimchua T, Sukyai P (2014) Extraction and characterization of cellulose from sugarcane bagasse by using environmental friendly method. In: Proceedings of The 26th Annual Meeting of the Thai Society for Biotechnology and International Conference, Mae Fah Lunag University (School of Science), Thailand, 26–29 November 2014 Saurabh CK, Mustapha A, Masri MM, Owolabi AF, Syakir MI, Dungani R, Paridah MT, Jawaid M, Khalil HPSA (2016) Isolation and characterization of cellulose nanofibers from Gigantochloa scortechinii as a reinforcement material. J Nanomater. doi:10.1155/2016/4024527 Sheltami RM, Abdullaha I, Ahmada I, Dufresnec A, Kargarzadeha H (2012) Extraction of cellulose nanocrystals from mengkuang leaves (Pandanus tectorius). Carbohydr Polym 88:772–779 Shin HK, Jeun JP, Kim HB, Kang PH (2012) Isolation of cellulose fibers from kenaf using electron beam. 
Radiat Phys Chem 81:936–940 Shwetha KC, Nagarajappa DP, Mamatha M (2014) Removal of copper from simulated wastewater using Pongamia pinnata seed shell as adsorbent. Int J Eng Res Appl 4(6):271–282 Srinivas CH, Srinivasu D, Kavitha B, Narsimlu N, Siva Kumar K (2012) Synthesis and characterization of nano size conducting polyaniline. IOSR J Appl Phys 1(5):12–15 Sun JX, Sun XF, Sun RC, Su YQ (2004a) Fractional extraction and structural characterization of sugarcane bagasse hemicelluloses. Carbohydr Polym 56:195–204 Sun JX, Sun XF, Zhao H, Sun RC (2004b) Isolation and characterization of cellulose from sugarcane bagasse. Polym Degrad Stabil 84:331–339 Sun XF, Sun RC, Su Y, Sun JX (2004c) Comparative study of crude and purified cellulose from wheat straw. J Agric Food Chem 52(4):839–847 Sun XF, Sun RC, Fowler P, Baird MS (2004d) Isolation and characterisation of cellulose obtained by a two-stage treatment with organosolv and cyanamide activated hydrogen peroxide from wheat straw. Carbohydr Polym 55:379–391 Xie J, Hse CY, De Hoop CF, Hu T, Qi J, Shupe TF (2016) Isolation and characterization of cellulose nanofibers from bamboo using microwave liquefaction combined with chemical treatment and ultrasonication. Carbohydr Polym 151:725–734 Yadav PP, Ahmed G, Maurya R (2004) Furanoflavonoids from Pongamia pinnata fruit. Phytochemistry 65(4):439–443 Zhong C, Wang C, Huang F, Jia H, Wei P (2013) Wheat straw cellulose dissolution and isolation by tetra-n-butylammonium hydroxide. Carbohydr Polym 94:38–45 Zuluaga R, Putaux JL, Cruz J, Velez J, Mondragon I, Ganan P (2009) Cellulose microfibrils from banana rachis: effect of alkaline treatments on structural and morphological features. Carbohydr Polym 76:51–59 PM, GS and KVS are the primary contributors, as this work is the result of Ph.D. work. All authors read and approved the final manuscript. Authors gratefully acknowledge University of Agricultural Sciences, Bengaluru, India for providing Pongamia pinnata seed hull. 
All authors consent to the final accepted version of the manuscript being considered for publication in the Bioresources and Bioprocessing journal. Department of Chemical Engineering, National Institute of Technology Karnataka, Surathkal, India Puttaswamy Manjula, Govindan Srinikethan & K. Vidya Shetty Correspondence to Puttaswamy Manjula. Manjula, P., Srinikethan, G. & Shetty, K.V. Biofibres from biofuel industrial byproduct—Pongamia pinnata seed hull. Bioresour. Bioprocess. 4, 14 (2017). https://doi.org/10.1186/s40643-017-0144-x Revised: 16 January 2017 Keywords: Cellulose microfibres, Pongamia pinnata seed hull, Hemicellulose, Lignin
Confidence intervals for rate ratios between geographic units Li Zhu1, Linda W. Pickle2 and James B. Pearson Jr.2 Accepted: 30 November 2016 Ratios of age-adjusted rates between a set of geographic units and the overall area are of interest to the general public and to policy stakeholders. These ratios are correlated for two reasons: first, each region is a component of the overall area, and hence there is an overlap between them; second, there is spatial autocorrelation between the regions. Existing methods for calculating the confidence intervals of rate ratios take into account only the first source of correlation. This paper incorporates spatial autocorrelation, along with the correlation due to area overlap, into the rate ratio variance and confidence interval calculations. The proposed method divides the rate ratio variance into three components, representing no correlation, overlap correlation and spatial autocorrelation, respectively. Results applied to simulated and real cancer mortality and incidence data show that with increasing strength and scale of spatial autocorrelation, the proposed method leads to substantial improvements over the existing method. If the data do not show spatial autocorrelation, the proposed method performs as well as the existing method. The calculations are relatively easy to implement, and we recommend using this new method to calculate rate ratio confidence intervals in all cases. Spatial autocorrelation Rate ratio Linked micromap plot Ratios of age-adjusted rates are of interest in public health research as a means of comparing rates in a set of geographic units with the rate in the overall area or in an area considered to be a "standard". They are useful in providing information to the general public on the health condition of the community, and to policy stakeholders on program planning and priority setting.
These rates (and ratios) between geographic units are correlated for two reasons: the first is that each region is a component (sub-region) of the overall area; the second is often referred to as the "First Law of Geography" [1], i.e., everything is related to everything else, but near things are more related than distant things. The second source of correlation is called spatial autocorrelation. Earlier work developed methods that estimated confidence intervals (CI) on age-adjusted rates and ratios of age-adjusted rates between non-overlapping regions [2] as well as between a sub-region and its parent region [3]. The latter took into account the correlation due to the overlap between the two regions, and showed that the resulting normal approximation was more efficient than the gamma approximation and the F interval [4]. Spatial autocorrelation is well known to be present in the distribution of diseases [5, 6] but has not yet been incorporated into rate ratio interval estimates published by the National Cancer Institute. The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute [7] is an authoritative source of information on cancer incidence and survival in the United States. SEER currently collects and publishes cancer incidence and survival data from population-based cancer registries covering approximately 30% of the US population. Cancer statistics, including rate ratios, are released annually at the national, registry, state and/or county level via the statistical software SEER*Stat [8]. Rate ratios are provided, but their interval estimates are computed assuming non-overlapping regions and no spatial autocorrelation. In this paper, we present revised confidence intervals that take spatial autocorrelation into account and show that the resulting intervals have more accurate coverage and higher statistical power. The proposed method considers both overlapping (non-spatial) and spatial autocorrelation.
The three methods referred to throughout the paper are (1) the Tiwari method [3], which accounts for the overlap between a sub-region and its parent region; (2) the non-spatial method, a special case of our proposal in which spatial autocorrelation is ignored and only overlap is accounted for; and (3) the spatial method, which includes both overlap and spatial autocorrelation. Methods (1) and (2) are equivalent, differing only in their underlying probability distribution assumptions.
Age-adjusted rates
To study the health or disease status of a region, it is common to calculate the disease rate and compare it across geographic regions. A straightforward measure is the crude disease mortality (or incidence) rate, calculated by dividing the total number of deaths (or new cases) in a given time period by the total number of people at risk during the same time period. A major problem with the crude rate is that it is an overall average that does not take into account possible confounding factors. The most common confounding factor for public health data is age, because many health conditions are age-related. Instead of the crude disease rate, the age-adjusted rate is usually calculated and reported to adjust for different age profiles between regions. Assuming that there are I geographic units and J age groups in the study area, and the data available are D ij , the number of deaths (or new cases), and n ij , the population size for region i and age group j, the age-specific rate, R ij , often expressed as the number of cases per 100,000 people at risk, is calculated as $$ R_{ij} = \frac{{D_{ij} }}{{n_{ij} }} \times 100{,}000 $$ Age adjustment can be done internally, where the age-specific rate of the standard (or reference) population is weighted by the proportions of each age group in the study population [9].
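As a minimal sketch (with hypothetical counts and our own function names, not data from the paper), the age-specific rate just defined, and the standard-population-weighted average used by the direct method described next, can be computed as:

```python
import numpy as np

def age_specific_rates(deaths, pop):
    """Age-specific rates R_ij per 100,000: deaths D_ij over population n_ij."""
    return np.asarray(deaths, float) / np.asarray(pop, float) * 100_000

def direct_adjusted_rate(deaths, pop, weights):
    """Directly age-adjusted rate: age-specific rates weighted by the
    standard-population proportions w_j (with sum(w_j) = 1)."""
    return float(np.asarray(weights, float) @ age_specific_rates(deaths, pop))

# Hypothetical two-age-group region: age-specific rates of 100 and 1000
# per 100,000 with standard weights 0.6 and 0.4 give an adjusted rate of 460.
rate = direct_adjusted_rate([10, 100], [10_000, 10_000], [0.6, 0.4])
```

The overall rate R Ω follows the same pattern after summing deaths and populations over regions within each age group.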
A more common age-adjustment is called the direct method where the age-adjusted rate R i of region i is calculated as $$ \begin{aligned} R_{i} & = \sum\limits_{j = 1}^{J} {w_{j} } \frac{{D_{ij} }}{{n_{ij} }} \times 100{,}000 \\ & = \sum\limits_{j = 1}^{J} {w_{j} } R_{ij} \\ \end{aligned} $$ where w j is the proportion of population size for age group j in the standard population and ∑ j=1 J w j = 1. Hence, the age-adjusted rate is the weighted average of age-specific rates, weighted by the standard population in order to minimize the effect of a difference in age distributions between regions. Let Ω denote the total region of interest, e.g., the entire U.S. Then the overall rate for Ω is computed by age adjusting after summing the number of deaths (numerator) and population (denominator) over all of the geographic regions, i.e., $$ \begin{aligned} R_{\varOmega } & = \sum\limits_{j = 1}^{J} {w_{j} } \frac{{\sum\nolimits_{i = 1}^{I} {D_{ij} } }}{{\sum\nolimits_{i = 1}^{I} {n_{ij} } }} \times 100{,}000 \\ & = \sum\limits_{j = 1}^{J} {w_{j} } R_{j} \\ \end{aligned} $$ For the rest of this paper, we use R i , R Ω and D i , D Ω to denote the random variables for the sub-regional and overall area age-adjusted rate and count respectively. The random variable of age-and region-specific rate, r ij , has expectation λ ij . Suppose \( \{ Z({\mathbf{s}}):{\mathbf{s}} \in {\mathbf{S}}\} \) represents a random process on surface \( {\mathbf{S}} \in {\mathbf{R}}^{{\mathbf{2}}} \), and Z(s 1), Z(s 2), …, Z(s I ) represents a partial realization of the random process. Z(·) is said to be second-order stationary if \( E[Z({\mathbf{s}})] = \mu \) for all \( {\mathbf{s}} \in {\mathbf{S}} \) (i.e., the mean of the process does not depend on location) and \( Cov\left[ {Z\left( {s_{i} } \right), Z\left( {s_{{i^{{\prime }} }} } \right)} \right] = C\left( {s_{i} - s_{{i^{{\prime }} }} } \right) \) for all \( s_{i} ,s_{i'} \in {\mathbf{S}} \). 
That is, the covariance function C(·), a measure of spatial correlation, depends only on the difference between locations s i and \( s_{{i^{{\prime }} }} \), not on the locations themselves. If \( Var\left[ {Z\left( {s_{i} } \right) - Z\left( {s_{{i^{{\prime }} }} } \right)} \right] = 2\gamma \left( {s_{i} - s_{{i^{{\prime }} }} } \right) \), then Z(·) is said to be intrinsically stationary and the function 2γ(·) is called a variogram. The semivariogram, γ(·), takes the value \( \gamma \left( {s_{i} - s_{{i^{{\prime }} }} } \right) \), a function of the difference between locations s i and \( s_{{i^{{\prime }} }} \) [10]. If, in addition, the covariance function C(·) does not depend on the direction between locations s i and \( s_{{i^{{\prime }} }} \), the process is called isotropic. For a stationary and isotropic spatial process, the semivariogram is a function of distance alone, i.e., \( \gamma ({\mathbf{h}}) \equiv \gamma (||{\mathbf{h}}||) \) where \( ||{\mathbf{h}}|| \) denotes the inter-point distance, the length of the vector \( {\mathbf{h}} \). A plot of this function against separation distance, \( ||{\mathbf{h}}|| \), conveys the spatial variability of the process (see Fig. 1). For a process with positive spatial autocorrelation, i.e., observations closer together are more alike than those further apart, the semivariogram value is non-decreasing with distance, indicating increasing variation with longer distance between two locations. Usually the semivariogram will approach a constant value (called the sill) at a large separation distance (called the range), beyond which the observations are considered spatially uncorrelated. The value of the semivariogram at \( ||{\mathbf{h}}|| = 0 \) is referred to as the nugget effect, and it represents the variation between two observations that are fairly close together. If the nugget effect is positive (greater than 0), it may be due to measurement error or a spatially discontinuous process.
Illustrative semi-variogram plot.
(credit: Samui and Sitharam [19])
For a second-order stationary spatial process Z(·), the semivariogram is related to the covariance function as $$ \gamma ({\mathbf{h}}) = C({\mathbf{0}}) - C({\mathbf{h}}) $$ If \( C({\mathbf{h}}) \to 0 \) as \( ||{\mathbf{h}}|| \to \infty \), then \( \gamma ({\mathbf{h}}) \to C({\mathbf{0}}) \). So \( C({\mathbf{0}}) \) is the variance of \( Z({\mathbf{s}}) \) and the sill of the semivariogram. The partial sill is defined as the difference between the sill \( C({\mathbf{0}}) \) and the nugget effect, or \( C({\mathbf{0}}) - \gamma ({\mathbf{0}}) \). When we are comparing two spatial processes, it is useful to measure the correlation (spatial autocorrelation) instead of the covariance. By definition, the spatial correlogram is $$ \rho ({\mathbf{h}}) = \frac{{C({\mathbf{h}})}}{C(0)} = \frac{{C({\mathbf{0}}) - \gamma ({\mathbf{h}})}}{C(0)} $$ \( \rho ({\mathbf{h}}) \) is analogous to a typical correlation in that \( |\rho ({\mathbf{h}})| \le 1 \). When the distance between two locations exceeds the range, \( \gamma ({\mathbf{h}}) = C({\mathbf{0}}) \) and \( \rho ({\mathbf{h}}) = 0 \), i.e., there is no spatial autocorrelation. There are a few commonly used parametric semivariogram models, including the spherical, exponential, Gaussian, and power models. A plot of the observed semivariogram of our lung cancer data suggested an exponential model, which is expressed as $$ \gamma ({\mathbf{h}} ) = \left\{ {\begin{array}{*{20}l} {0,} \hfill &\quad {h = 0} \hfill \\ {c_{0} + c_{e} \left[ {1 - \exp ( - h/a_{e} )} \right]} \hfill &\quad {h > 0} \hfill \\ \end{array} } \right. $$ where c 0 is the nugget effect, c e is the partial sill, and c 0 + c e is the sill. In this model, \( \gamma ({\mathbf{h}}) \) approaches the sill asymptotically, and an effective range is defined as the distance at which the correlogram is 0.05. Here, the effective range is 3a e (see Fig. 1).
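A quick numerical sketch of the exponential model (5) and the correlogram (4) derived from it (function and parameter names are ours, not the paper's):

```python
import numpy as np

def exp_semivariogram(h, c0, ce, ae):
    """Model (5): gamma(h) = 0 at h = 0, c0 + ce*(1 - exp(-h/ae)) for h > 0."""
    h = np.asarray(h, float)
    return np.where(h == 0, 0.0, c0 + ce * (1.0 - np.exp(-h / ae)))

def exp_correlogram(h, c0, ce, ae):
    """Correlogram (4) under model (5): rho(h) = (C(0) - gamma(h)) / C(0),
    with sill C(0) = c0 + ce; equals 1 at h = 0."""
    h = np.asarray(h, float)
    return np.where(h == 0, 1.0, (ce / (c0 + ce)) * np.exp(-h / ae))

# With no nugget (c0 = 0), rho at the effective range h = 3*ae is
# exp(-3), about 0.0498, matching the 0.05 definition in the text.
```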
Replacing \( \gamma ({\mathbf{h}}) \) in (4) with (5), the spatial correlogram becomes $$ \rho ({\mathbf{h}}) = \left\{ {\begin{array}{*{20}l} 1 \hfill &\quad {h = 0} \hfill \\ {\frac{{c_{e} }}{{c_{0} + c_{e} }}\exp ( - h/a_{e} )} \hfill &\quad {h > 0} \hfill \\ \end{array} } \right. $$ A larger proportion of partial sill to sill, \( \frac{{c_{e} }}{{c_{0} + c_{e} }} \), and a longer range, a e , mean stronger spatial autocorrelation.
Variance calculation
Using notation similar to Tiwari et al. [2], where variances and confidence intervals of age-adjusted rates were derived, let \( {\mathbf{r}} = ({\mathbf{r}}_{1} , \ldots ,{\mathbf{r}}_{\text{I}} )^{{\prime }} \) denote the vector of age-specific rates for the regions 1 through I, and each component \( \text{r}_{\text{i}} = (r_{i1} , \ldots ,r_{iJ} )^{{\prime }} \) represent the rates for the J age groups in region i. Also let \( {\bar{\mathbf{R}}} = (R_{1} , \ldots ,R_{I} ,R_{\varOmega } )^{{\prime }} \) denote the vector of age-adjusted rates for regions 1 through I and the overall age-adjusted rate R Ω . Tiwari et al. [3] derived confidence intervals (and therefore the relevant variances and covariances) for an age-adjusted rate and for the difference and ratio of two age-adjusted rates, specifically R i and R Ω as above. The calculation took into account the correlation due to the overlap between the sub-regions and the parent region. The derived 95% CI were shown to be more efficient than previously proposed methods [4]. However, both of these derivations ignored potential spatial autocorrelation among the area-specific rates. In this report, we will follow the development of Tiwari et al. [3] to derive the variance/covariance matrix for ln (R i /R Ω ), the logarithm of the rate ratio for region i relative to the overall rate, adding spatial autocorrelation as necessary.
The age-adjusted rate vector is a linear combination of the age-specific rate vector, $$ \overline{{\mathbf{R}}} = A{\mathbf{r}} $$ $$ A = \left[ {\begin{array}{*{20}c} {w^{{\prime }} } & 0 & \cdots & {} & 0 \\ 0 & {w^{{\prime }} } & 0 & \cdots & 0 \\ \vdots & {} & \ddots & \vdots & \vdots \\ {} & {} & {} & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & {w^{{\prime }} } \\ {g_{1}^{{\prime }} } & {g_{2}^{{\prime }} } & \cdots & \cdots & {g_{I}^{{\prime }} } \\ \end{array} } \right],\quad g_{i}^{{\prime }} = \left( {\frac{{n_{i1} w_{1} }}{{\xi_{1} n}},\frac{{n_{i2} w_{2} }}{{\xi_{2} n}}, \ldots ,\frac{{n_{iJ} w_{J} }}{{\xi_{J} n}}} \right) $$ and ξ j = n j /n = ∑ i n ij /∑ j ∑ i n ij . So \( Var(\overline{{\mathbf{R}}} ) = AVar({\mathbf{r}})A^{{\prime }} \); the dimensions of A are (I + 1) × (IJ) and the dimensions of the \( Var({\mathbf{r}}) \) variance/covariance matrix are (IJ) × (IJ), so that the dimensions of the \( Var(\overline{{\mathbf{R}}} ) \) matrix are (I + 1) × (I + 1). Using the Delta Method [11]: $$ Var(\ln \overline{{\mathbf{R}}} ) \approx diag\left( {\frac{1}{{R_{1}^{2} }}, \ldots ,\frac{1}{{R_{I}^{2} }},\frac{1}{{R_{\varOmega }^{2} }}} \right)Var(\overline{{\mathbf{R}}} ) = diag\left( {\frac{1}{{R_{1}^{2} }}, \ldots ,\frac{1}{{R_{I}^{2} }},\frac{1}{{R_{\varOmega }^{2} }}} \right)AVar({\mathbf{r}})A^{{\prime }} $$ Our goal is to find the variance of ln(Ri/RΩ), the logarithm of the ratio of the rate for area i to the overall rate: $$ Var(\ln (R_{i} /R_{\varOmega } )) = (\begin{array}{*{20}l} 1 \hfill & { - 1} \hfill \\ \end{array} )Var\left( {\begin{array}{*{20}c} {\ln\,\,R_{i} } \\ {\ln\,\,R_{\varOmega } } \\ \end{array} } \right)\left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) = Var(\ln R_{i} ) + Var(\ln R_{\varOmega } ) - 2Cov(\ln R_{i} ,\ln R_{\varOmega } ) $$ Therefore, we need to compute \( Var({\mathbf{r}}) \), multiply by the factors in the above equation to estimate Var(R Ω ), and then use the components of that result to compute 
the variance of the logarithm of the rate ratio. Recall that age-place-specific rates \( {\mathbf{r}} = (r_{11} ,r_{12} , \ldots ,r_{1J} ,r_{21} , \ldots ,r_{2J} , \ldots ,r_{I1} , \ldots ,r_{IJ} )^{{\prime }} \). Assuming that D ij are spatially dependent Poisson random variables with means n ij λ ij , we can write the variance of the age-place-specific rates as a matrix where the diagonal represents the variance of independent rates plus a matrix of off-diagonal terms representing the spatial autocorrelation. That is, $$ \text{var} (r_{ij} ) = \text{var} \left( {\frac{{D_{ij} }}{{n{}_{ij}}}} \right) = \frac{{\text{var} (D_{ij} )}}{{n_{ij}^{2} }} = \frac{{\lambda_{ij} }}{{n{}_{ij}}},\;{\text{and}}\;{\text{so}} $$ $$ Var({\mathbf{r}}) = diag\left( {\frac{{\lambda_{11} }}{{n_{11} }}, \ldots ,\frac{{\lambda_{1J} }}{{n_{1J} }}, \ldots ,\frac{{\lambda_{I1} }}{{n_{I1} }}, \ldots ,\frac{{\lambda_{IJ} }}{{n_{IJ} }}} \right) + \text{cov} \left( {\frac{{D_{ij} }}{{n{}_{ij}}},\frac{{D_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} \right) $$ where (i, j) ≠ (i′, j′). The Tiwari method assumed independence of D ij ′s across both age and place, so that the 2nd (covariance) term in \( Var({\mathbf{r}}) \) above was a matrix of all zeroes. We will assume that the risk of disease is independent across age groups but the risk can be correlated among nearby places because of shared risk factors. That is, we assume independence across age but allow the possibility of spatial autocorrelation across places. We will assume that the structure of spatial autocorrelation in our data follows the exponential form. 
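Under these assumptions, Var(r) is a diagonal Poisson piece plus spatial off-diagonal blocks. A sketch of its assembly (our own helper, anticipating the exponential correlogram spelled out next; rates are per-person stand-ins for λ ij , and ρ is taken as constant across age groups within a region pair):

```python
import numpy as np

def var_r_matrix(rates, pop, dist, c0, ce, ae):
    """(IJ x IJ) covariance of the stacked age-place-specific rates:
    diagonal lambda_ij/n_ij plus exponential-correlogram cross-region blocks.

    rates, pop: (I, J) arrays of r_ij (per person) and n_ij;
    dist: (I, I) matrix of centroid distances h_ii'."""
    lam, n = np.asarray(rates, float), np.asarray(pop, float)
    I, J = lam.shape
    sd = np.sqrt(lam / n).ravel()                     # sqrt(Var(r_ij))
    rho = (ce / (c0 + ce)) * np.exp(-np.asarray(dist, float) / ae)
    np.fill_diagonal(rho, 0.0)                        # within-region blocks are 0
    C = np.kron(rho, np.ones((J, J))) * np.outer(sd, sd)
    return np.diag((lam / n).ravel()) + C
```

The Tiwari method corresponds to C = 0, i.e., keeping only the diagonal part.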
Therefore \( \text{cov} \left( {\frac{{D_{ij} }}{{n{}_{ij}}},\frac{{D_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} \right) = \rho_{{ii^{{\prime }} }} \sqrt {\text{var} \left( {\frac{{D_{ij} }}{{n{}_{ij}}}} \right)\text{var} \left( {\frac{{D_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} \right)} = \rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{ij} }}{{n{}_{ij}}}\frac{{\lambda_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} \) where \( \rho_{{ii^{{\prime }} }} \), according to formula (6), represents the spatial autocorrelation between areas i and i′ with the form \( \rho_{{ii^{{\prime }} }} = \frac{{c_{e} }}{{c_{0} + c_{e} }}\exp ( - h_{{ii^{{\prime }} }} /a_{e} ) \). Here \( h_{{ii^{{\prime }} }} \) is the distance between the centroids of areas i and i′. In the case of positive spatial autocorrelation between two rates, \( Var({\mathbf{r}}) \) as calculated above will be the sum of two positive components. Therefore \( Var({\mathbf{r}}) \) will be larger than when assuming geographic independence, i.e., no spatial autocorrelation, as in the Tiwari method. Define an (IJ) × (IJ) matrix \( {\mathbf{C}} \) as $$ {\mathbf{C}} = \left[ {\begin{array}{*{20}c} 0 & {C_{12} } & \cdots & {C_{1I} } \\ {C_{12}^{{\prime }} } & 0 & \cdots & {C_{2I} } \\ \cdots & \cdots & \cdots & \cdots \\ {C_{1I}^{{\prime }} } & {C_{2I}^{{\prime }} } & \cdots & 0 \\ \end{array} } \right] $$ where \( C_{{ii^{{\prime }} }} = \frac{{c_{e} }}{{c_{0} + c_{e} }}\exp ( - h_{{ii^{{\prime }} }} /a_{e} )\sqrt {\frac{{\lambda_{ij} }}{{n{}_{ij}}}\frac{{\lambda_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} \) for i ≠ i′, and C ii = 0, both J × J matrices where the rows are indexed by j and the columns by j′.
So $$ Var(\ln \overline{{\mathbf{R}}} ) = diag\left( {\frac{1}{{R_{1}^{2} }}, \ldots ,\frac{1}{{R_{I}^{2} }},\frac{1}{{R_{\varOmega }^{2} }}} \right)\left\{ {A\left( {diag\left( {\frac{{\lambda_{11} }}{{n_{11} }}, \ldots ,\frac{{\lambda_{1J} }}{{n_{1J} }}, \ldots ,\frac{{\lambda_{I1} }}{{n_{I1} }}, \ldots \frac{{\lambda_{IJ} }}{{n_{IJ} }}} \right)} \right)A^{{\prime }} + A{\mathbf{C}}A^{{\prime }} } \right\} $$ where C is the block matrix defined above. Multiplying the first term components yields $$ diag\left( {\frac{1}{{R_{1}^{2} }}\sum\limits_{j} {\frac{{\lambda_{1j} w_{j}^{2} }}{{n_{1j} }},\frac{1}{{R_{2}^{2} }}\sum\limits_{j} {\frac{{\lambda_{2j} w_{j}^{2} }}{{n_{2j} }},} \ldots ,\frac{1}{{R_{I}^{2} }}\sum\limits_{j} {\frac{{\lambda_{Ij} w_{j}^{2} }}{{n_{Ij} }},} \frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{i} {\sum\limits_{j} {\frac{{n_{ij} \lambda_{ij} w_{j}^{2} }}{{n^{2} \xi_{j}^{2} }}} } } } \right) $$ This matches the result of Tiwari et al. [2], Appendix A, for spatially independent places. Substituting for ξ j , the final term of this result can be rewritten as $$ \frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{i} {\sum\limits_{j} {\frac{{n_{ij} \lambda_{ij} w_{j}^{2} }}{{n^{2} \xi_{j}^{2} }} = \frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{i} {\sum\limits_{j} {\frac{{n_{ij} \lambda_{ij} w_{j}^{2} }}{{n_{j}^{2} }}} } } } $$ Multiplying the components of the second term A C A′ results in a square matrix of dimension I + 1 that provides the additional variation and correlation due to spatial autocorrelation.
Components of this matrix are $$ \left\{ {\begin{array}{*{20}l} 0 \hfill & {for\;(i,i),\;i = 1,2, \ldots ,I} \hfill \\ {\frac{1}{{R_{i}^{2} }}\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {w_{j} w_{{j^{{\prime }} }} \rho_{ii'} \sqrt {\frac{{\lambda_{{i^{{\prime }} j}} }}{{n{}_{{i^{{\prime }} j}}}}\frac{{\lambda_{{ij^{{\prime }} }} }}{{n{}_{{ij^{{\prime }} }}}}} } } } \hfill & {for\;(i,i^{{\prime }} \ne i),\;i = 1,2, \ldots ,I} \hfill \\ {\frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{{i^{{\prime }} \ne i}} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{{i^{{\prime }} j}} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{{i^{{\prime }} j}} }}{{n{}_{{i^{{\prime }} j}}}}\frac{{\lambda_{{ij^{{\prime }} }} }}{{n{}_{{ij^{{\prime }} }}}}} } } } } \hfill & {for\;(I + 1,i^{{\prime }} ),\;i^{{\prime }} = 1,2, \ldots ,I} \hfill \\ {\frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{{i^{{\prime }} }} {\sum\limits_{{i \ne i^{{\prime }} }} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{ij} n_{{i^{{\prime }} j^{{\prime }} }} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} n_{{j^{{\prime }} }} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{ij} }}{{n{}_{ij}}}\frac{{\lambda_{{i^{{\prime }} j^{{\prime }} }} }}{{n{}_{{i^{{\prime }} j^{{\prime }} }}}}} } } } } } \hfill &\quad {for\;(I + 1,I + 1)} \hfill \\ \end{array} } \right. 
$$ Combining relevant terms from the above results, we see that $$ \begin{aligned} & Var(\ln (R_{i} /R_{\varOmega } )) = Var(\ln R_{i} ) + Var(\ln R_{\varOmega } ) - 2\text{cov} (\ln R_{i} ,\ln R_{\varOmega } ) \\ & \quad = \frac{{Var(R_{i} )}}{{R_{i}^{2} }} + \frac{{Var(R_{\varOmega } )}}{{R_{\varOmega }^{2} }} - 2\frac{{\text{cov} (R_{i} ,R_{\varOmega } )}}{{R_{i} R_{\varOmega } }} \\ & \quad = \frac{1}{{R_{i}^{2} }}\sum\limits_{j} {\frac{{\lambda_{ij} w_{j}^{2} }}{{n_{ij} }}} + \frac{1}{{R_{\varOmega }^{2} }}\left( {\sum\limits_{i} {\sum\limits_{j} {\frac{{n_{ij} \lambda_{ij} w_{j}^{2} }}{{n_{j}^{2} }} + \sum\limits_{{i^{{\prime }} }} {\sum\limits_{{i \ne i^{{\prime }} }} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{ij} n_{{i^{{\prime }} j^{{\prime }} }} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} n_{{j^{{\prime }} }} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{ij} \lambda_{{i^{{\prime }} j^{{\prime }} }} }}{{n_{ij} n_{{i^{{\prime }} j^{{\prime }} }} }}} } } } } } } } \right) \\ & \quad \quad - 2\frac{1}{{R_{\varOmega } R_{i} }}\left( {\sum\limits_{j} {\frac{{w_{j}^{2} \lambda_{ij} }}{{n_{j} }} + \sum\limits_{{i \ne i^{{\prime }} }} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{{i^{{\prime }} j}} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{{i^{{\prime }} j}} \lambda_{{ij^{{\prime }} }} }}{{n_{{i^{{\prime }} j}} n_{{ij^{{\prime }} }} }}} } } } } } \right) \\ \end{aligned} $$ Formula (7) shows that the variance of a rate ratio comprises three components. If there were no correlation at all between the area i and standard rates, i.e., area i is not a sub-region of the standard and there is no spatial autocorrelation among the area rates, then the variance would simply be the first two terms in Eq.
(7), \( \frac{1}{{R_{i}^{2} }}\sum\nolimits_{j} {\frac{{\lambda_{ij} w_{j}^{2} }}{{n_{ij} }}} + \frac{1}{{R_{\varOmega }^{2} }}\sum\nolimits_{i} {\sum\nolimits_{j} {\frac{{n_{ij} \lambda_{ij} w_{j}^{2} }}{{n_{j}^{2} }}} } \), representing the case of no correlation. The overlap between region i and the overall region reduces the variance by \( 2\frac{1}{{R_{\varOmega } R_{i} }}\sum\nolimits_{j} {\frac{{w_{j}^{2} \lambda_{ij} }}{{n_{j} }}} \), the fourth term of Eq. (7), representing the case of area overlap. The additional variance due to spatial autocorrelation is $$ \frac{1}{{R_{\varOmega }^{2} }}\sum\limits_{{i^{{\prime }} }} {\sum\limits_{{i \ne i^{{\prime }} }} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{ij} n_{{i^{{\prime }} j^{{\prime }} }} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} n_{{j^{{\prime }} }} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{ij} \lambda_{{i^{{\prime }} j^{{\prime }} }} }}{{n_{ij} n_{{i^{{\prime }} j^{{\prime }} }} }}} } } } } - 2\frac{1}{{R_{\varOmega } R_{i} }}\sum\limits_{{i \ne i^{{\prime }} }} {\sum\limits_{{j^{{\prime }} }} {\sum\limits_{j} {\frac{{n_{{i^{{\prime }} j}} w_{j} w_{{j^{{\prime }} }} }}{{n_{j} }}\rho_{{ii^{{\prime }} }} \sqrt {\frac{{\lambda_{{i^{{\prime }} j}} \lambda_{{ij^{{\prime }} }} }}{{n_{{i^{{\prime }} j}} n_{{ij^{{\prime }} }} }}} } } } $$ the third and fifth terms of Eq. (7). The first component in formula (8) is the additional variance in Var(ln R Ω ) and the second component is the additional covariance in Cov(ln R i , ln R Ω ) due to spatial autocorrelation. In most cases, adding spatial autocorrelation makes the additional variance (8) a negative value since the term representing −2 times the covariance dominates the sum of the two terms.
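A direct implementation of formula (7), term by term, may make the three components concrete (a sketch with our own names; rates are per-person and stand in for λ ij , and ρ is the I × I correlogram matrix, set to zero for the non-spatial method):

```python
import numpy as np

def var_log_rate_ratio(i, rates, pop, w, rho):
    """Var(ln(R_i / R_Omega)) per formula (7): no-correlation terms (t1, t2),
    spatial terms (t3, t5), and the overlap term (t4).

    rates: (I, J) per-person rates r_ij standing in for lambda_ij;
    pop: (I, J) populations n_ij; w: length-J standard weights;
    rho: (I, I) spatial correlogram matrix (zeros -> non-spatial method)."""
    lam, n = np.asarray(rates, float), np.asarray(pop, float)
    w = np.asarray(w, float)
    nj = n.sum(axis=0)                                   # n_j
    Ri = float(w @ lam[i])                               # adjusted rate, region i
    Rom = float(w @ ((n * lam).sum(axis=0) / nj))        # overall adjusted rate
    rho = np.asarray(rho, float).copy()
    np.fill_diagonal(rho, 0.0)
    s = np.sqrt(lam / n)                                 # sqrt(lambda_ij / n_ij)
    v = ((n / nj) * w * s).sum(axis=1)                   # sum_j n_aj w_j s_aj / n_j
    t1 = float(np.sum(lam[i] * w**2 / n[i])) / Ri**2
    t2 = float(np.sum(n * lam * w**2 / nj**2)) / Rom**2
    t3 = float(v @ rho @ v) / Rom**2                     # spatial, Var(ln R_Omega)
    t4 = 2.0 * float(np.sum(w**2 * lam[i] / nj)) / (Ri * Rom)      # overlap
    t5 = 2.0 * float(rho[i] @ v) * float(w @ s[i]) / (Ri * Rom)    # spatial cov
    return t1 + t2 + t3 - t4 - t5
```

With ρ = 0 the spatial terms t3 and t5 vanish and only the no-correlation and overlap components remain; a normal-approximation 95% CI for the ratio then follows by exponentiating ln(R i /R Ω ) ± 1.96 times the square root of this variance.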
When a sub-region accounts for a large proportion of the population in the parent region, the correlation between the sub-region and parent region is large because of a large overlap in the population, and so the additional covariance due to spatial correlation is relatively small. In that case, the first term in formula (8) dominates the overall value. Thus, although adding spatial autocorrelation will usually reduce the total variance (7), a high correlation between some sub-regions and the overall region can result in a larger variance for the rate ratio. The full expression in formula (7) is referred to as the spatial method; subtracting the extra variance (8) due to spatial autocorrelation gives the non-spatial version of the variance of the logarithm of the rate ratio. To transform back to rate ratios, we apply the delta method again to get $$ Var(R_{i} /R_{\varOmega } ) = (R_{i} /R_{\varOmega } )^{2} Var[\ln (R_{i} /R_{\varOmega } )] $$ All age-place-specific rate (λ ij ) terms in the variance calculation will be approximated by the observed rates R ij . To explore the impact of spatial autocorrelation on the confidence intervals of rate ratios, we will apply the methods with and without spatial autocorrelation to cancer incidence data from the SEER Program and cancer mortality data from the National Center for Health Statistics [12]; both incidence and mortality data are available via the SEER*Stat software. The current SEER*Stat software provides age-adjusted rates with associated standard errors, confidence intervals, and between-geographic-area rate ratios with associated intervals and the p value (to test whether the rate ratio equals 1). The rate ratio intervals are calculated using the Tiwari et al. [2] method (non-overlapping), but when a rate ratio between two overlapping regions is requested, an alert message pops up that reads "The algorithms for the confidence intervals and the significance testing assume non-overlapping groups.
Please use caution when interpreting these results." All age-adjusted rates in this paper are calculated from the 2000 US Standard Population [13] using the direct method, and the unit of the age-adjusted rates is per 100,000 people at risk. In addition to the real datasets, corresponding simulated rate ratios were created under the assumption that the logarithm of rate ratios comes from a spatial Gaussian process. The choices of mean vector and variance–covariance structure are described below. The simulation was implemented using the spam package [14] in R [15], with 10,000 realizations created for each simulation.
State-level data on different cancers
For rate ratios between a state and the US, the same data as in Tiwari et al. [3] were chosen to test the method developed above. Specifically, age-adjusted mortality rates for tongue, esophagus, and lung cancer of the 50 states and the District of Columbia in 2004 [16] were obtained, and the ratios between the state rates and the US rate were calculated in SEER*Stat software. These three cancer sites were selected to represent the spectrum of cancer burden, from rare (age-adjusted mortality rate of 0.62 for tongue cancer in the US) to moderate (4.35 for esophagus) to common (53.30 for lung). The data also represent different levels of spatial autocorrelation in the state-level rate ratios (Fig. 2). Tongue cancer and esophagus cancer do not show a spatial pattern, but lung cancer has a clear spatial autocorrelation pattern. To estimate the variance–covariance structure between states, the observed semivariogram values (points in Fig. 3) for lung cancer mortality were modeled and plotted against the estimated values (curve in Fig. 3) using an exponential model.
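One way to fit such a model without specialized geostatistics software: the exponential semivariogram is nonlinear only in the range parameter, so a grid search over a e with a linear least-squares solve for (c 0 , c e ) at each candidate suffices (a sketch, not the paper's actual fitting code):

```python
import numpy as np

def fit_exponential_semivariogram(h, gamma, ae_grid):
    """Least-squares fit of gamma(h) = c0 + ce*(1 - exp(-h/ae)) (model (5)).

    h, gamma: empirical semivariogram distances and values;
    ae_grid: candidate range parameters to search over.
    Returns (c0, ce, ae) = (nugget, partial sill, range parameter)."""
    h, g = np.asarray(h, float), np.asarray(gamma, float)
    best = None
    for ae in ae_grid:
        # linear in (c0, ce) once ae is fixed
        X = np.column_stack([np.ones_like(h), 1.0 - np.exp(-h / ae)])
        coef, *_ = np.linalg.lstsq(X, g, rcond=None)
        sse = float(np.sum((g - X @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], ae)
    return best[1], best[2], best[3]
```

The ratio c e /(c 0 + c e ) from the fit then gives the proportion of variation attributable to spatial autocorrelation (about 92% for the paper's lung cancer fit).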
The exponential model produced an estimate of 0.06 for the partial sill and 0.065 for the sill, which implied that about 92% of the variation in the data can be attributed to spatial autocorrelation. The empirical range estimate was 1792 km, about one-third of the maximum state-to-state distance in the US. The semivariogram plot was cut off at half of the maximum distance, about 2500 km, since values beyond the range are uninformative. The parameter estimates from these models were used for the calculation of the covariance matrix. Simulated data were created using the observed rate ratios as the mean and the estimated covariance matrix for each cancer site. Lengths of 95% CI and statistical power are compared between the Tiwari normal-based method and the non-spatial and spatial methods developed according to formulas (7)–(9). In the esophagus cancer data, the partial sill to sill ratio was 97% and the empirical range was only 98 km. The tongue cancer data had a nugget effect of 0, and the empirical range was 104 km. Both situations indicate that states further than about 100 km apart are considered essentially spatially unrelated. We varied the strength and range in simulation studies, and the resulting data did not show any detectable spatial correlation for either cancer site, most likely due to the small number of cases.
Rate ratio for tongue (top), esophagus (middle), and lung (bottom) cancer mortality in the US States, 2004
Observed and estimated semivariogram for lung cancer mortality in the US States, 2004
Since the lung cancer data show a clear spatial pattern, we focus on the results for State-to-US rate ratios of lung cancer mortality in this sub-section. Figures 4 and 5 compare the Tiwari, non-spatial, and spatial methods in terms of the length of 95% CI for rate ratios and statistical power for the simulated state-level lung cancer data. It can be seen that the non-spatial method is very close to the Tiwari method in both plots.
The spatial method is better than the Tiwari or non-spatial methods in that it provides shorter 95% CIs and hence more precise estimates, as well as higher statistical power. California is an anomaly in that adding correlation from both sources increases its variance estimate and resulting confidence interval length (see Fig. 6). The population in California accounts for about 12% of the US population, so the correlation between the state and US rates due to the overlap is much larger than the spatial autocorrelation. However, due to the large population size, the interval estimates for the rates (and rate ratios) were already very accurate and the statistical power was high, so the revised interval estimates and power were virtually unchanged.
Length of 95% CI of State-to-US rate ratios, non-spatial and spatial method versus Tiwari method for simulated state-level lung cancer mortality data in the US
Statistical power of non-spatial and spatial method versus Tiwari method for simulated state-level lung cancer mortality data in the US
Linked micromap of percentage of reduction in the length of the 95% confidence intervals from the Tiwari method to the spatial method for lung cancer mortality rate ratios
A linked micromap plot [17] reveals the spatial pattern and associations between multiple variables simultaneously. To explore factors that contribute to the higher precision (and shorter confidence intervals) associated with the spatial method, we used a linked micromap plot (Fig. 6) to show the percent of length reduction in the 95% CI of lung cancer rate ratios from the Tiwari method to the spatial method in the US, expressed as percentages of the length of the Tiwari 95% CI for each state.
The panels in the plot (from left to right) are the maps of states with the highest to lowest % Reduction (from top to bottom row), the state names, the values of the % Reduction, the population size in each state, and the estimated rate ratio with its 95% CI using the spatial method. The region with the highest % Reduction is in the Midwest, and expands to the east, west, and south, and finally to New England, Florida, and the Pacific West. Except for the few states with very large populations (e.g., New York, Texas, Florida, and California), there is a positive correlation between the population size and the % Reduction. In other words, the larger a state's population, the larger the benefit it gains from the spatial method. Many of the larger states have such precise rate ratio estimates that the CI line is completely covered by the estimate's dot. The only state that has a larger confidence interval using the spatial method is California, because the state's population is a relatively large proportion of the US total and hence the overlap is large, as explained above. By adding the spatial correlation, the variance of the logarithm of the US rate, \( \text{var} (\ln (R)) \), increases from 6.4E−6 to 1.0E−4, roughly a 15-fold change. For the esophagus and tongue cancer data, spatial autocorrelation is weak and the advantage of the spatial method is minimal. The results of the three measures are very close across the Tiwari, non-spatial, and spatial methods for esophagus and tongue cancer (data not shown).
State-level data with varying autocorrelation strength and scale
To further explore the impact of spatial autocorrelation on the confidence intervals of rate ratios, we applied the competing methods to a set of simulated state-level data. The observed rate ratios of 2004 lung cancer mortality were taken as the mean, and the variance–covariance matrix was set with a variety of autocorrelation strengths and scales.
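The simulation itself was done with the spam package in R; an equivalent numpy sketch (our own stand-in, with an illustrative seed) draws the log rate ratios from a multivariate normal with exponential covariance:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def simulate_log_rate_ratios(mean, dist, c0, ce, ae, n_sims=10_000):
    """Draw log rate ratios from a spatial Gaussian process.

    mean: observed log rate ratios (length I); dist: (I, I) centroid
    distances; the covariance has sill c0 + ce on the diagonal and
    ce * exp(-h_ii'/ae) off the diagonal. Returns an (n_sims, I) array."""
    dist = np.asarray(dist, float)
    cov = ce * np.exp(-dist / ae)
    np.fill_diagonal(cov, c0 + ce)          # variance = sill
    return rng.multivariate_normal(np.asarray(mean, float), cov, size=n_sims)
```

Varying c e /(c 0 + c e ) over {0.1, 0.5, 0.9} and a e over the three scales reproduces the paper's nine simulation settings.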
As shown in formula (6), for an exponential semivariogram model, spatial autocorrelation depends on the proportion of partial sill to sill, \( \frac{c_e}{c_0 + c_e} \), as well as the range, \( a_e \). The partial-sill-to-sill proportion ranges between 0 and 1, with a higher value representing stronger spatial autocorrelation. The range is the distance between two regions beyond which the observations are considered spatially uncorrelated. Among the 50 US states and the District of Columbia, pairwise distances range from a minimum of 25.7 km (between the District of Columbia and Maryland) through a median of 1608.0 km (between Florida and Oklahoma) to a maximum of 4984.0 km (between Maine and Hawaii). We set the range \( a_e \) in formula (6) at three different values, 500, 1700, and 4000 km, to represent local, regional, and global scales of spatial autocorrelation. The partial-sill-to-sill proportion, \( \frac{c_e}{c_0 + c_e} \), was set at 0.1, 0.5, and 0.9, to represent weak, moderate, and strong spatial autocorrelation. A total of nine simulated datasets were created, and the percent reduction in CI length from the non-spatial method to the spatial method was calculated for each state and averaged across the whole US. Since the non-spatial method turns out to be very close to the Tiwari method, this section compares only the spatial method to the non-spatial method. Table 1 presents the percent of length reduction of the 95% CI from the non-spatial method to the spatial method, averaged across the 50 US states and the District of Columbia, for varying scales and strengths of spatial autocorrelation. At the local scale, when the spatial autocorrelation strength increases from weak to strong, the spatial method on average reduces the length of the 95% CI by 0.48–4.5%, which is minimal. At the regional scale, the range of the semivariogram is 1700 km, which is about the average pairwise distance between US states.
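Formula (6) itself is not reproduced in this excerpt; assuming the standard exponential-correlation form implied by such a semivariogram, \( \rho(d) = \frac{c_e}{c_0 + c_e}\, e^{-d/a_e} \), the pairwise correlation at any distance can be sketched as below. The moderate-strength, regional-scale values reproduce the (min, median, max) correlation coefficients quoted in Table 1:

```python
import math

def exp_correlation(d_km, prop, range_km):
    """Correlation between two regions a distance d_km apart under an
    exponential semivariogram: rho(d) = (c_e / (c_0 + c_e)) * exp(-d / a_e).

    prop is the partial-sill-to-sill proportion c_e / (c_0 + c_e);
    range_km is the range parameter a_e.
    """
    return prop * math.exp(-d_km / range_km)

# The three benchmark distances quoted in the text (km)
dists = {"Maine-Hawaii": 4984.0, "Florida-Oklahoma": 1608.0, "DC-Maryland": 25.7}

# Moderate strength (0.5) at the regional scale (range 1700 km)
for pair, d in dists.items():
    print(pair, round(exp_correlation(d, 0.5, 1700.0), 3))
# -> Maine-Hawaii 0.027, Florida-Oklahoma 0.194, DC-Maryland 0.492
```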
In other words, at the regional spatial autocorrelation scale, every state is related to about half of all other states. When the strength of the correlation increases from weak to strong, the spatial method reduces the length of the 95% CI by 1.7–17.7%. If the autocorrelation is on a global scale, where every state is related to all other states, the spatial method reduces the length of the 95% CI by 2.7–29.3%.

Table 1. Percent of length reduction of the 95% CI for rate ratios, from the non-spatial method to the spatial method, with varying spatial autocorrelation scale (measured by the range of the semivariogram) and strength (measured by the partial-sill-to-sill proportion of the semivariogram):

Strength \ Range (km)   500 (local)         1700 (regional)         4000 (global)
0.1 (weak)              (0, 0.004, 0.095)   (0.005, 0.039, 0.098)
0.5 (moderate)          (0, 0.02, 0.47)     (0.027, 0.19, 0.49)     (0.14, 0.33, 0.50)
0.9 (strong)            (0, 0.036, 0.85)

Numbers in parentheses are the minimum (between Maine and Hawaii), median (between Florida and Oklahoma), and maximum (between Maryland and the District of Columbia) pairwise correlation coefficients between regions.

County-level data

The state of Kentucky has the highest cancer rates, for both incidence and mortality, in the US. Cigarette smoking prevalence is high and the cancer screening rate is low, especially in the southeast area of the state, which is part of the Central Appalachia region [18]. We calculated the age-adjusted incidence rates of lung cancer for males for the 5-year period between 2006 and 2010. The state rate is 129.94 per 100,000, and the rates vary considerably among the 120 counties, from 57.01 to 207.21, resulting in county-to-state rate ratios between 0.45 and 1.63 (Fig. 7). There is also a spatial pattern, with higher rates in the southeast mountain area and lower rates in the north and central areas.
County-to-state ratio of age-adjusted rates for male lung cancer incidence in Kentucky, 2006–2010

A simulation study was performed based on the Kentucky county-level male lung cancer incidence rates. The simulation study serves two purposes. First, we would like to establish the relationship among three interwoven factors: county-to-state rate ratios, county population size, and statistical power in testing the hypothesis that the rate ratio equals 1. Second, we would like to compare the non-spatial and spatial methods. To serve the first purpose, we simulated a set of county rates (according to the population size of each county in Kentucky) with county-to-state rate ratios ranging between 0.2 and 2.0, in increments of 0.05. This range is broader than would occur in a realistic situation and should provide a complete picture of the multi-dimensional relationship between rates, rate ratios, population size, and power. However, this is a hypothetical scenario, since in practice all counties would never share the same fixed rate ratio, so the analysis in this scenario only helps with understanding the impact of rate ratio and population size on statistical power. It does not estimate how much improvement the spatial method will provide in a real-world situation. To serve the second purpose, we created the simulated data assuming that the hypothesized mean rate ratios and the variance–covariance structure were the same as the observed values. We then compared the statistical power and the coverage probability of the 95% CI between the non-spatial and spatial methods. Table 2 lists the statistical power and coverage probability of the non-spatial and spatial methods for the simulated Kentucky incidence data at a few hypothetical fixed rate ratios. The power of both the non-spatial and spatial methods decreases as the rate ratio approaches 1.0 from either side, and the power of the spatial method is consistently higher than that of the non-spatial method.
The power increase is largest when the county-to-state rate ratio is close to 1.0, and it diminishes as the rate ratio moves further away from 1.0. For cancer data with a similar level of rates, the county-to-state rate ratio needs to be smaller than 0.7 or greater than 1.4 to reach a statistical power of 80% (data not shown). The coverage probability of the spatial method is very close to the nominal value of 95% and is consistently better than that of the non-spatial method, which is around 97%, across all rate ratio values.

Table 2. Power and coverage probability of the non-spatial and spatial methods in the simulation study for Kentucky county-level male lung cancer incidence data (non-spatial (%) versus spatial (%), simulated under hypothetical fixed rate ratios)

Figure 8 compares the statistical power of the non-spatial and spatial methods for the same simulated data, assuming the hypothesized mean rate ratios and the variance–covariance structure are the same as the observed values. It is clear that the spatial method has higher power than the non-spatial method in all but Jefferson County, which accounts for about one-fifth of the population of Kentucky. The coverage probability of the spatial method is consistently better than that of the non-spatial method (not shown). Figure 9 shows the relationship of statistical power to rate ratio and population size in the simulated situation for the spatial method. Power is smallest when the rate ratio is close to 1.0 and increases as the rate ratio moves further from 1.0. The counties with high power (greater than 0.8) have large populations (>70,000) and rate ratios that are either <0.8 or >1.2. Jefferson County's population is the highest in the state, with a 5-year total of 1.76 million, which yields high statistical power to detect its rate ratio of 0.87 as significantly different from 1.0.
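The joint effect of rate ratio and population size on power can be illustrated with a simple normal approximation on the log scale. This is only a sketch: the paper's full variance model (overlap plus spatial terms) is not reproduced here, and the standard errors below are hypothetical stand-ins for the precision that a larger population brings:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sided(rate_ratio, se_log_rr, z_crit=1.959964):
    """Approximate power of the two-sided 5% test of H0: rate ratio = 1,
    treating ln(rate ratio) as normal with standard error se_log_rr.
    """
    shift = abs(math.log(rate_ratio)) / se_log_rr
    return norm_cdf(-z_crit + shift) + norm_cdf(-z_crit - shift)

# Under H0 (rate ratio = 1) power collapses to the significance level
print(round(power_two_sided(1.0, 0.10), 3))  # -> 0.05
# Larger effects, and smaller SEs (bigger populations), both raise power
print(power_two_sided(1.4, 0.10) > power_two_sided(1.2, 0.10))  # -> True
print(power_two_sided(1.2, 0.05) > power_two_sided(1.2, 0.10))  # -> True
```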
Two smaller counties (Monroe County, with a 5-year population of 27,000, and Breathitt County, with a 5-year population of 35,000) have rate ratios close to 1.4; their high rate ratios, rather than their population sizes, drive their statistical power higher.

Statistical power of the spatial method versus the non-spatial method for simulated county-level male lung cancer incidence data in Kentucky

Statistical power of the spatial method versus rate ratio (a) or population size (b) for simulated county-level male lung cancer incidence data in Kentucky

Presenting confidence intervals along with point estimates provides a measure of the uncertainty of the estimate. Failure to include spatial autocorrelation when it is present can lead to errors in the estimates and subsequent inferential errors. We have extended a previous method of calculating confidence intervals for the rate ratio of a sub-region to the full region to include spatial autocorrelation. Tiwari et al. [3] proposed a method to compute a 95% confidence interval around the ratio of an area-specific rate to the rate for a larger area that includes that area. This method accounts for the correlation between the single-area rate and that of the larger area, but does not account for likely spatial autocorrelation between the rates of neighboring areas. Our method includes components of the total variance due to the standard distributional assumption, the correlation between the single-area rate and the larger-area rate, and spatial autocorrelation among the areas. The rate ratio is a common measure of the relative ranking of areas, e.g., counties within a state, and is easily computed. The earlier non-overlapping method of Tiwari et al. [2] is now implemented in SEER*Stat, the National Cancer Institute's popular software for disseminating cancer statistics. Our goals were to improve upon the Tiwari [3] method by including spatial autocorrelation and to develop a resulting method that could replace the current method in SEER*Stat.
Therefore, we used an approach similar to Tiwari's to develop our confidence interval and applied the new method to the same three cancer datasets so as to make the results most comparable. Several limitations exist in this study. First, isotropy is assumed for the spatial autocorrelation. It may be argued that direction, as well as distance between regions, will have an impact on disease rates; however, considering anisotropy would greatly increase the complexity of the model and the computation. Secondly, the exponential semivariogram model that fit our lung cancer data well may not be the best choice for every outcome variable, although experts believe that the choice of semivariogram model is less important than whether spatial autocorrelation is included at all (see p. 379 of [10]). We were limited in our ability to assess the impact of adding spatial autocorrelation to the rate ratio variance calculations for uncommon and rare cancers such as esophagus and tongue cancers. Even when we simulated data with very strong spatial autocorrelation, the resulting data did not show much of a detectable spatial pattern. This is probably due to the small number of cases or the Modifiable Areal Unit Problem (MAUP). Assessing the impact of small numbers or the MAUP on spatial autocorrelation is beyond the scope of this paper; our method was developed to account for spatial autocorrelation without regard to how it came about. Another limitation of our approach is that we have underestimated the uncertainty of the rate ratio variance by assuming no error in the covariance parameter estimates. It will be very important to explore approaches that fully account for the uncertainty in the spatial autocorrelation estimation in future work.
However, because users are typically not interested in computing a rate ratio for areas with small populations, knowing that the confidence intervals will be large and uninformative in this situation, we suspect that the uncertainty in the rate ratio estimate due to the estimation process will be small relative to the uncertainty due to unmeasured spatial autocorrelation. Therefore, we believe that, while our method is superior to the Tiwari et al. method when spatial autocorrelation is present and equivalent to it when area rates are uncorrelated, any further improvement resulting from a hierarchical (Bayesian) model that can compute the estimation-process variation across many replicates will be relatively small. We have initiated a follow-up study to confirm this belief. It is impractical to implement a Bayesian estimation method, which typically requires computation of thousands of replicates, in a server- or web-based statistical system such as SEER*Stat, and therefore it is important to verify that the method proposed here, one that can easily be implemented, has captured nearly all of the variation in the rate ratio estimate. We have developed a method that takes into account spatial autocorrelation, along with area overlap, in calculating the variances and confidence intervals of rate ratios between a sub-region and the total region. Our variance comprises three components, representing no correlation, area overlap, and spatial autocorrelation, respectively. We have shown that calculating the variance of the rate ratio while allowing for spatial autocorrelation among the area-specific rates can lead to substantial improvements over the Tiwari method. For US state-level cancer data, confidence intervals were shorter and power was greater than with the Tiwari method. The Tiwari method accounted for correlation due to area overlap but not for spatial autocorrelation among the areas.
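As a sketch of the three-component structure described above: for the log of the ratio of a sub-region rate \( r_i \) to the total-region rate \( R \), the identity \( \text{var}(\ln r_i - \ln R) = \text{var}(\ln r_i) + \text{var}(\ln R) - 2\,\text{cov}(\ln r_i, \ln R) \) splits into an independence term plus covariance adjustments for area overlap and for spatial autocorrelation. The function below is a hypothetical illustration of that decomposition, not the paper's exact formula, and all input values are made up:

```python
def var_log_rate_ratio(var_log_ri, var_log_R, cov_overlap, cov_spatial):
    """Sketch of a three-component variance for ln(rate ratio) = ln(r_i) - ln(R):
    an independence term, plus an adjustment because the sub-region is part of
    the total (overlap), plus an adjustment for spatial autocorrelation.
    All arguments are variances/covariances on the log scale.
    """
    independent = var_log_ri + var_log_R  # no-correlation component
    overlap = -2.0 * cov_overlap          # sub-region contributes to the total rate
    spatial = -2.0 * cov_spatial          # autocorrelation with neighbouring areas
    return independent + overlap + spatial

# With no correlation of either kind, the naive independent sum is recovered
print(round(var_log_rate_ratio(0.004, 0.0001, 0.0, 0.0), 4))  # -> 0.0041
```

Positive covariance from either source shrinks the ratio variance, which is why the method tightens the intervals when spatial autocorrelation is present.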
Application to simulated state-level data showed that the advantage of the proposed method is directly related to the strength and scale of the spatial autocorrelation. Improved results were also seen at the county level using simulated data based on population patterns in Kentucky counties. We did note two instances where the results were not as good as with the Tiwari method, both where one region's population constituted a large proportion of the population of the entire aggregated area (California compared to the US, and Jefferson County compared to Kentucky). In these cases, the correlation due to the population overlap was much larger than any observed spatial autocorrelation, eliminating any advantage of our method over methods that ignore this source of correlation. Because of the large population size in these regions, though, the variance and interval estimates were already precise, and adding spatial autocorrelation does not practically impact the interval estimates or the statistical power. One approach to calculating confidence intervals for the rate ratio might be to assess the degree and scale of spatial autocorrelation among the areas and then decide whether or not to include the spatial autocorrelation. However, these correlations are on a continuum, so it would be difficult to set a cutpoint beyond which the spatial method should be used. We have shown that our method gives the same results as the Tiwari method when there is very little spatial autocorrelation, and it does account for the overlap between a sub-region and its parent region. Thus, for little additional computational cost, we obtain an estimate equal to that currently used for SEER data if the area-specific rates are independent, but in the presence of increasing spatial autocorrelation we obtain substantial improvements over the existing method. Since the calculations are relatively easy to implement, we recommend using the new method in all cases.
LZ and LP developed the method and study design. JBP developed the computing code for the analysis and mapping. All authors read and approved the final manuscript.

The authors acknowledge colleagues Eric J. Feuer and Benmei Liu, who provided comments that improved the quality of this manuscript.

Availability of data and material
The software along with the datasets supporting the conclusions of this article are included as supporting materials.

This work was partly supported by the Statistical Methodology and Applications Branch of the National Cancer Institute through contract HHSN261201400193P.

Surveillance Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute, National Institutes of Health, 9609 Medical Center Dr., Suite 4E346, Rockville, MD 20850, USA
StatNet Consulting, LLC, Gaithersburg, MD 20882, USA

1. Tobler W. A computer movie simulating urban growth in the Detroit region. Econ Geogr. 1970;46(2):234–40.
2. Tiwari RC, Clegg LX, Zou Z. Efficient interval estimation for age-adjusted cancer rates. Stat Methods Med Res. 2006;15(6):547–69.
3. Tiwari RC, Li Y, Zou Z. Interval estimation for ratios of correlated age-adjusted rates. J Data Sci. 2010;8:471–82.
4. Fay MP, Feuer EJ. Confidence intervals for directly standardized rates: a method based on the gamma distribution. Stat Med. 1997;16(7):791–801.
5. Kulldorff M, et al. Breast cancer clusters in the northeast United States: a geographic analysis. Am J Epidemiol. 1997;146(2):161–70.
6. Pickle L, et al. Atlas of United States mortality. Hyattsville: National Center for Health Statistics; 1996.
7. National Cancer Institute. Surveillance Epidemiology and End Results (SEER) Program. 2016. http://seer.cancer.gov/.
8. National Cancer Institute. SEER*Stat Software. 2016. http://seer.cancer.gov/seerstat/.
9. Fleiss JL, Levin B, Paik MC. Statistical methods for rates and proportions. 3rd ed. New York: Wiley; 2003.
10. Waller LA, Gotway CA. Applied spatial statistics for public health data. Hoboken: Wiley; 2004.
11. Bishop YM, Fienberg SE, Holland PW. Discrete multivariate analysis: theory and practice. Cambridge: MIT Press; 1975.
12. Centers for Disease Control and Prevention. Mortality data. http://www.cdc.gov/nchs/deaths.htm.
13. Zhu L, et al. Predicting US- and state-level cancer counts for the current calendar year. Cancer. 2012;118(4):1100–9.
14. Furrer R, Sain SR. spam: a sparse matrix R package with emphasis on MCMC methods for Gaussian Markov random fields. J Stat Softw. 2010;36(10):1–25.
15. R Development Core Team. The R Project for statistical computing. 2016. https://www.r-project.org/.
16. Surveillance Epidemiology and End Results (SEER) Program. Mortality—All COD, aggregated with state, total U.S. (1969–2004). National Center for Health Statistics (NCHS), editor. Bethesda: National Cancer Institute SEER Program; 2007.
17. Pickle LW, Pearson JB, Carr DB. micromapST: exploring and communicating geospatial patterns in U.S. state data. J Stat Softw. 2015;63(2):1–25.
18. Christian WJ, et al. Exploring geographic variation in lung cancer incidence in Kentucky using a spatial scan statistic: elevated risk in the Appalachian coal-mining region. Public Health Rep. 2011;126(6):789–96.
19. Samui P, Sitharam TG. Application of geostatistical models for estimating spatial variability of rock depth. Engineering. 2011;3:886–94.
Connecting Disjoint Nodes Through a UAV-Based Wireless Network for Bridging Communication Using IEEE 802.11 Protocols
Hanif Ullah (ORCID: 0000-0001-8675-8346), Mamun Abu-Tair, Sally McClean, Paddy Nixon, Gerard Parr & Chunbo Luo

Cooperative aerial wireless networks composed of small unmanned aerial vehicles (UAVs) are easy and fast to deploy and provide on-the-fly communication facilities in situations where part of the communication infrastructure is destroyed and survivors need to be rescued on an emergency basis. In this article, we study such a cooperative aerial UAV-based wireless network to connect two participating stations. The proposed method provides on-the-fly communication facilities to connect the two ground stations through a wireless access point (AP) mounted on a UAV using IEEE 802.11a/b/g/n. We conducted our experiments both indoors and outdoors to investigate the performance of the IEEE 802.11 protocol stack, including a/b/g/n. We envisioned two different cases: line of sight (LoS) and non-line of sight (NLoS). For LoS, we consider three different scenarios with respect to UAV altitude and performed the experiments at different altitudes to measure the performance and applicability of the proposed system in catastrophic situations and healthcare applications. Similarly, for NLoS, we performed a single set of experiments in an indoor environment. Based on our observations from the experiments, 802.11n at 2.4 GHz outperforms the other IEEE protocols in terms of data rate, followed by 802.11n in the 5 GHz band. We also conclude that 802.11n is the most suitable protocol for use in disaster situations such as rescue operations and in healthcare applications.

The use of unmanned aerial vehicles (UAVs) as wireless communication platforms for facilitating communication on the fly has gained significant importance recently [1–4].
Vehicles that provide such facilities are vital in catastrophic situations, helping rescue teams respond on an emergency basis to reduce casualties and avoid further destruction in the affected area. The 2005 earthquake that hit northern Pakistan and Pakistani-administered Kashmir killed more than 80,000 people, while more than four million were left homeless. Similarly, the 2010 flood in Pakistan affected almost twenty million people and destroyed almost the entire communication infrastructure in all parts of the country. Providing timely rescue services in such disasters may help to reduce casualties and may save the lives of many people. Cooperative wireless networks composed of small UAVs are cost-effective and easy to deploy and can facilitate communication on the go through self-managed ad hoc Wi-Fi networks to help rescue teams in tragic events [5]. Such networks can also be deployed for border surveillance and patrolling [6, 7], wildfire monitoring [8, 9], and extending the coverage of ad hoc networks by using the UAV as a relay [10–12], with many other applications listed in [13, 14]. On top of that, unmanned aerial base stations (UABSs) are used in natural disasters for public safety communication to save lives, property, and national infrastructure [15]. Similarly, some other application areas with the latest trends are discussed in [16–18]. For example, UAVs in 5G/Internet of Things (IoT)-enabled platforms are used for multimedia and video streaming purposes in industry-oriented applications [16]. Moreover, drones are also used in IoT-based electronic health systems to showcase their significance in the healthcare industry, with particular attention to the use of small UAVs, which can benefit IoT-based healthcare industry and applications [19–23].
Apart from the UAV application areas, security issues and challenges in IoT-based healthcare applications and environments are explored in order to protect such platforms from unauthorised access [18, 22]. Finally, the latest research problems and challenges with respect to UAV applications in wireless networks are highlighted in [24, 25] in order to update the research community on the latest trends and issues in the aforementioned areas. The main contribution of this paper is to extend the work of [26] and to measure the capabilities of the IEEE 802.11 protocol stack (a/b/g/n). In [26], we investigated only the performance of IEEE 802.11n in a UAV-based network, with a fixed distance of approximately 10 m between the AP mounted on the UAV and the antenna fixed on a USB adapter. Those experiments were performed in an outdoor environment, and different performance metrics were calculated. The main contributions of this paper are listed below: ∙ In this paper, we particularly consider a network where a single UAV bridges communication between two ground stations through an AP mounted on the UAV, using 802.11a/b/g over 2.4 GHz and 802.11n over both the 2.4 and 5 GHz bands. ∙ We consider two cases with respect to LoS and NLoS communication. ∙ For LoS communication, we analyse three different scenarios with respect to UAV height above the ground stations: in scenario 1, we calculate the data rate, signal strength, and SNR between the UAV and the ground stations at a height of 10 m. ∙ In scenario 2, we calculate the same characteristics for the communication links between the UAV and the ground stations at a height of 15 m, while in scenario 3, we repeat the same experiment at a height of 20 m to analyse the same performance metrics.
∙ The reason for such low altitudes is to provide the best communication facilities to the ground users, as we are considering our scenarios for disaster management situations, and more specifically for search and rescue operations and for providing first-aid equipment and facilities on an immediate basis in order to help the survivors and the rescue team members. Also, the limited flight time of the UAV (8 to 10 min maximum) restricts the UAV from flying at higher altitudes. ∙ Similarly, for NLoS communication, we consider a single scenario to check the performance of the IEEE 802.11 protocol stack. ∙ We conduct our experiments in both indoor and outdoor environments with a UAV, an air-lifted AP mounted on the UAV, and two ground stations. ∙ The ground stations act as a client and a server, where the client sends 10 MB of data to the server through a communication link provided by the IEEE 802.11 protocol stack. The rest of this paper is organised as follows: Section 2 describes the related work. Section 3 presents the experimental setup, including the hardware and software components used in the experiments. Section 4 presents the results and discussion, while Section 5 draws the conclusion and discusses future work. In [27], the authors proposed an aerial wireless network based on drones to cover a large area through their wireless system. Two different modes were envisioned: infrastructure mode and ad hoc operational mode. A Galileo board was configured to work as an AP and as an intermediate hop in infrastructure and ad hoc operational mode, respectively. The board was also equipped with a wireless AC 7620 card to provide support for connections up to 867 Mbps using the 802.11 protocol standards (a/b/g/n/ac).
The authors mainly concentrated on providing a theoretical overview of the UAV coverage area in an outdoor environment and on experimentally evaluating the performance of the configured board, both in the lab and in real aerial deployment, in order to study both the infrastructure and ad hoc operational modes of IEEE 802.11. The energy consumption of the Galileo board under different WiFi modes was also part of the study. Moreover, the performance of the entire system was evaluated in terms of coverage range, transmission rates, and energy efficiency [27]. Similarly, the performance of radio links between a UAV carrying a wireless radio and an AP on the ground was analysed through field experiments in [28]. The field experiments were carried out using an 802.11a wireless interface fixed on both the UAV and the AP, along with two directional antennas. A series of experiments was performed with various antenna setups to evaluate the effect of the altitude and yaw of the UAV on different performance metrics. The path loss exponent for air-to-ground links was estimated using the received signal strength (RSS) values in both open-field and campus environment scenarios. The User Datagram Protocol (UDP) throughput of air-to-ground links, along with an aerial view of the given area, was also measured using the UAV onboard cameras in the presence of high-capacity links in the downward direction. The authors concluded their experiments with respect to different antenna orientations and summarised how poor the results could be in terms of throughput and RSS if the right antenna orientation is not deployed on the UAV [28]. This work was further studied in three-dimensional space and positioning, extending the antenna evaluation to 802.11 devices on aerial nodes, in [29]. Communication issues in 3D space were handled with a proposed solution based on an 802.11 system with multiple antennas fixed on small-scale quadrocopters.
Path loss and fading features, particularly Nakagami fading, were also analysed using RSS samples of the radio channel between the UAV and the ground station through real-time experiments at 5 GHz. The authors addressed network performance issues with respect to throughput and the number of re-transmissions and concluded that a throughput of 12 Mbps could be achieved at distances on the order of 300 m [29]. Moreover, the work in [29] was further extended by introducing the concept of two-hop networks, where multiple UAVs were used to measure the performance of the proposed network in terms of throughput and link quality [30]. Three different scenarios were studied from a system architecture perspective: (i) standard one-hop communication from a UAV to a ground station, (ii) two-hop communication between a UAV and a ground station through another UAV carrying an AP, and (iii) mesh networking through the 802.11s extension with two UAVs and a ground station. Through experimental results, the authors claimed that stable throughput could be achieved in the second case, where all traffic goes through the UAV carrying the AP, and that this setup should be preferred for two-hop communication in scenarios requiring low jitter [30]. A similar study was carried out to measure the performance of 802.11a wireless links between a UAV and ground stations with various antenna orientations in [31]. The authors addressed the degradation or improvement of wireless link performance with respect to antenna type (omnidirectional or directional), position, and orientation, and ground effects such as interference from reflected signals. A series of field experiments was performed, and it was concluded that horizontal dipole antennas oriented perpendicular to the UAV flight path produce the highest throughput [31]. In [32], four different issues in multipoint-to-point UAV communication with IEEE 802.11n/ac were investigated.
Throughput results for 802.11ac were shown in a UAV setting, while it was demonstrated that 802.11a could have much higher throughput over longer ranges. Further, fairness in a multi-sender aerial network was analysed and tested in a real-world coverage scenario using two mobile UAVs sending data to a single receiver. The aim of the entire study was to address the above issues and to develop and propose a system consisting of multiple UAVs, where the ground nodes/clients and the UAVs can join the network in an ad hoc manner. High-throughput 802.11 wireless LAN technologies were implemented, and a series of experiments was performed in indoor and outdoor environments to verify the applicability and performance of the proposed multi-device, multi-sender network. The authors claimed that high throughput could be achieved in both infrastructure and mesh modes with 802.11n, while higher data rates and improved throughput could be achieved by using 802.11ac compared to 802.11n in an indoor environment. However, in the outdoor experiments, very low RSS and transmit data rates were recorded for IEEE 802.11ac [32]. Furthermore, in [33] the authors addressed the issues of wireless communication between camera-equipped UAVs in search and rescue missions. An experimental study was conducted in a real testbed based on 802.11n and XBee-PRO 802.15.4 to check the quality of aerial UAV-to-UAV links as a function of mutual distance and speed under varying context parameters. The main purpose of this study was to introduce a hybrid-network-based system architecture for bulk data transfer and to study the effect of different metrics on link quality and networking performance by conducting real-time experiments in an outdoor environment. It was found that the measured throughput of 802.11n falls far short of the theoretical maximum and also varies drastically, even at a constant distance between UAVs.
The work of [33] was further extended in [34], where the proposed system architecture based on a Wireless Local Area Network (WLAN) 802.11n and XBee-PRO 802.15.4 hybrid network for bulk data transfer was extensively explored in order to summarise the implications of embedded hardware restrictions. An analytical model was also presented to estimate the expected time for large-sized image data transfer in aerial transmission. The authors concluded that the expected quality of communication could only be guaranteed if the UAV's antenna position is perfect. The authors also indicated that new features are needed in 802.11 in order to assure high-speed and reliable communication between UAVs [34]. Similarly, in [35], a quadrotor UAV-based communication relay system was proposed to address the issues of beyond-line-of-sight (BLoS) communication and short-range communication restrictions. The communication relay system was developed and tested to verify the radio communication relayed from one quadrotor to another. The main hardware platform consisted of a ground control system (GCS) and two UAVs, named 'Mom' and 'Son', each mounted with a Pixhawk flight controller and a Raspberry Pi 2 Model B microprocessor. The communication was relayed from the Son UAV through the Mom UAV to the GCS. A software platform was also developed to facilitate the communication between the two UAVs and the GCS. A series of experiments was performed in order to check the data transmission rate and the reliability of communication in both indoor and outdoor environments [35]. The use of extra hardware may decrease the UAV flight time and could affect the entire mission. In [36], the authors addressed the issue of inter-UAV communication by developing a specified evaluation methodology.
This evaluation methodology, along with a tool developed to automate the process, was tested in a controlled testbed environment to verify the applicability of the proposed approach. The methodology consisted of two main elements, a testing tool and a data analysis tool: the testing tool automates the communication performance tests and controls the environment, while the data analysis tool analyses the data and generates the proper graphs based on certain scripts. The tool, named the Dronning tool, was developed to simplify and automate performance tests. It can run on Raspberry Pi devices as well as standard PCs, and can connect different application instances through sockets over an IEEE 802.11-based ad hoc network. The methodology was evaluated over the 2.4 GHz band using an 802.11g wireless interface by performing real tests. The authors addressed only the issue of UAV-to-UAV communication, while the communication between UAV and ground nodes was not considered, which may prevent the use of the tool in real-time applications. Furthermore, in [37], the authors performed real-time experiments to characterise air-to-ground wireless channels between UAV and ground users over a range of frequencies including 900 MHz and 1800 MHz (cellular), and 5 GHz (WiFi), with respect to LoS and NLoS scenarios. The authors also investigated the viability of using drone-based beamforming technology through IEEE 802.11-like signalling. Based on this beamforming technology, the authors concluded that the throughput can be improved by up to 73.6% and 120.1% in the LoS and NLoS scenarios, respectively. In addition to this, the emerging 5G communication technologies that are discussed in [38–40] can be utilised in the context of UAV communication, specifically in situations where ultra-reliable and ultra-low-latency communication is required.
The authors explored the key building blocks of 5G communication in the context of vehicular communication and machine-to-machine (M2M) communication. They also discussed how 5G can address the issues with the existing infrastructures and how the performance within the aforementioned domains can be improved by using 5G technology [41]. Finally, some work has also been done to address the issue of minimising the number of drones while maximising the coverage when monitoring a specific environment [42]. Also, routing protocols for wireless multimedia sensor networks and the role of IoT in different fields with respect to technological aspects are discussed in [43, 44]. To conclude, a number of shortcomings of the solutions discussed earlier in the literature have been pointed out. In some cases, the potential solutions only provide simulation-based results that may not be applicable in real-time situations, while in other cases, the claimed throughput/data rate may not be practicable in real-time scenarios and critical infrastructure development. A short summary of the related work, including key contributions, research gaps, and similarities/differences with our work, is listed in Table 1. Table 1 Summary of related work Experimental setup In this section, we consider a UAV-based wireless network that employs 802.11a/b/g at 2.4 GHz and 802.11n at 2.4 and 5 GHz to analyse typical performance metrics, namely data rate, signal strength, and SNR, for the communication links between UAV and the ground stations, as illustrated in Fig. 1. Experimentation is carried out in an outdoor environment with a single UAV acting as an AP to bridge communication between two ground stations operating as client and server, respectively. The work is based on a mathematical model presented in [45] and is an extension of the research described in [26]. A glossary of mathematical notation used in this paper is provided in Table 2.
Experimental setup for both LoS and NLoS scenarios Table 2 Mathematical terms and symbols In UAV LoS communication, the UAV will always be in direct communication with the available ground stations. In such a situation, the power received over the link between the UAV and the participating nodes can be expressed mathematically as: $$ P_{r}=\frac{P_{t} G_{t}(\theta_{t},\phi_{t})G_{r}(\theta_{r},\phi_{r})\lambda^{2}}{(4\pi d)^{2}} $$ where Pt and Gt are the power and gain of the transmitting antenna, with elevation angle θt and azimuth angle ϕt, and Gr is the gain of the receiving antenna with the corresponding angles θr and ϕr. λ is the wavelength and d is the distance between the UAV and the available ground stations. Considering λ=c/f, Eq. (2) becomes: $$ P_{r}=P_{t} G_{t}(\theta_{t},\phi_{t})G_{r}(\theta_{r},\phi_{r})\bigg(\frac{c}{4\pi df}\bigg)^{2} $$ We can also calculate the distances between the UAV and the ground stations, transmitter (Tx) and receiver (Rx), from Fig. 1 as follows. $$ d_{1}=\sqrt{b^{2}+a^{2}} $$ $$ d_{2}=\sqrt{(l-b)^{2}+a^{2}} $$ where l is the distance between the two ground stations, a is the height of the UAV, and b is the horizontal distance from the first ground station to the point directly beneath the UAV. In a real-life deployment, signal power may depend on many factors, including environmental ones such as wind, with corresponding fading. The Rayleigh distribution is often used in such cases of signal variation with fading. To model such fading, we assume that |h| represents the fading coefficient between the UAV and the ground stations, with |h|2 distributed as follows: $$ |h|^{2}\sim\frac{1}{\sigma^{2}_{\ell}}\exp\bigg(-\frac{|h|^{2}}{\sigma^{2}_{\ell}}\bigg) $$ For a fully digital signal, we must also determine the SNR between the UAV and the ground stations using the following equation.
$$ SNR=\frac{|\sqrt{P_{1}}|^{2} \ |x_{1}|^{2}}{|n|^{2}} \quad {\text{where}}\quad n\sim N\bigg(0,\sigma^{2}_{n}\bigg) $$ In bandwidth-restricted channels, we only consider the instantaneous and average capacity of the fading channel, which can be computed using the following equations. $$ C_{1}=\log(1+{SNR}_{1})=\log\bigg(1+\frac{P_{1}|h_{1}|^{2}}{(x^{2}+a^{2})\sigma^{2}_{n}}\bigg) $$ $$ \overline{C_{1}}=\int^{\infty}_{0} C_{1} p_{1}(|h_{1}|^{2})\ d|h_{1}|^{2} $$ where p1(|h1|2) is the probability density function (PDF) of |h1|2 in the case of a fading channel. Substituting the value of C1 into Eq. (8) yields: $$ \begin{aligned} \overline{C_{1}}=\int^{\infty}_{0}\log\left(1+\frac{P_{1}|h_{1}|^{2}}{(x^{2}+a^{2})\sigma^{2}_{n}}\right) \frac{1}{\sigma^{2}_{\ell}}\exp\left(-\frac{|h_{1}|^{2}}{\sigma^{2}_{\ell}}\right)\ d|h_{1}|^{2} \end{aligned} $$ In the equation above, only |h1|2 is a variable, while the remaining quantities are treated as deterministic during the integration, as described in [45], leading to the simplified closed form given below, where Ei denotes the exponential integral (the leading minus sign makes the capacity positive, since Ei of a negative argument is negative). $$ \overline{C_{1}}=-\text{Ei}\left(-\frac{(x^{2}+a^{2})\sigma^{2}_{n}}{P_{1}\sigma^{2}_{\ell}}\right) \exp\left(\frac{(x^{2}+a^{2})\sigma^{2}_{n}}{P_{1}\sigma^{2}_{\ell}}\right) $$ From Eq. (10), the UAV position can easily be related to the variables x and a, where x is the horizontal distance and a is the altitude of the UAV relative to the ground stations. Real-time experiments were carried out based on the above mathematical model using the IEEE 802.11 protocols in order to validate the use of the model in real-life situations. Our testbed consists of two ground stations, i.e. a client machine and a server machine, and both of these machines are connected to each other through an AP that is mounted on a UAV. The two machines are ≈ 15 m away from each other. Iperf version 2.0.5 is used on both machines to receive and transfer the traffic flows between the machines through the AP.
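The path-loss, geometry, and average-capacity model above can be sketched numerically. The following Python snippet is a minimal illustration only: the unit antenna gains, transmit power, and noise variance are illustrative assumptions rather than measured testbed parameters, and Eq. (9) is evaluated by simple midpoint-rule integration instead of the Ei closed form.

```python
import math

def friis_rx_power(pt_w, gt, gr, d_m, f_hz):
    """Free-space received power, Eq. (3): Pr = Pt*Gt*Gr*(c/(4*pi*d*f))^2."""
    c = 3.0e8  # speed of light, m/s
    return pt_w * gt * gr * (c / (4 * math.pi * d_m * f_hz)) ** 2

def distances(l, a, b):
    """Eqs. (4)-(5): slant distances from the UAV to the two ground stations."""
    d1 = math.sqrt(b ** 2 + a ** 2)
    d2 = math.sqrt((l - b) ** 2 + a ** 2)
    return d1, d2

def avg_capacity(p1, x, a, sigma_n2, sigma_l2, steps=20000, h_max=50.0):
    """Eq. (9): average capacity over exponentially distributed |h1|^2,
    evaluated by midpoint-rule numerical integration."""
    gamma = p1 / ((x ** 2 + a ** 2) * sigma_n2)  # SNR scale factor per unit |h1|^2
    dh = h_max / steps
    total = 0.0
    for i in range(steps):
        h = (i + 0.5) * dh  # midpoint of the i-th sub-interval
        total += math.log(1 + gamma * h) * math.exp(-h / sigma_l2) / sigma_l2 * dh
    return total
```

With the testbed geometry (l ≈ 15 m), placing the UAV midway gives d1 = d2, received power falls off as 1/d², and the averaged capacity decreases as the altitude a grows, consistent with Eq. (10).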
All the experiments are repeated 10 times for each network configuration, each run transferring a 10 MB (megabyte) transmission control protocol (TCP) stream measured with iperf. The experiments are performed at 10, 15, and 20 m of UAV altitude from the ground stations using the IEEE 802.11 protocol stack, including a/b/g/n at 2.4 GHz and n at 5 GHz, in an outdoor environment for the LoS scenario, and at a 5 m height with the same setup in an indoor environment for the NLoS scenario. Our testbed consists of two different machines having the same specifications. The machines we used in our experiments are Apple machines equipped with an Intel Core i5 processor with dual independent cores on a single silicon chip, 3 MB shared level-3 cache, 8 GB of onboard SDRAM, 256 GB of flash storage, and integrated Intel Iris graphics, running Mac OS X El Capitan version 10.11.6. Connectivity of the system includes 802.11ac WiFi that supports all the IEEE 802.11 standards, Bluetooth 4.0, and USB 3.0 and Thunderbolt 2.0 ports. The rest of the hardware components of our testbed are discussed in the following section. Solo 3DR The main hardware part of the experiments is the small quadcopter drone Solo from 3DR, which carries the AP that connects the two ground stations to bridge communication. Solo is powered by two 1 GHz computers, one running on the copter and the other installed on the controller; both computers control the entire functionality of the UAV, such as navigation, altitude, and inflight communication, to exchange data between the UAV and the UAV controller. Figure 2 provides a full picture of the different components of the Solo 3DR. Solo is also powered by four motors along with four propellers that help the UAV in inflight activities. With its powerful dedicated WiFi signal carried by the 3DR link, it provides connectivity between the UAV and the Solo app to exchange real-time data and videos between the UAV, the controller, and other ground stations [46].
Flight time is 25 min, and with payload it drops to around 8–10 min; the range is almost half a mile; the maximum speed during flight is almost 55 mph (miles per hour), while the ascent rate is 10 m/s if the UAV is in stabilise mode and 5 m/s if it is in fly mode. The maximum altitude the UAV can fly, in compliance with the UK Civil Aviation Authority and the US Federal Aviation Administration, is 400 ft, which the user can adjust up to 122 m [47]. Another part of the Solo 3DR is the UAV controller, which controls the UAV movement during flight and shows the inflight telemetry, including GPS signal, height, battery power, and the position where the UAV will land. Two antennas are also fitted on the controller to manage the communication over the radio link [48]. Solo quadcopter with its controller and AP along with a real snapshot of the testbed environment The wireless AP The wireless AP is basically a portable router that connects the two ground stations to facilitate communication through IEEE 802.11a/b/g over the 2.4 GHz band and 802.11n over the 2.4 and 5 GHz bands. The AC750 portable WiFi router from D-Link provides the bridging facility in our experiments and is mounted on the UAV as shown in Fig. 2. The device is fully equipped with the latest technology, provides speeds of up to 750 Mbps over the 2.4 and 5 GHz bands, and supports all the IEEE 802.11 protocols [49]. Its built-in rechargeable battery allows the UAV to conserve its own battery for most of the flight, while its low weight makes it easy to mount on the accessory bay of the UAV. Software setup (IPerf) Iperf is a well-known tool that can create TCP and UDP data streams and can measure the maximum bandwidth and throughput of a network. The software is coded in C and is freely available to everyone. Iperf can be used to measure the end-to-end network performance between two users.
This open-source software is compatible with different operating systems, including Windows, macOS, Linux, and Unix [50]. In our experiments, we used Iperf to send TCP data streams of 10 MB from the client machine over the communication link provided by the IEEE 802.11 standards to the server machine. The same tool is used on the server side to receive the data streams and to evaluate the data for different metrics of interest such as data rate, SNR, and signal strength. Based on the available data, different results are generated in terms of graphs that will be discussed in detail in the upcoming sections. This section provides details of the results obtained based on the experimental parameters listed in Table 3. The results mentioned here are obtained from the experiments performed in indoor and outdoor environments. The results are generated in terms of graphs for different metrics such as data rate, signal strength, and SNR at 10, 15, and 20 m of UAV altitude and at a distance of ≈15 m between the participating ground stations in an outdoor environment for the LoS scenario, and at a 5 m height with the same distance between the stations in an indoor environment for the NLoS scenario. To provide a detailed overview of the metrics, each set of 10 measurements is summarised as a standard boxplot (minimum whisker, 25th percentile, median, 75th percentile, maximum whisker); the offset to the right of each boxplot (purple lines) presents the mean value, with whiskers showing the 95th and 99th percentiles, respectively. The remaining details for each metric are given in the following subsections. Table 3 Experimental parameters for our testbed Line-of-sight (LoS) scenario In the LoS scenario, we performed three sets of experiments with respect to UAV altitude in an outdoor environment to investigate the performance of our proposed network.
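Since every configuration is measured 10 times with iperf, the raw reports must be reduced to the boxplot statistics described above. The following Python sketch shows one way this reduction can be done; the regex assumes the standard iperf 2 summary-line format, and the helper names are our own illustrative choices, not part of iperf itself.

```python
import re
import statistics

# Regex for an iperf 2 summary line such as:
#   [  3]  0.0-10.0 sec  10.0 MBytes  8.39 Mbits/sec
# (format assumed from iperf 2.0.x output; adjust if your build differs)
SUMMARY_RE = re.compile(
    r"\]\s+[\d.]+-\s*[\d.]+\s+sec\s+[\d.]+\s+MBytes\s+([\d.]+)\s+Mbits/sec")

def parse_throughput(iperf_output):
    """Return every Mbits/sec figure found in an iperf report."""
    return [float(m.group(1)) for m in SUMMARY_RE.finditer(iperf_output)]

def boxplot_stats(samples):
    """Five-number summary plus mean, matching the per-configuration
    boxplots built from the 10 repeated runs in the results section."""
    s = sorted(samples)
    n = len(s)
    def pct(p):
        # nearest-rank percentile; a plotting library may interpolate instead
        return s[min(n - 1, round(p * (n - 1)))]
    return {"min": s[0], "q1": pct(0.25), "median": statistics.median(s),
            "q3": pct(0.75), "max": s[-1], "mean": statistics.fmean(s)}
```

For example, feeding the concatenated client-side reports of one altitude/protocol configuration into `parse_throughput` and then `boxplot_stats` yields exactly the quantities plotted in the figures (minimum, quartiles, median, maximum, and the offset mean marker).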
The results for the different metrics are presented as graphs, which are explained in the following sections. Data rate can be defined as the rate at which data is transferred from one ground station to another through an air-lifted AP. Figure 3 illustrates the data rate of 802.11a/b/g/n at 2.4 GHz and 802.11n at the 5 GHz band using a 20 MHz channel at 10 m (Fig. 3a), 15 m (Fig. 3b), and 20 m (Fig. 3c), respectively, for the LoS scenario. The data rate captured at all UAV altitudes using 802.11a/b/g at the 2.4 GHz band is quite low and is not practicable in real-time scenarios during any rescue operation. The data rate in all three cases for 11a/b/g ranges from a minimum of 2 Mbps to a maximum of 13 Mbps, as shown in Fig. 3. In contrast, the data rate captured at 10, 15, and 20 m using 802.11n at both the 2.4 and 5 GHz bands is quite impressive and ranges from a minimum of 5 Mbps to a maximum of 30 Mbps. The data rate at 10 m UAV altitude is quite good, but once the UAV starts moving up, the data rate starts decreasing and drops as low as 5 Mbps, as shown in Fig. 3c. 802.11n at 2.4 GHz claims the highest data rate in all three scenarios, followed by 802.11n at the 5 GHz band. The data rate or throughput we obtained in our experiments is much better than the throughput reported by the authors in [31]. The average throughput reported in [31] in both infrastructure mode and ad hoc mode is ∼3–5 Mbps, while in our case, the average data rate/throughput is ∼5–20 Mbps in all three scenarios, which means that our proposed system is better suited for real-time applications than the one proposed in [31] for LoS communication. Data rate of 802.11 protocols @ 10-, 15-, and 20-m altitude SNR can be defined as the ratio of the signal power to the noise power corrupting the signal. Figure 4 shows the SNR of both client and server stations at 10, 15, and 20 m for 802.11a/b/g at 2.4 GHz and for 802.11n at both 2.4 and 5 GHz.
The SNR of 802.11a ranges from ∼–98 to ∼–96 dBm at both client and server side in all three cases, as shown in Fig. 4, where panels a, c, and e represent the SNR at the client machine, while panels b, d, and f represent the SNR at the server side, respectively. Similarly, for 802.11b, the SNR ranges from ∼–101 to –91 dBm, while for 11g, it ranges from ∼–98 to ∼–91 dBm. Moreover, for 11n at 2.4 GHz, the SNR is quite high at both 10 and 15 m altitude on both client and server machines, but at 20 m, the SNR is quite low and ranges from –98 to ∼–87 dBm on both ground stations, while the SNR for 802.11n at 5 GHz remains almost the same in all three cases for both client and server. In terms of SNR, 11n at 2.4 GHz (20 m) outperforms the others, followed by 11g at 15 m and 11b at 10 m, for LoS communication. Based on the results shown in Fig. 4, we conclude that 802.11b remains constant in terms of SNR and does not vary much. The main reason for such a high SNR is interference caused by the high wind in our outdoor testbed, as mentioned in Table 3. SNR of 802.11 standards @ 10, 15, and 20 m for both client and server Signal strength depends on how well the AP is listening to the client and server machines during the communication between the UAV and the ground stations. Figure 5 shows the signal strength for both client (a, c, and e) and server (b, d, and f) machines at 10, 15, and 20 m altitude using 802.11a/b/g at 2.4 GHz and 802.11n at both the 2.4 and 5 GHz bands. The signal strength of 802.11a at 10 m ranges from –60 to –54 dBm, while at 15 and 20 m, the signal strength remains the same at around ∼–59 dBm at both client and server sides. Similarly, for 11b, it ranges from –60 to ∼–51 dBm in all three cases, while for 11g, it varies considerably and ranges from ∼–70 to ∼–59 dBm, as shown in Fig. 5.
Moreover, for 11n at 2.4 GHz, the signal strength varies slightly and ranges from ∼–59 to ∼–53 dBm in all three scenarios, while for 802.11n at 5 GHz, it ranges from ∼–74 to –60 dBm at 10, 15, and 20 m UAV altitudes. In terms of signal strength, 11b performs slightly better than the other IEEE standards, followed by 802.11a, at both client and server sides, and both are well suited for LoS communication. Signal strength (SS) of 802.11 standards @ 10, 15, and 20 m for both client and server Non-line-of-sight (NLoS) scenario In this section, we explain the results obtained from our indoor experiments, carried out to check the feasibility of the proposed system in situations where the UAV is not in direct sight/communication with the ground stations. We performed a single set of experiments, conducted such that the AP mounted on the UAV is at a 5 m height from the ground and communicates with the ground stations through a glass wall in the middle, as shown in Fig. 1b. We obtained the results in terms of graphs with respect to data rate, signal strength, and SNR, which are detailed in the succeeding section. Data rate, signal strength, and SNR (NLoS) Figure 6a shows the data rate of 802.11a/b/g/n at 2.4 GHz and 802.11n at the 5 GHz band using a 20 MHz channel at a 5 m height in an indoor environment for the NLoS scenario. The data rate captured using 802.11a/b/g/n at 2.4 GHz is very low (1 Mbps to a maximum of 10 Mbps) and is not applicable in real-time situations in terms of disaster management. However, the data rate captured with 802.11n at 5 GHz is quite impressive (up to 20 Mbps) and can be operable in real-time NLoS scenarios. Similarly, Fig. 6b and c show the SNR for both client and server using 802.11a/b/g/n at both the 2.4 and 5 GHz bands for the NLoS scenario. In all cases, the SNR ranges from ∼–99 to ∼–83 dBm at both client and server sides. The reason for such a high SNR is mainly the shadowing and refraction of the signal in NLoS communication. Moreover, Fig.
6d and e show the signal strength for the client and server, respectively, using the same protocol stack. On both the client and server sides, the signal strength ranges from –65 to ∼–56 dBm, which is quite good in the case of NLoS communication. Data rate, SNR, and signal strength (SS) of 802.11 standards for NLoS scenario Conclusion and future work In this paper, we have tested IEEE 802.11a/b/g at 2.4 GHz and IEEE 802.11n at both the 2.4 and 5 GHz bands using a 20 MHz channel in both indoor (NLoS) and outdoor (LoS) environments to check the performance of the communication links between the UAV and the ground stations, connected through an air-lifted AP on a UAV, in terms of data rate, SNR, and signal strength for both scenarios. In our testbed, we find that IEEE 802.11n at 2.4 GHz outperforms the other IEEE 802.11 standards in terms of data rate, reaching a maximum of 30 Mbps, followed by IEEE 802.11n at 5 GHz in the case of LoS, while for the NLoS scenario, 802.11n at 5 GHz performs much better than the other protocols. Similarly, based on the SNR, IEEE 802.11b performs slightly better than the others, followed by 802.11n at 2.4 GHz (for the LoS scenario), while 802.11n at both 2.4 and 5 GHz performs well compared to the others in the case of NLoS. Moreover, in terms of signal strength, again 802.11b and 802.11n at both 2.4 and 5 GHz are slightly better than the other IEEE standards for the LoS and NLoS scenarios, respectively. Based on these results, we conclude that IEEE 802.11n at both 2.4 and 5 GHz is practicable in real-time applications in the context of disaster management and healthcare applications for both scenarios. As stated in the introduction, we restricted our experiments to a maximum height of 20 m because of the limited flight time of the UAV. Also, the UAV can search only for a short period of time (2 to 3 min), while providing the communication facility for the rest of the time.
In the future, we plan to extend our experiments to more UAV altitudes, up to the maximum height the UAV can fly. We also plan to use multiple moving nodes instead of just two static nodes in order to perform more realistic experiments. We also intend to implement a frontier-based search algorithm to search the target area in a rescue operation, along with an optimisation algorithm to improve the position of the UAV and to provide the best possible communication facilities to the participating ground stations. Moreover, we are planning to integrate the emerging 5G communication technology with UAV communication in critical infrastructure development. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. UAV: Unmanned aerial vehicle; UABS: Unmanned aerial base station; IoT: Internet-of-Things; GA: Genetic algorithm; BAHN: Broadband and UAV-assisted heterogeneous network; BLoS: Beyond-line-of-sight; GCS: Ground control system; LoS: Line-of-sight; NLoS: Non-line-of-sight; M2M: Machine-to-machine; SNR: Signal-to-noise ratio; RSS: Received signal strength; UDP: User Datagram Protocol. A. Al-Hourani, S. Kandeepan, A. Jamalipour, in 2014 IEEE Global Communications Conference. Modeling air-to-ground path loss for low altitude platforms in urban environments, (2014), pp. 2898–2904. https://doi.org/10.1109/glocom.2014.7037248. R. I. Bor-Yaliniz, A. El-Keyi, H. Yanikomeroglu, in 2016 IEEE International Conference on Communications (ICC). Efficient 3-D placement of an aerial base station in next generation cellular networks, (2016), pp. 1–5. https://doi.org/10.1109/icc.2016.7510820. M. Mozaffari, W. Saad, M. Bennis, M. Debbah, Unmanned aerial vehicle with underlaid device-to-device communications: performance and tradeoffs. IEEE Trans. Wirel. Commun. 15:, 3949–3963 (2016). M. Mozaffari, W. Saad, M. Bennis, M. Debbah, Efficient deployment of multiple unmanned aerial vehicles for optimal wireless coverage. IEEE Commun. Lett. 20:, 1647–1650 (2016). S.
Rosati, K. Kruelecki, G. Heitz, D. Floreano, B. Rimoldi, Dynamic routing for flying ad hoc networks. IEEE Trans. Veh. Technol.65:, 1690–1700 (2016). S. Berrahal, J. -H. Kim, S. Rekhis, N. Boudriga, D. Wilkins, J. Acevedo, Border surveillance monitoring using quadcopter UAV-aided wireless sensor networks. J. Commun. Softw. Syst.12:, 67–82 (2016). D. Bein, W. Bein, A. Karki, B. B. Madan, in 12th International Conference on Information Technology New Generations. Optimizing border patrol operations using unmanned aerial vehicles (Las Vegas, 2015), pp. 479–484. https://doi.org/10.1109/itng.2015.83. R. C. Skeele, G. A. Hollinger, in Results of the 10th International Conference on Field and Service Robotics, ed. by D. S. Wettergreen, T. D. Barfoot. Aerial vehicle path planning for monitoring wildfire frontiers, (2016), pp. 455–467. https://doi.org/10.1007/978-3-319-27702-8_30. C. Barrado, R. Messeguer, J. Lopez, E. Pastor, E. Santamaria, P. Royo, Wildfire monitoring using a mixed air-ground mobile network. IEEE Pervasive Comput.9:, 24–32 (2010). H. Wang, D. Huo, B. Alidaee, Position unmanned aerial vehicles in the mobile ad hoc network. J. Intell. Robot. Syst.74:, 455–464 (2014). S. Rohde, M. Putzke, C. Wietfeld, Ad hoc self-healing of OFDMA networks using UAV-based relays. Ad Hoc Netw.11:, 1893–1906 (2013). I. Rubin, R. Zhang, in MILCOM 2007 - IEEE Military Communications Conference. Placement of UAVs as communication relays aiding mobile ad hoc wireless networks, (2007), pp. 1–7. https://doi.org/10.1109/milcom.2007.4455114. S. Hayat, E. Yanmaz, R. Muzaffar, Survey on unmanned aerial vehicle networks for civil applications: a communications viewpoint. IEEE Commun. Surv. Tutor.18(4), 2624–2661 (2016). L. Gupta, R. Jain, G. Vaszkun, Survey of important issues in UAV communication networks. IEEE Commun. Surv. Tutor.18(2), 1123–1152 (2016). A. Merwaday, A. Tuncer, A. Kumbhar, I. 
Guvenc, Improved throughput coverage in natural disasters: unmanned aerial base stations for public-safety communications. IEEE Veh. Technol. Mag. 11(4), 53–60 (2016). F. Al-Turjman, S. Alturjman, 5G/IoT-enabled UAVs for multimedia delivery in industry-oriented applications. Multimedia Tools Appl. (2018). https://doi.org/10.1007/s11042-018-6288-7. F. Al-Turjman, A novel approach for drones positioning in mission critical applications. Trans. Emerg. Telecommun. Technol. (2019). https://doi.org/10.1002/ett.3603. I. Yaqoob, I. A. T. Hashem, A. Ahmed, S. M. A. Kazmi, C. S. Hong, Internet of things forensics: recent advances, taxonomy, requirements, and open challenges. Future Gener. Comput. Syst. 92:, 265–275 (2019). https://doi.org/10.1016/j.future.2018.09.058. R. Ali, Y. A. Qadri, Y. Bin Zikria, T. Umer, B.-S. Kim, S. W. Kim, Q-learning-enabled channel access in next-generation dense wireless networks for IoT-based eHealth systems. EURASIP J. Wirel. Commun. Netw. 2019(1), 178 (2019). https://doi.org/10.1186/s13638-019-1498-x. F. Campioni, S. Choudhury, F. Al-Turjman, Scheduling RFID networks in the IoT and smart health era. J. Ambient Intell. Humanized Comput. (2019). https://doi.org/10.1007/s12652-019-01221-5. M. Peuster, M. Marchetti, G. García de Blas, H. Karl, Automated testing of NFV orchestrators against carrier-grade multi-pop scenarios using emulation-based smoke testing. EURASIP J. Wirel. Commun. Netw. 2019(1), 172 (2019). https://doi.org/10.1186/s13638-019-1493-2. F. Al-Turjman, S. Alturjman, Context-sensitive access in industrial internet of things (IIoT) healthcare applications. IEEE Trans. Ind. Inform. 14(6), 2736–2744 (2018). https://doi.org/10.1109/TII.2018.2808190. C. Wang, B. Hu, S. Chen, Joint user association and interference mitigation for drone-assisted heterogeneous wireless networking. EURASIP J. Wirel. Commun. Netw. 2019(1), 163 (2019).
https://doi.org/10.1186/s13638-019-1483-4. M. Mozaffari, W. Saad, M. Bennis, Y. Nam, M. Debbah, A tutorial on UAVs for wireless networks: applications, challenges, and open problems. Commun. Surv. Tutor. IEEE, 1–1 (2019). https://doi.org/10.1109/COMST.2019.2902862. U. Challita, W. Saad, C. Bettstetter, Interference management for cellular-connected UAVs: a deep reinforcement learning approach. IEEE Trans. Wirel. Commun.18(4), 2125–2140 (2019). https://doi.org/10.1109/TWC.2019.2900035. H. Ullah, M. Abu-Tair, S. McClean, P. Nixon, G. Parr, C. Luo. An unmanned aerial vehicle based wireless network for bridging communication, (2017), pp. 179–184. https://doi.org/10.1109/ispan-fcst-iscc.2017.65. A. Guillen-Perez, R. Sanchez-Iborra, M. D. Cano, J. C. Sanchez-Aarnoutse, J. Garcia-Haro, in 2016 ITU kaleidoscope: ICTs for a sustainable world (ITU WT). WiFi networks on drones, (2016), pp. 1–8. https://doi.org/10.1109/itu-wt.2016.7805730. E. Yanmaz, R. Kuschnig, C. Bettstetter, in 2011 IEEE GLOBECOM Workshops (GC Wkshps). Channel measurements over 802.11a-based UAV-to-ground links, (2011), pp. 1280–1284. https://doi.org/10.1109/glocomw.2011.6162389. E. Yanmaz, R. Kuschnig, C. Bettstetter, in 2013 Proceedings IEEE INFOCOM. Achieving air-ground communications in 802.11 networks with three-dimensional aerial mobility, (2013), pp. 120–124. https://doi.org/10.1109/infcom.2013.6566747. E. Yanmaz, S. Hayat, J. Scherer, C. Bettstetter, in 2014 IEEE Wireless Communications and Networking Conference (WCNC). Experimental performance analysis of two-hop aerial 802.11 networks, (2014), pp. 3118–3123. https://doi.org/10.1109/wcnc.2014.6953010. C. Chen-mou, H. Pai-hsiang, H Kung, D. Vlah, in Proceedings of 15th International Conference on Computer Communications and Networks. Performance measurement of 802.11a wireless links from UAV to ground nodes with various antenna orientations, (2006), pp. 303–308. https://doi.org/10.1109/icccn.2006.286291. S. Hayat, E. Yanmaz, C. 
Bettstetter, in 2015 IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). Experimental analysis of multipoint-to-point UAV communications with IEEE 802.11n and 802.11ac, (2015), pp. 1991–1996. https://doi.org/10.1109/pimrc.2015.7343625. M. Asadpour, D. Giustiniano, K. A. Hummel, S. Heimlicher, in Proceedings of the Second ACM MobiHoc Workshop on Airborne Networks and Communications. ANC '13. Characterizing 802.11n aerial communication (ACM, New York, 2013), pp. 7–12. M. Asadpour, D. Giustiniano, K. A. Hummel, in Proceedings of the 8th ACM International Workshop on Wireless Network Testbeds, Experimental Evaluation and Characterization. WiNTECH '13. From ground to aerial communication: dissecting WLAN 802.11n for the drones (ACM, New York, 2013), pp. 25–32. B. Li, Y. Jiang, J. Sun, L. Cai, C.-Y. Wen, Development and testing of a two-UAV communication relay system. Sensors (Basel, Switzerland). 16:, 1696 (2016). F. Fabra, C. T. Calafate, J. C. Cano, P. Manzoni, in 2017 14th IEEE Annual Consumer Communications Networking Conference (CCNC). A methodology for measuring UAV-to-UAV communications performance, (2017), pp. 280–286. https://doi.org/10.1109/ccnc.2017.7983120. Y. Shi, R. Enami, J. Wensowitch, J. Camp, in 2018 IEEE Wireless Communications and Networking Conference (WCNC). Measurement-based characterization of LOS and NLOS drone-to-ground channels, (2018), pp. 1–6. https://doi.org/10.1109/wcnc.2018.8377104. S. A. A. Shah, E. Ahmed, M. Imran, S. Zeadally, 5G for vehicular communications. IEEE Commun. Mag. 56, 111–117 (2018). F. M. Al-Turjman, M. Imran, S. T. Bakhsh, Energy efficiency perspectives of femtocells in internet of things: recent advances and challenges. IEEE Access 5, 26808–26818 (2017). https://doi.org/10.1109/access.2017.2773834. Y. Mehmood, N. Haider, M. Imran, A. Timm-Giel, M. Guizani, M2M communications in 5G: state-of-the-art architecture, recent advances, and research challenges. IEEE Commun. Mag. 55, 194–201 (2017).
https://doi.org/10.1109/mcom.2017.1600559. S. Ullah, K. -I. Kim, K. H. Kim, M. Imran, P. Khan, E. Tovar, F. Ali, UAV-enabled healthcare architecture: issues and challenges. Futur. Gener. Comput. Syst.97:, 425–432 (2019). https://doi.org/10.1016/j.future.2019.01.028. F. Al-Turjman, H. Zahmatkesh, I. Al-Oqily, R. Daboul, Optimized unmanned aerial vehicles deployment for static and mobile targets' monitoring. Comput. Commun.149:, 27–35 (2020). F. Al-Turjman, A. Radwan, Data delivery in wireless multimedia sensor networks: challenging and defying in the IOT era. IEEE Wirel. Commun.24(5), 126–131 (2017). U. Deniz Ulusar, F. Al-Turjman, G. Celik, in 2017 International Conference on Computer Science and Engineering (UBMK). An overview of internet of things and wireless communications, (2017), pp. 506–509. https://doi.org/10.1109/ubmk.2017.8093446. H. Ullah, S. McClean, P. Nixon, G. Parr, C. Luo, in 2017 15th International Conference on ITS Telecommunications (ITST). An optimal UAV deployment algorithm for bridging communication, (2017), pp. 1–7. https://doi.org/10.1109/itst.2017.7972194. 3D-Robotics(3DR), Solo user manual V9, ÂⒸ 2015 3D Robotics Inc. (2017). https://3dr.com/support/articles/2083-96893/usermanual/. Accessed Mar 2019. 3D-Robotics(US). https://3dr.com/blog/solo-specs-just-the-facts-14480cb55722/. Accessed: Mar 2019. 3D-Robotics(3DR). https://3dr.com/solo-drone/. Accessed: Mar 2019. DLink. ftp://ftp.dlink.eu/Products/dir/dir-510l/documentation/DIR-510L_A1_Manual_v1-00.pdf. Accessed: Mar 2019. Iperf, The ultimate speed test tool for TCP, UDP and SCTP. https://iperf.fr/. Accessed: Mar 2019. The authors would like to thank Ulster University for supporting this work through Vice Chancellor's Research Scholarship (VCRS). The authors also would like to thank Invest NI and British Telecom (BT) for supporting this work through BT Ireland Innovation centre (BTIIC). No funding is associated with this work. 
School of Computing, Ulster University, Jordanstown, UK Hanif Ullah, Mamun Abu-Tair, Sally McClean & Paddy Nixon School of Computing Sciences, University of East Anglia, Norwich, UK Gerard Parr Department of Computer Science, University of Exeter, Exeter, UK Chunbo Luo Hanif Ullah Mamun Abu-Tair Sally McClean Paddy Nixon H.U. and M.A. conceived, designed, and performed the experiments; H.U. wrote the paper. S.M, P.N., G.P, and C.L. reviewed the paper. The authors read and approved the final manuscript. Correspondence to Hanif Ullah. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Ullah, H., Abu-Tair, M., McClean, S. et al. Connecting Disjoint Nodes Through a UAV-Based Wireless Network for Bridging Communication Using IEEE 802.11 Protocols. J Wireless Com Network 2020, 142 (2020). https://doi.org/10.1186/s13638-020-01727-z UAV-based wireless network Bridging communication Cooperative aerial wireless networks IEEE 802.11 standards 2.4 and 5 GHz band
K2SN-MSS: An Efficient Post-Quantum Signature (Full Version) Sabyasachi Karati and Reihaneh Safavi-Naini Abstract: With the rapid development of quantum technologies, quantum-safe cryptography has attracted significant attention. Hash-based signature schemes have been of particular interest because of (i) the importance of digital signatures as the main source of trust on the Internet, (ii) the fact that the security of these signatures relies on the existence of one-way functions, which is the minimal assumption for signature schemes, and (iii) the fact that they can be efficiently implemented. Basic hash-based signatures are for a single message, but have been extended for signing multiple messages. In this paper we design a Multi-message Signature Scheme (MSS) based on an existing One-Time Signature (OTS) that we refer to as KSN-OTS. KSN uses SWIFFT, an additive homomorphic lattice-based hash function family with a provable one-wayness property, as the one-way function, and achieves a short signature. We prove the security of our proposed signature scheme in a new strengthened security model (multi-target multi-function) of MSS, determine the system parameters for 512-bit classical (256-bit quantum) security, and compare parameter sizes of our scheme against XMSS, a widely studied hash-based MSS that has been a candidate for NIST standardization of post-quantum signature schemes. We give an efficient implementation of our scheme using the Intel SIMD (Single Instruction Multiple Data) instruction set. For this, we first implement the SWIFFT computation using a SIMD parallelization of the Number Theoretic Transform (NTT) of elements of the ring $\mathbb{Z}_p[X]/(X^n+1)$, which can support different levels of parallelization. We compare the efficiency of this implementation with a comparable (security level) implementation of XMSS and show its superior performance on a number of efficiency parameters. Category / Keywords: implementation / OTS, Merkle Tree, NTT, SWIFFT, Cover-Free Family, SIMD. 
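As a toy illustration of the ring arithmetic such an NTT accelerates, the sketch below multiplies two polynomials in $\mathbb{Z}_p[X]/(X^n+1)$ in two ways: by schoolbook negacyclic convolution, and by pointwise multiplication in the evaluation (NTT) domain. The parameters are illustrative (p = 257 as in SWIFFT, but a small n for readability), the function names are ours, and the direct O(n^2) evaluation stands in for the fast butterfly/SIMD implementation described in the paper:

```python
# Toy negacyclic multiplication in Z_p[X]/(X^n + 1).
# Requires 2n | p - 1 so a primitive 2n-th root of unity psi exists mod p;
# 3 is a primitive root mod 257, so psi = 3^((p-1)/(2n)) has order exactly 2n.
p, n = 257, 8
psi = pow(3, (p - 1) // (2 * n), p)

def mul_schoolbook(a, b):
    """a * b mod (X^n + 1): the X^n term wraps around with a sign flip."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = (i + j) % n
            sign = 1 if i + j < n else -1
            c[k] = (c[k] + sign * a[i] * b[j]) % p
    return c

def mul_ntt(a, b):
    """Same product via evaluation at the odd powers of psi (the NTT domain)."""
    pts = [pow(psi, 2 * i + 1, p) for i in range(n)]   # the n roots of X^n + 1
    ev = lambda f, x: sum(f[j] * pow(x, j, p) for j in range(n)) % p
    ec = [ev(a, x) * ev(b, x) % p for x in pts]        # pointwise product
    inv_n = pow(n, p - 2, p)
    inv_pts = [pow(x, p - 2, p) for x in pts]
    # Interpolate back: c_j = n^{-1} * sum_i ec[i] * pts[i]^{-j}
    return [inv_n * sum(ec[i] * pow(inv_pts[i], j, p) for i in range(n)) % p
            for j in range(n)]
```

Both routines agree; for example, multiplying $X$ by $X^{n-1}$ gives $X^n \equiv -1$, i.e. the constant polynomial $p-1$.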
Contact author: sabyasachi karati at gmail com
What is the difference between linearly dependent and linearly correlated? Please explain the difference between two variables being linearly dependent and being linearly correlated. I looked up the Wikipedia article but didn't get a proper example. Please explain it with an example. correlation non-independent MånsT Happy Mittal Two variables are linearly dependent if one can be written as a linear function of the other. If two variables are linearly dependent the correlation between them is 1 or -1. Linearly correlated just means that two variables have a non-zero correlation but not necessarily an exact linear relationship. Correlation is sometimes called linear correlation because the Pearson product moment correlation coefficient is a measure of the strength of the linearity in the relationship between the variables. Michael R. Chernick +1. Though, I'd rather say the Pearson coef. "is a measure of strength of linear relationship" instead of is a measure of the degree of linearity in [= of?] the relationship – ttnphns Jun 28 '12 at 9:59 @ttnphns Okay that sounds more appropriate. – Michael R. Chernick Jun 28 '12 at 10:51 Perhaps $\rho^2$ rather than $\rho$ would be a better measure since we don't need to hassle with $\rho$ close to $-1$ meaning a strong linear relationship (albeit with negative slope). Also, consider how much variance is explained versus non-explained, and that $\rho = 0.51$ does not provoke the statistician into turning cartwheels and doing handstands in celebration whereas $\rho^2 > 1/\sqrt{2} \approx 70\%$ is much better evidence of a positive (read, publishable) result. – Dilip Sarwate Jun 28 '12 at 20:22 In $\mathbf{R}^2$ linear dependence implies that one vector is a linear function of the other: $$ \textbf{v}_{1}=a\textbf{v}_2. 
$$ It's clear from this definition that the two variables would move in lock-step, implying a correlation of $1$ or $-1$ depending on the value of $a$. To more fully understand the differences and connections between the concepts, however, I think it's beneficial to consider the geometry involved. The graph below shows an example of the formula for linear dependence. You can see that the vectors are linearly dependent because one is simply a multiple of the other. This is in contrast to linear independence, which in $\mathbf{R}^2$ is described by: $$ \textbf{v}_{1}\neq a\textbf{v}_2 $$ for vectors $\textbf{v}_1, \textbf{v}_2 \neq \textbf{0}.$ An example of linear independence can be seen in the graphic below. The most extreme version of linear independence is orthogonality, defined for vectors $\textbf{v}_1, \textbf{v}_2$ as: $$ \textbf{v}_{1}^T \textbf{v}_{2} = 0. $$ When graphed in $\mathbf{R}^2$, orthogonality corresponds to the vectors $\textbf{v}_{1}$ and $\textbf{v}_2$ being perpendicular to one another: Now, consider Pearson's correlation coefficient: $$ \rho_{\textbf{v}_{1}\textbf{v}_{2}} = \frac{(\textbf{v}_{1}-\bar{v}_{1}\textbf{1})^T(\textbf{v}_{2}-\bar{v}_{2}\textbf{1})}{\sigma_{\textbf{v}_{1}}\sigma_{\textbf{v}_{2}}}. $$ Note that if the vectors $(\textbf{v}_{1}-\bar{v}_{1}\textbf{1})$ and $(\textbf{v}_{2}-\bar{v}_{2}\textbf{1})$ are orthogonal then the numerator of Pearson's coefficient is zero, implying that the variables $\textbf{v}_{1}$ and $\textbf{v}_{2}$ are uncorrelated. 
This illustrates an interesting connection between linear independence and correlation: linear dependence between the centered versions of the variables $\textbf{v}_{1}$ and $\textbf{v}_{2}$ corresponds to a correlation of $1$ or $-1$, non-orthogonal linear independence between the centered versions of $\textbf{v}_{1}$ and $\textbf{v}_{2}$ corresponds to a correlation between $0$ and $1$ in absolute value, and orthogonality between the centered versions of $\textbf{v}_{1}$ and $\textbf{v}_{2}$ corresponds to a correlation of $0$. Thus, if two vectors are linearly dependent the centered versions of the vectors will also be linearly dependent, i.e. the vectors are perfectly correlated. When two linearly independent vectors (orthogonal or not) are centered the angle between the vectors may or may not change. Thus for linearly independent vectors the correlation may be positive, negative, or zero. tjnel Let $f(x)$ and $g(x)$ be functions. For $f(x)$ and $g(x)$ to be linearly independent we must have $af(x) + bg(x) = 0$ if and only if $a = b = 0$. In other words, there is no $c$ such that $a$ or $b$ is not zero but $af(c) + bg(c) = 0$. If there is such a $c$, then we say that $f(x)$ and $g(x)$ are linearly dependent. $f(x) = \sin(x)$ and $g(x) = \cos(x)$ are linearly independent. $f(x) = \sin(x)$ and $g(x) = \sin(2x)$ are not linearly dependent (Why?) With the definition you're using there, there can be a $c$ such that $a f(c) + b g(c) = 0$; they're only linearly dependent if it happens for all $x$ in the considered domain; for example, consider your second example, with $c=\pi/3$. (Also, I think there's a problem with your first example) – Glen_b Nov 15 '13 at 0:53 
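The distinction drawn in the answers can be checked numerically. Below is a minimal pure-Python sketch (variable names are ours): an exact affine function of x gives a Pearson correlation of exactly -1, while adding independent noise gives a correlation strictly between 0 and 1 in absolute value.

```python
import math
import random

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(1000)]
y_dep = [-3 * v + 2 for v in x]               # exact (affine) function of x
y_corr = [v + random.gauss(0, 1) for v in x]  # related to x, but with noise

r_dep = pearson(x, y_dep)    # essentially -1: linearly dependent after centering
r_corr = pearson(x, y_corr)  # |r| strictly between 0 and 1: merely correlated
```

With both x and the noise having unit variance, r_corr comes out near $1/\sqrt{2} \approx 0.71$, while r_dep is $-1$ up to floating-point rounding.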
Monostable wavefronts in cooperative Lotka-Volterra systems with nonlocal delays Guo Lin, Wan-Tong Li and Shigui Ruan This paper is concerned with traveling wavefronts in a Lotka-Volterra model with nonlocal delays for two cooperative species. By using the comparison principle, some existence and nonexistence results are obtained. If the wave speed is larger than a threshold which can be formulated in terms of basic parameters, we prove the asymptotic stability of traveling wavefronts by the spectral analysis method together with a squeezing technique. Guo Lin, Wan-Tong Li, Shigui Ruan. Monostable wavefronts in cooperative Lotka-Volterra systems with nonlocal delays. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 1-23. doi: 10.3934/dcds.2011.31.1. Estimates on the number of limit cycles of a generalized Abel equation Naeem M. H. Alkoumi and Pedro J. Torres We prove new results about the number of isolated periodic solutions of a first order differential equation with a polynomial nonlinearity. Such results are applied to bound the number of limit cycles of a family of planar polynomial vector fields which generalize the so-called rigid systems. Naeem M. H. Alkoumi, Pedro J. Torres. Estimates on the number of limit cycles of a generalized Abel equation. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 25-34. doi: 10.3934/dcds.2011.31.25. Non-trivial non-negative periodic solutions of a system of doubly degenerate parabolic equations with nonlocal terms Genni Fragnelli, Paolo Nistri and Duccio Papini The aim of the paper is to provide conditions ensuring the existence of non-trivial non-negative periodic solutions to a system of doubly degenerate parabolic equations containing delayed nonlocal terms and satisfying Dirichlet boundary conditions. 
The employed approach is based on the Leray-Schauder topological degree theory; thus, a crucial purpose of the paper is to obtain a priori bounds in a convenient functional space, here $L^2(Q_T)$, on the solutions of certain homotopies. This is achieved under different assumptions on the sign of the kernels of the nonlocal terms. The considered system is a possible model of the interactions between two biological species sharing the same territory, where such interactions are modeled by the kernels of the nonlocal terms. In this regard, the obtained results can be viewed as coexistence results for the two biological populations under different intra- and inter-specific interferences on their natural growth rates. Genni Fragnelli, Paolo Nistri, Duccio Papini. Non-trivial non-negative periodic solutions of a system of doubly degenerate parabolic equations with nonlocal terms. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 35-64. doi: 10.3934/dcds.2011.31.35. Classification of local asymptotics for solutions to heat equations with inverse-square potentials Veronica Felli and Ana Primo Asymptotic behavior of solutions to heat equations with spatially singular inverse-square potentials is studied. By combining a parabolic Almgren type monotonicity formula with blow-up methods, we evaluate the exact behavior near the singularity of solutions to linear and subcritical semilinear parabolic equations with Hardy type potentials. As a remarkable byproduct, a unique continuation property is obtained. Veronica Felli, Ana Primo. Classification of local asymptotics for solutions to heat equations with inverse-square potentials. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 65-107. doi: 10.3934/dcds.2011.31.65. Strichartz estimates for Schrödinger operators with a non-smooth magnetic potential We prove Strichartz estimates for the absolutely continuous evolution of a Schrödinger operator $H = (i\nabla + A)^2 + V$ in $\mathbb{R}^n$, $n \ge 3$. 
Both the magnetic and electric potentials are time-independent and satisfy pointwise polynomial decay bounds. The vector potential $A(x)$ is assumed to be continuous but need not possess any Sobolev regularity. This work is a refinement of previous methods, which required extra conditions on ${\rm div}\,A$ or $|\nabla|^{\frac12}A$ in order to place the first order part of the perturbation within a suitable class of pseudo-differential operators. Michael Goldberg. Strichartz estimates for Schrödinger operators with a non-smooth magnetic potential. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 109-118. doi: 10.3934/dcds.2011.31.109. Global attractors for strongly damped wave equations with displacement dependent damping and nonlinear source term of critical exponent A. Kh. Khanmamedov In this paper the long time behaviour of the solutions of the 3-D strongly damped wave equation is studied. It is shown that the semigroup generated by this equation possesses a global attractor in $H_{0}^{1}(\Omega )\times L_{2}(\Omega )$ and then it is proved that this is also a global attractor in $(H^{2}(\Omega )\cap H_{0}^{1}(\Omega ))\times H_{0}^{1}(\Omega )$. A. Kh. Khanmamedov. Global attractors for strongly damped wave equations with displacement dependent damping and nonlinear source term of critical exponent. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 119-138. doi: 10.3934/dcds.2011.31.119. 
Existence of multiple spike stationary patterns in a chemotaxis model with weak saturation Kazuhiro Kurata and Kotaro Morimoto We are concerned with a multiple boundary spike solution to the steady-state problem of a chemotaxis system: $P_t=\nabla \cdot \big( P\nabla ( \log \frac{P}{\Phi (W)})\big)$, $W_t=\varepsilon^2 \Delta W+F(P,W)$, in $\Omega \times (0,\infty)$, under the homogeneous Neumann boundary condition, where $\Omega\subset \mathbb{R}^N$ is a bounded domain with smooth boundary, $P(x,t)$ is a population density, and $W(x,t)$ is a density of chemotaxis substance. We assume that $\Phi(W)=W^p$, $p>1$, and we are interested in the cases of $F(P,W)=-W+\frac{PW^q}{\alpha+\gamma W^q}$ and $F(P,W)=-W+\frac{P}{1+ k P}$ with $q>0, \alpha, \gamma, k\ge 0$, which have saturating growth. Existence of a multiple spike stationary pattern is related to a weak saturation effect of $F(P,W)$ and the shape of the domain $\Omega$. In this paper, we assume that $\Omega$ is symmetric with respect to each hyperplane $\{ x_1=0\},\cdots ,\{ x_{N-1}=0\}$. For the two classes of $F(P,W)$ above with saturation effect, we show the existence of multiple boundary spike stationary patterns on $\Omega$ under a weak saturation effect on the parameters $\alpha,\gamma$ and $k$. Based on the method developed in [14] and [10], we present a technique to construct a multiple boundary spike solution to a reduced nonlocal problem on such domains systematically. Kazuhiro Kurata, Kotaro Morimoto. Existence of multiple spike stationary patterns in a chemotaxis model with weak saturation. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 139-164. doi: 10.3934/dcds.2011.31.139. Resurgence of inner solutions for perturbations of the McMillan map A sequence of "inner equations" attached to certain perturbations of the McMillan map was considered in [5]; their solutions were used in that article to measure an exponentially small separatrix splitting. 
We prove here all the results relative to these equations which are necessary to complete the proof of the main result of [5]. The present work relies on ideas from resurgence theory: we describe the formal solutions, study the analyticity of their Borel transforms and use Écalle's alien derivations to measure the discrepancy between different Borel-Laplace sums. Pau Martín, David Sauzin, Tere M. Seara. Resurgence of inner solutions for perturbations of the McMillan map. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 165-207. doi: 10.3934/dcds.2011.31.165. Existence and non-existence of global solutions for a discrete semilinear heat equation Keisuke Matsuya and Tetsuji Tokihiro Existence of global solutions to initial value problems for a discrete analogue of a $d$-dimensional semilinear heat equation is investigated. We prove that a parameter $\alpha$ in the partial difference equation plays exactly the same role as the parameter of nonlinearity does in the semilinear heat equation. That is, we prove non-existence of a non-trivial global solution for $0<\alpha \le 2/d$, and, for $\alpha > 2/d$, existence of non-trivial global solutions for sufficiently small initial data. Keisuke Matsuya, Tetsuji Tokihiro. Existence and non-existence of global solutions for a discrete semilinear heat equation. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 209-220. doi: 10.3934/dcds.2011.31.209. Orbital stability of periodic waves for the Klein-Gordon-Schrödinger system Fábio Natali and Ademir Pastor This article deals with the existence and orbital stability of a two-parameter family of periodic traveling-wave solutions for the Klein-Gordon-Schrödinger system with Yukawa interaction. The existence of such a family of periodic waves is deduced from the Implicit Function Theorem, and the orbital stability is obtained from arguments due to Benjamin, Bona, and Weinstein. Fábio Natali, Ademir Pastor. 
Orbital stability of periodic waves for the Klein-Gordon-Schrödinger system. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 221-238. doi: 10.3934/dcds.2011.31.221. Attractors for the three-dimensional incompressible Navier-Stokes equations with damping Xue-Li Song and Yan-Ren Hou In this paper, we show that the strong solution of the three-dimensional Navier-Stokes equations with damping $\alpha|u|^{\beta-1}u\ (\alpha>0, \frac{7}{2}\leq \beta\leq 5)$ has global attractors in $V$ and $H^2(\Omega)$ when the initial data $u_0\in V$, where $\Omega\subset \mathbb{R}^3$ is bounded. Xue-Li Song, Yan-Ren Hou. Attractors for the three-dimensional incompressible Navier-Stokes equations with damping. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 239-252. doi: 10.3934/dcds.2011.31.239. Macroscopic discrete modelling of stochastic reaction-diffusion equations on a periodic domain Wei Wang and Anthony Roberts Dynamical systems theory provides powerful methods to extract effective macroscopic dynamics from complex systems with slow modes and fast modes. Here we derive and theoretically support a macroscopic, spatially discrete, model for a class of stochastic reaction-diffusion partial differential equations with cubic nonlinearity. Dividing space into overlapping finite elements, a special coupling condition between neighbouring elements preserves the self-adjoint dynamics and controls interelement interactions. When the interelement coupling parameter is small, an averaging method and an asymptotic expansion of the slow modes show that the macroscopic discrete model will be a family of coupled stochastic ordinary differential equations which describe the evolution of the grid values. This modelling shows the importance of subgrid scale interaction between noise and spatial diffusion and provides a new rigorous approach to constructing semi-discrete approximations to stochastic reaction-diffusion partial differential equations. Wei Wang, Anthony Roberts. 
Macroscopic discrete modelling of stochastic reaction-diffusion equations on a periodic domain. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 253-273. doi: 10.3934/dcds.2011.31.253. Preservation of homoclinic orbits under discretization of delay differential equations Yingxiang Xu and Yongkui Zou In this paper, we propose a nondegeneracy condition for a homoclinic orbit with respect to a parameter in delay differential equations. Based on this nondegeneracy, we describe and investigate the regularity of the homoclinic orbit together with the parameter. Then we show that the forward Euler method, when applied to a one-parametric system of delay differential equations with a homoclinic orbit, also exhibits a closed loop of discrete homoclinic orbits. These discrete homoclinic orbits tend to the continuous one at the rate $O(\varepsilon)$ as the step-size $\varepsilon$ goes to $0$, and the corresponding parameter varies periodically with respect to a phase parameter with period $\varepsilon$, while the orbit shifts its index after one revolution. We also show that at least two homoclinic tangencies occur on this loop. By numerical simulations, the theoretical results are illustrated, and the possibility of extending the theoretical results to implicit and higher-order numerical schemes is discussed. Yingxiang Xu, Yongkui Zou. Preservation of homoclinic orbits under discretization of delay differential equations. Discrete & Continuous Dynamical Systems - A, 2011, 31(1): 275-299. doi: 10.3934/dcds.2011.31.275.
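As a generic illustration of the scheme type named in this last abstract (only the method itself; the example system, step sizes, and function names below are ours, not the paper's construction), a forward Euler discretization of a scalar delay differential equation $x'(t) = f(x(t), x(t-\tau))$ can be sketched as:

```python
# Forward Euler for x'(t) = f(x(t), x(t - tau)), with the step size eps
# chosen so that the delay tau is an integer number of steps.
def euler_dde(f, history, tau, eps, steps):
    """history(t) supplies x(t) for t <= 0; returns samples x(0), x(eps), ..."""
    d = round(tau / eps)                       # the delay measured in steps
    # Seed the buffer with the history on [-tau, 0].
    xs = [history(-k * eps) for k in range(d, -1, -1)]
    for _ in range(steps):
        x_now, x_delayed = xs[-1], xs[-1 - d]  # x(t) and x(t - tau)
        xs.append(x_now + eps * f(x_now, x_delayed))
    return xs[d:]                              # drop the history samples

# Example: x'(t) = -x(t - 1) with constant history x(t) = 1 for t <= 0;
# on [0, 1] this gives x(t) = 1 - t, so the trajectory hits 0 at t = 1.
traj = euler_dde(lambda x, xd: -xd, lambda t: 1.0, tau=1.0, eps=0.01, steps=300)
```

The buffer-indexing trick (`xs[-1 - d]`) is the standard way a fixed delay turns a DDE into a high-dimensional map, which is also why discretization can preserve, or perturb, structures such as homoclinic orbits.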
2014, 4(4): 327-340. doi: 10.3934/naco.2014.4.327 Minimax problems for set-valued mappings with set optimization Yu Zhang and Tao Chen College of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming 650221, China College of Public Foundation, Yunnan Open University, Kunming 650223, China Received September 2014 Revised December 2014 Published December 2014 In this paper, we introduce a class of set-valued mappings with some set order relations, which is called uniformly same-order. For this class of mappings, we obtain some existence results for saddle points and describe the structures of the sets of saddle points. Moreover, we obtain a minimax theorem and establish an equivalent relationship between the minimax theorem and a saddle point theorem for scalar set-valued mappings, in which the minimization and the maximization of set-valued mappings are taken in the sense of set optimization. Keywords: set optimization, vector optimization, saddle point, minimax theorem, set-valued mapping. Mathematics Subject Classification: Primary: 49J35, 49K35; Secondary: 90C4. Citation: Yu Zhang, Tao Chen. Minimax problems for set-valued mappings with set optimization. Numerical Algebra, Control & Optimization, 2014, 4 (4) : 327-340. doi: 10.3934/naco.2014.4.327
CommonCrawl
February 2019, 13(1): 101-119. doi: 10.3934/amc.2019006

Maximum weight spectrum codes

Tim Alderson 1 and Alessandro Neri 2

1 Dept. of Mathematics and Statistics, University of New Brunswick Saint John, Saint John, NB, E5S 2A6, Canada
2 Institut für Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland

Received April 2018; Revised July 2018; Published December 2018

Fund Project: The first author acknowledges the support of the NSERC of Canada Discovery Grant program. The second author acknowledges the support of Swiss National Science Foundation grant n. 169510.

In the recent work [9], a combinatorial problem concerning linear codes over a finite field $\mathbb{F}_q$ was introduced. In that work the authors studied the weight set of an $[n,k]_q$ linear code, that is, the set of distinct non-zero Hamming weights, showing that its cardinality is bounded above by $\frac{q^k-1}{q-1}$. They showed that this bound is sharp in the case $q = 2$ and in the case $k = 2$. They conjectured that the bound is sharp for every prime power $q$ and every positive integer $k$. In this work we quickly establish the truth of this conjecture. We provide two proofs, each employing different construction techniques. The first relies on the geometric view of linear codes as systems of projective points. The second approach is purely algebraic. We establish some lower bounds on the lengths of codes that satisfy the conjecture, and the lengths of the new codes constructed here are discussed.

Keywords: Linear codes, Hamming weights, algebraic coding theory, weight spectrum, projective geometry.

Mathematics Subject Classification: Primary: 11T71; Secondary: 05D99, 94B05.

Citation: Tim Alderson, Alessandro Neri. Maximum weight spectrum codes. Advances in Mathematics of Communications, 2019, 13 (1) : 101-119.
doi: 10.3934/amc.2019006 T. L. Alderson and A. A. Bruen, Coprimitive sets and inextendable codes, Des. Codes Cryptogr., 47 (2008), 113-124. doi: 10.1007/s10623-007-9079-0. Google Scholar S. Ball, Finite Geometry and Combinatorial Applications, volume 82. Cambridge University Press, 2015. doi: 10.1017/CBO9781316257449. Google Scholar P. Delsarte, Four fundamental parameters of a code and their combinatorial significance, Information and Control, 23 (1973), 407-438. doi: 10.1016/S0019-9958(73)80007-5. Google Scholar H. Enomoto, P. Frankl, N. Ito and K. Nomura, Codes with given distances, Graphs and Combinatorics, 3 (1987), 25-38. doi: 10.1007/BF01788526. Google Scholar A. Haily and D. Harzalla, On binary linear codes whose automorphism group is trivial, Journal of Discrete Mathematical Sciences and Cryptography, 18 (2015), 495-512. doi: 10.1080/09720529.2014.927650. Google Scholar J. MacWilliams, A theorem on the distribution of weights in a systematic code, The Bell System Technical Journal, 42 (1963), 79-94. doi: 10.1002/j.1538-7305.1963.tb04003.x. Google Scholar J. T. Schwartz, Fast probabilistic algorithms for verification of polynomial identities, Journal of the ACM (JACM), 27 (1980), 701-717. doi: 10.1145/322217.322225. Google Scholar M. Shi, X. Li, A. Neri and P. Solé, How many weights can a cyclic code have?, arXiv: 1807.08418, 15, November, 2018. Google Scholar M. Shi, H. Zhu, P. Solé and G. D. Cohen, How many weights can a linear code have?, Des. Codes Cryptogr., (2018). doi: 10.1007/s10623-018-0488-z. Google Scholar D. Slepian, A class of binary signaling alphabets, Bell Labs Technical Journal, 35 (1956), 203-234. doi: 10.1002/j.1538-7305.1956.tb02379.x. Google Scholar M. A. Tsfasman and S. G. Vlăduţ, Algebraic-geometric Codes, volume 58 of Mathematics and its Applications (Soviet Series), Kluwer Academic Publishers Group, Dordrecht, 1991. Translated from the Russian by the authors. doi: 10.1007/978-94-011-3810-9. Google Scholar O. Veblen and J. W. 
Young, Projective Geometry, Vol. 1, Blaisdell Publishing Co. Ginn and Co. New York-Toronto-London, 1965. Google Scholar
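To make the abstract's bound concrete, here is a small illustrative sketch (ours, not from the paper): it enumerates the codewords of a hypothetical $[3,2]_2$ code and checks that the number of distinct non-zero Hamming weights attains the bound $\frac{q^k-1}{q-1} = 3$ for $q = k = 2$.

```python
from itertools import product

def weight_set(gen_rows, q=2):
    """Distinct non-zero Hamming weights of the F_q-span of the generator rows."""
    n = len(gen_rows[0])
    weights = set()
    for coeffs in product(range(q), repeat=len(gen_rows)):
        # Codeword = linear combination of the generator rows over F_q.
        cw = [sum(c * row[i] for c, row in zip(coeffs, gen_rows)) % q
              for i in range(n)]
        w = sum(1 for x in cw if x != 0)  # Hamming weight
        if w > 0:
            weights.add(w)
    return weights

# Hypothetical [3,2]_2 generator matrix; nonzero codewords are 100, 011, 111.
G = [[1, 0, 0],
     [0, 1, 1]]
q, k = 2, 2
bound = (q**k - 1) // (q - 1)   # = 3 distinct nonzero weights at most
full_spectrum = weight_set(G)   # {1, 2, 3}: the bound is attained
```

For $q = 2$ the bound equals $2^k - 1$, the number of nonzero codewords, which matches the binary sharpness result of [9] discussed above.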
Is predicting (in the limit) computable sequences as hard as the halting problem?

Question: Is predicting (as defined below) computable sequences as hard as the halting problem?

Elaboration: "Predict" means successfully predict, which means make only finitely many errors on the task of trying to predict the n-th bit of the sequence given access to the previous n-1 bits (starting from the first bit and going through the entire infinite computable sequence). There's a simple diagonalization argument (due to Legg 2006) that for any Turing machine predictor p, there's a computable sequence on which it makes infinitely many errors. (Construct a sequence that has as its n-th term the opposite of what p predicts given the previous n-1 terms in the sequence.) So there is no computable predictor that predicts every computable sequence. A halting oracle would allow construction of such a predictor. But can you show that having such a predictor allows you to solve the halting problem?

More elaboration

Definition (Legg). A predictor p is a Turing machine that tries to predict the n-th bit of a sequence S given access to the previous n-1 bits. If the prediction fails to match the n-th bit of the sequence, we call this a mistake. We will say that p predicts S if p only makes finitely many mistakes on S. In other words, p predicts S if there is some number M such that for every m>M, p correctly predicts the m-th bit of S given access to the first m-1 bits. Formally, we could define a predictor machine as having three tapes.
The sequence is entered as input bit-by-bit on one tape, the predictions for the next bit are made on a second tape (the machine can only move right across this tape), and then there is a work tape on which the machine can move in both directions.

Simple results

By the above definition, there's a predictor that predicts all the rational numbers. (Use the standard zig-zag enumeration of the rationals; start by predicting according to the 1st rational in the list, and if there's a mistake, move to the next rational.) By a similar argument, there's a predictor that, given access to N, is able to predict all sequences of Kolmogorov complexity less than or equal to N. (Run all the N-bit machines in parallel and take the prediction of the machine that halts first. You can only make finitely many errors.)

Citation: Shane Legg 2006, http://www.vetta.org/documents/IDSIA-12-06-1.pdf (not the author of this post)

computability tag; asked by Bob Unwin (edited Nov 12, 2010)

Answer: Actually this is easier than solving the halting problem. Let $f:\mathbb N\rightarrow\mathbb N$ be a function that dominates all computable functions, i.e., for all total computable functions $g:\mathbb N\rightarrow\mathbb N$, we have that for all but finitely many $n$, $g(n)\le f(n)$. It is a standard fact that there exist such functions that have strictly lower Turing degree than the halting problem; see e.g. Soare's book Recursively Enumerable Sets and Degrees. (These are called the high Turing degrees.) Let $\varphi_e$, $e\in\mathbb N$, be a standard list of the partial computable functions from $\mathbb N$ to $\{0,1\}$. Now, using $f$ we construct a predictor $p$. $p(a_0,\ldots,a_{k-1})$ is chosen as some number $a_{k}\in \{0,1\}$ so as to make the sequence $a_0,\ldots,a_k$ agree with $\varphi_t(0),\ldots,\varphi_t(k)$ for the minimal possible $t\le k$. Since we cannot wait for possibly hanging computations to halt, we only monitor the computations until stage $f(k)$ ($f(k)$ many computation steps).
If there is no such $t$ we just set $a_{k}$ arbitrarily (say $=0$). Now suppose $q$ is minimal such that $\varphi_q$ computes the computable sequence that is actually observed. Then we will, for all but finitely many $k$, use $t=q$ and hence choose the correct $a_{k}$, because $f$ dominates the running time function for $\varphi_q$, namely $s(n)=$ the least stage where $\varphi_q(n)$ has halted. (Answered by Bjørn Kjos-Hanssen.)

Comment (user6973): This is in fact uniformly equivalent to computing a dominating function.
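The enumeration idea running through this thread (predict according to the first hypothesis consistent with the bits seen so far, advancing on each mistake) can be sketched in a few lines of Python. The candidate class below is a toy list of total functions, so the time-bound machinery that $f$ provides in the answer is not needed; the second half illustrates the question's diagonalization argument:

```python
def make_predictor(candidates):
    """Predict the next bit via the first candidate consistent with history."""
    def predict(history):
        for gen in candidates:
            if all(gen(i) == b for i, b in enumerate(history)):
                return gen(len(history))
        return 0  # fallback: no candidate fits the history
    return predict

# Toy candidate class: three total computable 0/1 sequences.
candidates = [lambda i: 0, lambda i: 1, lambda i: i % 2]
predict = make_predictor(candidates)

# On a sequence inside the class, only finitely many mistakes are made.
target = [i % 2 for i in range(50)]
mistakes = [n for n in range(50) if predict(target[:n]) != target[n]]
# mistakes == [1]: one error, then the predictor locks onto the right candidate

# Diagonalization: s[n] = 1 - predict(s[:n]) is computable from the predictor,
# yet the predictor errs on every single bit of it.
diag = []
for n in range(50):
    diag.append(1 - predict(diag))
errors = sum(predict(diag[:n]) != diag[n] for n in range(50))  # == 50
```

Replacing the toy candidates with a genuine enumeration of all partial computable functions is exactly where the dominating function $f$ from the answer is needed, to cut off computations that never halt.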
21.14: Calculating \(K_\text{a}\) and \(K_\text{b}\)

The pH meter was invented because Florida orange growers needed a way to test the acidity of their fruit. The first meter was invented by Arnold Beckman, who went on to form Beckman Instruments. Beckman's business was very successful, and he used much of his fortune to fund science education and research. The Beckman family donated $40 million to build the Beckman Institute at the University of Illinois.

The numerical values of \(K_\text{a}\) and \(K_\text{b}\) can be determined from an experiment. A solution of known concentration is prepared and its pH is measured with an instrument called a pH meter.

Figure 21.14.1: A pH meter is a laboratory device that provides quick, accurate measurements of the pH of solutions.

Example 21.14.1

A \(0.500 \: \text{M}\) solution of formic acid is prepared and its pH is measured to be 2.04. Determine the \(K_\text{a}\) for formic acid.

Step 1: List the known values and plan the problem.

Known: initial \(\left[ \ce{HCOOH} \right] = 0.500 \: \text{M}\); pH \(= 2.04\). Unknown: \(K_\text{a} = ?\)

First, the pH is used to calculate the \(\left[ \ce{H^+} \right]\) at equilibrium. An ICE table is set up in order to determine the concentrations of \(\ce{HCOOH}\) and \(\ce{HCOO^-}\) at equilibrium. All concentrations are then substituted into the \(K_\text{a}\) expression and the \(K_\text{a}\) value is calculated.

Step 2: Solve.

\[\left[ \ce{H^+} \right] = 10^{-\text{pH}} = 10^{-2.04} = 9.12 \times 10^{-3} \: \text{M}\]

Since each formic acid molecule that ionizes yields one \(\ce{H^+}\) ion and one formate ion \(\left( \ce{HCOO^-} \right)\), the concentrations of \(\ce{H^+}\) and \(\ce{HCOO^-}\) are equal at equilibrium. We assume that the initial concentrations of each ion are zero, resulting in the following ICE table.
\[\begin{array}{l|ccc} & \ce{HCOOH} & \ce{H^+} & \ce{HCOO^-} \\ \hline \text{Initial} & 0.500 & 0 & 0 \\ \text{Change} & -9.12 \times 10^{-3} & +9.12 \times 10^{-3} & +9.12 \times 10^{-3} \\ \text{Equilibrium} & 0.491 & 9.12 \times 10^{-3} & 9.12 \times 10^{-3} \end{array}\] Now substituting into the \(K_\text{a}\) expression gives: \[K_\text{a} = \frac{\left[ \ce{H^+} \right] \left[ \ce{HCOO^-} \right]}{\left[ \ce{HCOOH} \right]} = \frac{\left( 9.12 \times 10^{-3} \right) \left( 9.12 \times 10^{-3} \right)}{0.491} = 1.7 \times 10^{-4}\] Step 3: Think about your result. The value of \(K_\text{a}\) is consistent with that of a weak acid. Two significant figures are appropriate for the answer, since there are two digits after the decimal point in the reported pH. Similar steps can be taken to determine the \(K_\text{b}\) of a base. For example, a \(0.750 \: \text{M}\) solution of the weak base ethylamine \(\left( \ce{C_2H_5NH_2} \right)\) has a pH of 12.31. \[\ce{C_2H_5NH_2} + \ce{H_2O} \rightleftharpoons \ce{C_2H_5NH_3^+} + \ce{OH^-}\] Since one of the products of the ionization reaction is the hydroxide ion, we need to first find the \(\left[ \ce{OH^-} \right]\) at equilibrium. The pOH is \(14 - 12.31 = 1.69\). The \(\left[ \ce{OH^-} \right]\) is then found from \(10^{-1.69} = 2.04 \times 10^{-2} \: \text{M}\). The ICE table is then set up as shown below. \[\begin{array}{l|ccc} & \ce{C_2H_5NH_2} & \ce{C_2H_5NH_3^+} & \ce{OH^-} \\ \hline \text{Initial} & 0.750 & 0 & 0 \\ \text{Change} & -2.04 \times 10^{-2} & +2.04 \times 10^{-2} & +2.04 \times 10^{-2} \\ \text{Equilibrium} & 0.730 & 2.04 \times 10^{-2} & 2.04 \times 10^{-2} \end{array}\] Substituting into the \(K_\text{b}\) expression yields the \(K_\text{b}\) for ethylamine. 
\[K_\text{b} = \frac{\left[ \ce{C_2H_5NH_3^+} \right] \left[ \ce{OH^-} \right]}{\left[ \ce{C_2H_5NH_2} \right]} = \frac{\left( 2.04 \times 10^{-2} \right) \left( 2.04 \times 10^{-2} \right)}{0.730} = 5.7 \times 10^{-4}\] Calculations of \(K_\text{a}\) and \(K_\text{b}\) are described.
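The arithmetic in both worked examples condenses to a few lines; the sketch below simply redoes the numbers above, assuming (as in the examples) that x = [H+] or [OH-] comes straight from the measured pH and that each ionization consumes x of the initial acid or base:

```python
# K_a for 0.500 M formic acid with measured pH = 2.04
H = 10 ** -2.04              # [H+] at equilibrium, about 9.12e-3 M
Ka = H * H / (0.500 - H)     # [H+][HCOO-] / [HCOOH], about 1.7e-4

# K_b for 0.750 M ethylamine with measured pH = 12.31
pOH = 14 - 12.31             # = 1.69
OH = 10 ** -pOH              # [OH-] at equilibrium, about 2.04e-2 M
Kb = OH * OH / (0.750 - OH)  # [C2H5NH3+][OH-] / [C2H5NH2], about 5.7e-4
```

Both values round to the two significant figures reported above, \(1.7 \times 10^{-4}\) and \(5.7 \times 10^{-4}\).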
Nano-Micro Letters, October 2018, 10:76

A Self-Powered Breath Analyzer Based on PANI/PVDF Piezo-Gas-Sensing Arrays for Potential Diagnostics Application

Yongming Fu, Haoxuan He, Tianming Zhao, Yitong Dai, Wuxiao Han, Lili Xing, Xinyu Xue

First Online: 12 November 2018

The increasing morbidity of internal diseases poses serious threats to human health and quality of life. Exhaled breath analysis is a noninvasive and convenient diagnostic method to improve the cure rate of patients. In this study, a self-powered breath analyzer based on polyaniline/polyvinylidene fluoride (PANI/PVDF) piezo-gas-sensing arrays has been developed for potential detection of several internal diseases. The device works by converting exhaled breath energy into piezoelectric gas-sensing signals without any external power sources. The five sensing units in the device have different sensitivities to various gas markers with concentrations ranging from 0 to 600 ppm. The working principle can be attributed to the coupling of the in-pipe gas-flow-induced piezoelectric effect of PVDF and gas-sensing properties of PANI electrodes. In addition, the device demonstrates its use as an ethanol analyzer to roughly mimic fatty liver diagnosis. This new approach can be applied to fabricating new exhaled breath analyzers and promoting the development of self-powered systems.

Keywords: Polyaniline, Polyvinylidene fluoride, Piezoelectric, Sensor array, Diagnostics, Breath analyzer

The online version of this article (https://doi.org/10.1007/s40820-018-0228-y) contains supplementary material, which is available to authorized users.

1 Highlights

A self-powered breath analyzer based on polyaniline/polyvinylidene fluoride (PANI/PVDF) piezo-gas-sensing arrays was developed for a potential diagnostics application. The device works by converting energy from exhaled breath into electrical sensing signals without any external power sources.
The working principle can be attributed to the coupling of in-pipe gas-flow-induced piezoelectric effect of PVDF bellows and gas-sensing effect of PANI electrodes.

2 Introduction

In recent years, the increasing morbidity of internal diseases induced by unwholesome diet and working behaviors has posed a serious threat to human health and quality of life [1, 2, 3]. Early detection and treatment play key roles in improving the cure rate of the patients [4]. Although conventional blood examination methods have been developed with good sensitivity and stability [5], a noninvasive, portable, and convenient diagnostic method is urgently needed for a wide range of early diagnoses and high-risk population screening [6]. As an important physiological process for human beings, breath is one of the most important ways to exchange substances between the human body and outside world. Various studies suggest that exhaled breath, containing a large number of metabolic products, includes gas species and concentrations closely associated with human health as indicators of certain diseases [7, 8, 9, 10, 11, 12]. For example, ethanol in exhaled breath is recognized as a gas marker of fatty liver; oxynitride (NOx) is a gas marker of airway inflammation; acetone is a gas marker of diabetes; methane (CH4) is a gas marker of liver cirrhosis; and carbon monoxide (CO) is a gas marker of asthma. However, exhaled breath analyzers are restricted by possible limitations, such as high cost, structure complexity, need for high-quality materials, and reliance on external power sources. Self-powered systems aimed at powering portable and wearable electronics with human motion have been developed based on piezoelectric or triboelectric nanogenerators [13, 14, 15, 16, 17, 18, 19]. In addition, conducting polymer-based gas sensors are widely used in room-temperature detection of exhaled gas markers [20, 21, 22, 23].
Compared with triboelectric nanogenerators [24, 25], piezoelectric nanogenerators can work without the contact–separation process between two materials, which is more suitable for constructing portable self-powered devices. By simultaneously carrying out power generation and gas sensing, the integration of the piezoelectric nanogenerator and gas sensor in a single device may be a feasible way to realize a self-powered exhaled breath analyzer. A novel and distinct device architecture should be developed adapting to the convenience and compatibility of exhaled breath analysis. In this paper, we report a self-powered breath analyzer based on polyaniline/polyvinylidene fluoride (PANI/PVDF) piezo-gas-sensing arrays for a potential diagnostics application. PANI is an easily synthesized conducting polymer that is widely used in room-temperature gas-sensing applications [23]. PVDF is a piezoelectric polymer with a high piezoelectric coefficient and flexibility [26]. Based on coupling of the in-pipe gas-flow-induced piezoelectric effect of PVDF and gas-sensing properties of PANI electrodes, the exhaled breath analyzer can convert energy from exhaled breath into piezoelectric gas-sensing signals. The device consists of five different sensing units, with each sensing unit having favorable selectivity to a particular gas marker. The sensing signals of every sensing unit hold a proportional relationship with gas concentration in a wide range (from 0 to 600 ppm), along with outstanding room-temperature response/recovery kinetics. This work launches a new working principle in the exhaled breath detection field and greatly advances the applicability of self-powered systems. 3 Experimental Procedures 3.1 Fabrication of PANI Electrodes First, a piece of copper foil (5 cm × 5 cm × 10 μm) was covered with a photoresist pattern by photolithography. 
Then, the copper foil was wet-etched by soaking with aqueous sodium persulfate (0.5 mol L−1) at 50 °C for 2 min, followed by immersion into developer for 30 s to remove the residual photoresist. Second, PANI derivatives were deposited on the Cu substrate by electrochemical polymerization. The growth solution contained equal molar concentrations (0.2 mol L−1) of dopant and aniline monomer. Sodium sulfate, sodium dodecylbenzene sulfonate, sodium oxalate, camphorsulfonic acid, and nitric acid were used as the dopant sources in the five PANI derivatives, respectively. The Pt wafer, Ag/AgCl electrode, and Pt wire served as the working, reference, and counter electrodes, respectively. The electrochemical reaction was performed at 1.2 V with 0.05 V s−1 for 200 s to polymerize PANI. Finally, twist-pattern PANI electrodes were obtained by etching in aqueous sodium persulfate (0.5 mol L−1) at 50 °C for 2 min to remove the copper substrate.
The gas-sensing performances were studied by measuring the output current in the circuit under different conditions. The gas flow rate was held at 8 m s−1 unless specified otherwise. The output current was measured by a low-noise current preamplifier (SR570, Stanford Research Systems) and collected by a data acquisition card (PCI-1712, Advantech) in the computer. The morphology and composition of PANI derivatives were investigated by a scanning electron microscope (SEM; Hitachi S4800) with an energy-dispersive spectrometer (EDS). 4 Results and Discussion Figure 1a is a schematic diagram of the proposed working mechanism of a self-powered exhaled breath analyzer. As a human subject blows into the analyzer, the current signal generated by PANI/PVDF piezoelectric bellows can be measured through a current amplifier and displayed on a computer. Amplitude of the current signal is related to the gases detected in exhaled breath, such that the characteristic of exhaled breath is deduced by calculating the variation of measured signals. The photograph of un-rolled PANI/PVDF film and PANI/PVDF bellows is shown in Fig. 1b. The basic structure of the device mainly consists of three functional parts: Twist PANI patterns function as both working electrodes and gas-sensing material; PVDF film works as the power source through the in-pipe gas-flow-induced piezoelectric effect; and Cu film on the back works as the shared counter electrode. As the bellows configuration was widely used in mechanical energy conversion by extending/retracting along the radial direction [27, 28], the PVDF film is formed into bellows to productively enhance the efficiency of converting breath energy into piezoelectricity. In the as-fabricated device, the thickness of PVDF and PANI is uniform and the 100-nm Cu film is deposited on the whole back surface of PVDF as the shared counter electrode. 
There are five individual PANI electrodes in one as-fabricated device, and each electrode can be separately connected with an external circuit for electrical measurements, as shown in Fig. S1. The twist pattern structure of PANI electrodes is employed to improve the lifetime and stability as the bellows vibrates with high frequency under blowing. The detailed fabrication progress of the device is shown in Fig. 1c. In brief, a piece of Cu foil is etched to a twist pattern by photolithography; five PANI derivatives are separately deposited on twisted Cu electrodes by electrochemical polymerization; PANI/PVDF film is obtained by spin-coating the PVDF gel on PANI electrodes; PANI/PVDF bellows is shaped up from PANI/PVDF film through extrusion forming. Figure 1d shows a typical SEM image of selected region in one sensing unit (nitric acid-doped PANI), demonstrating that the width of PANI electrode is ~ 2 mm. Figure 1e is a SEM image of the PANI/PVDF interface enlarged from Fig. 1d, showing that the height of PANI is lower than that of PVDF. Figure 1f is a SEM image of the PVDF film, demonstrating that the PVDF surface is smooth, and no pores or fractures can be observed. Structure and fabrication of the device. a Proposed concept. b Photographs of un-rolled PANI/PVDF film and as-fabricated bellows. c Fabrication process of the device. d SEM image of PANI/PVDF. e High-magnification SEM image of PANI/PVDF interface. f SEM image of PVDF film. g Lumped parameter equivalent circuit model of the device. h A few cycles of the output current To propose the working principle, a lumped parameter equivalent circuit model of the self-powered exhaled breath analyzer can be derived from three circuit elements in a circuit (Fig. 
1g) [29, 30, 31, 32, 33]: The first one is the voltage term, which originates from the generated piezoelectric polar dipoles in PVDF and can be represented by an ideal voltage source (V); the second one is a capacitance term, which originates from the inherent capacitance of PVDF between the two electrodes and can be represented by a capacitor (C); the third one is a resistance term, which originates from the variation of PANI derivatives influenced by the atmosphere and can be represented by a resistance (R). The capacitor and voltage source of PVDF are parallel-connected to each other and series-connected with the resistance of PANI. Figure 1h shows the generated output current in a few cycles, confirming that the output current is an AC electrical signal. One can observe that there are five twist PANI electrodes in a single device, and each PANI derivative is doped with different dopant sources. The sensing units are labeled as PANI(SS), PANI(SDS), PANI(SO), PANI(CA), and PANI(NA) with respect to the dopants (sodium sulfate, sodium dodecylbenzene sulfonate, sodium oxalate, camphorsulfonic acid, and nitric acid, respectively). Figure 2a–e shows the SEM images of PANI(SS), PANI(SDS), PANI(SO), PANI(CA), and PANI(NA), respectively, demonstrating the distinctness of the surface morphology of each sample. Figure 2f shows the EDS spectra of the five PANI derivatives. C, N, and O elements can be clearly found in the five samples, while the S element can only be found in PANI(SS), PANI(SDS), and PANI(CA) samples. The absence of the Na element verifies that acid group anions are grafted with PANI chains as a dopant source during the polymerization process [23]. Figure 2g shows Raman spectra of the five PANI derivatives. 
As observed, the peaks around 1580 cm−1 can be assigned to the C=C stretching vibrations of the quinoid ring; the peaks around 1500 cm−1 can be assigned to the C=C stretching vibrations of the benzene ring; and the band around 1350 cm−1 provides information on the C~N+· vibrations of delocalized polaronic structures [34]. These results indicate that diverse PANI derivatives are successfully synthesized through electrochemical polymerization and that the molecular conformation of PANI can be affected by changing dopants. I–V curves of the five PANI derivatives are shown in Fig. S2, indicating that the conductivity is significantly influenced by the dopant through charge redistribution in the PANI chains [35, 36]. Figure 2h illustrates the XRD pattern of the PVDF film. The dominant peak around 20.72° is narrow and strong and belongs to the (110) and (200) reflections of beta-phase PVDF [37]. Figure 2i is the FTIR spectrum of the PVDF film. The typical peaks around 1400 and 840 cm−1 can be indexed to stretching of beta-phase PVDF; the peaks around 1070 and 760 cm−1 can be indexed to stretching of alpha-phase PVDF; and the peak around 880 cm−1 can be indexed to stretching of amorphous-phase PVDF [38]. These results indicate that the polarized PVDF film is mixed-phase with the beta phase predominant, making the fabricated PVDF an effective piezoelectric film. Characterization of the PANI/PVDF. a–e SEM images of PANI(SS), PANI(SDS), PANI(SO), PANI(CA), and PANI(NA), respectively. f EDS spectra of the five PANI derivatives. g Raman spectra of the five PANI derivatives. h XRD pattern of PVDF film. i FTIR spectrum of PVDF film Figure 3 shows the sensing performance of one PANI(SS) sensing unit in different ambient atmospheres with gas concentrations ranging from 0 to 600 ppm. The gas flow rate is held constant at 8 m s−1.
Figure S3 shows the continuous output current of the PANI(SS) sensing unit under an airflow rate of 8 m s−1 for 12 h, exhibiting a slight decrease in the output current after long-time service (the mechanical fatigue of PVDF). Figure 3a–e illustrates that the output current significantly decreases with an increasing concentration of acetone, CO, ethanol, and CH4 with discernible differences, but in turn increases with increasing NOx concentration. Video S1 shows the real-time sensing process of the PANI(SS) unit against 600 ppm ethanol gas flow. To calculate the relationship between output current and gas concentration, the gas response (Rg) of the device can be simply defined as [39, 40, 41]: $$ R_{\text{g}} \% = \frac{{\left| {I_{\text{a}} - I_{\text{g}} } \right|}}{{I_{\text{a}} }} \times 100\% , $$ where Ia and Ig represent the output current of the device in air and the output current with a particular concentration of gas marker, respectively. The response curves with respect to acetone, CO, ethanol, CH4, and NOx are shown in Fig. 3f. With the gas concentration ranging from 100 to 600 ppm in steps of 100 ppm, the response to acetone is 11.0%, 15.1%, 25.7%, 41.8%, 57.0%, and 68.2%; the response to CO is 6.4%, 11.6%, 16.7%, 22.1%, 28.5%, and 30.3%; the response to ethanol is 11.9%, 15.6%, 21.3%, 24.5%, 29.2%, and 31.6%; the response to NOx is 1.1%, 3.3%, 7.1%, 10.6%, 12.7%, and 15.8%; and the response to CH4 is 5.7%, 8.9%, 12.2%, 17.1%, 23.2%, and 26.4%, respectively. As a maximum, the response to 600 ppm acetone is 68.2%, indicating that the PANI(SS) sensing unit has a promising selectivity against acetone. It can be observed that the gas response is not linear, which may be induced by the nonlinear resistance variation of PANI. Similar nonlinear response curves are often observed in other resistive-type gas sensors, such as metal oxide-based sensors and conducting polymer-based sensors [42, 43]. Performances of PANI(SS) sensing unit. 
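The response definition above is easy to check numerically. The currents used here are hypothetical round numbers chosen so that the result reproduces the reported 68.2% response of PANI(SS) to 600 ppm acetone; they are not the paper's raw data.

```python
def gas_response(i_air, i_gas):
    """Gas response Rg (%) = |Ia - Ig| / Ia * 100, as defined in the text."""
    return abs(i_air - i_gas) / i_air * 100.0

# Hypothetical currents (nA): 50 nA in air, 15.9 nA in 600 ppm acetone
print(gas_response(50.0, 15.9))  # reproduces the reported 68.2% response
```

Note that the same formula treats resistance-increasing gases (current drops, Ig < Ia) and resistance-decreasing gases such as NOx (current rises, Ig > Ia) symmetrically, since only the magnitude of the change enters.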
a–e The output current of PANI(SS) in response to acetone, CO, ethanol, CH4, and NOx with concentrations ranging from 0 to 600 ppm. f The relationship between the response and gas concentration It has been reported that PANI derivatives doped with diverse dopants demonstrate different selectivities toward various gas markers [23]. The sensing performance of sensing units with different PANI derivatives in their particular gas atmospheres is shown in Fig. 4. With a concentration from 0 to 600 ppm of particular gas markers, the output current of PANI(SDS) in response to ethanol (Fig. 4a) is 42.7, 38.4, 35.0, 30.1, 24.2, 20.8, and 18.4 nA; the output current of PANI(SO) in response to CO (Fig. 4b) is 47.1, 42.0, 38.1, 34.9, 30.5, 27.4, and 24.9 nA; the output current of PANI(CA) in response to NOx (Fig. 4c) is 51.0, 60.2, 73.9, 78.7, 85.4, 95.7, and 104.6 nA; and the output current of PANI(NA) in response to CH4 (Fig. 4d) is 62.0, 52.8, 46.2, 42.7, 38.9, 35.9, and 28.8 nA, respectively. Figure 4e shows the relationship of the four sensing units between the output current and the concentration of gas markers. The response of the five sensing units to 600 ppm of particular gas markers is shown in Fig. 4f, with the gas marker of PANI(SS), PANI(SDS), PANI(SO), PANI(CA), and PANI(NA) as acetone, ethanol, CO, NOx, and CH4, respectively. The maximum response of each sensing unit to 600 ppm of gas marker is 68.2%, 56.9%, 47.1%, 105.1%, and 53.5%, respectively. Similar results are found in several previous studies, which can be attributed to the dopant-induced change of morphology and conformation of PANI derivatives [44, 45]. These results indicate the potential application of the device as a self-powered exhaled breath analyzer for disease diagnostics. Sensing performances of the five sensing units. The output current of a PANI(SDS) unit under different ethanol concentrations. b PANI(SO) unit under different CO concentrations. 
c PANI(CA) unit under different NOx concentrations. d PANI(NA) unit under different CH4 concentrations. e The relationship of the four units between the output current and special gas marker concentration. f The response of the five sensing units to 600 ppm gases Figure 5 shows the real-time continuously measured current profiles of the five sensing units to illustrate the dynamic response/recovery process with particular gas markers, all of which exhibit a slow response and a fast recovery time. The response time of PANI(SS), PANI(SDS), PANI(SO), PANI(CA), and PANI(NA) is ~ 50, 63, 67, 54, and 52 s, while the recovery time is ~ 15, 24, 18, 40, and 28 s, respectively. Such relatively slow response/recovery process is common in room-temperature gas sensors [46, 47]. The recovery process is faster than the response time, which may be ascribed to the high-speed gas flow measurement conditions for desorption of gas molecules from PANI film. This proves that the approach is an effective method for real-time exhaled breath detection. Response/recovery processes of the sensing units. a PANI(SS) for acetone. b PANI(SDS) for ethanol. c PANI(SO) for CO. d PANI(CA) for NOx. e PANI(NA) for CH4 The piezoelectric output current of PANI/PVDF bellows is affected by gas flow rate. Therefore, the influence of the gas flow rate is investigated, and the results are shown in Figs. S4 and 6. Figure S4 shows that the frequency of output current increases under airflow rate in the range of 1–10 m s−1. Figure 6a shows the output current of the PANI(SS) unit with increasing airflow rate. The sensing unit outputs a tiny current with airflow rates lower than 3 m s−1, while the output current significantly increases with airflow rates over 3 m s−1. Under low gas flow rates, vibrations are mainly induced by turbulent buffeting, while under high gas flow rates, they are mainly induced by elastic excitation. 
The vibrations rapidly increase with tiny increments in the gas flow rate when the gas flow rate is just over a certain threshold [48]. With an airflow rate ranging from 4 to 9 m s−1, the output current is 27.6, 33.2, 36.3, 41.0, 47.2, 53.7, and 55.9 nA, respectively, showing a linearly increasing trend. The output current reaches saturation with airflow rates over 9 m s−1. Figure 6b, c shows the output current of the five sensing units with air and with 600 ppm of gas markers, respectively, indicating that the output current of every sensing unit linearly increases both in airflow and gas markers with the gas flow rate ranging from 4 to 9 m s−1. Interestingly, the calculated responses are almost maintained at a constant level, as shown in Fig. 6d. The result shows that the gas flow rate has a negligible effect on gas response, further supporting the possibility of the device utilization in self-powered exhaled breath analysis. Sensing performances of the five units under different gas flow rates. a The output current of PANI(SS) unit under airflow rates in the range of 0–12 m s−1. b The output current of five sensing units under airflow rates in the range of 4–9 m s−1. c The output current of five sensing units in response to 600 ppm special gas marker under gas flow rate in the range of 4–9 m s−1. d The relationship of the five sensing units between responses and gas flow rates For breath analysis applications, humidity dependence on gas response of the device is very important. Figure 7 shows the influence of humidity on the ethanol response of the PANI(SDS) sensing unit. The relative humidity (RH) was controlled as 40%, 60%, 80%, and 100% RH, respectively. The output current of the PANI(SDS) sensing unit under airflow and 600 ppm ethanol gas flow in different humidity conditions is shown in Fig. 7a. From Fig. 7b, it can be observed that the output current increases with increasing RH in airflow, but decreases with increasing RH in ethanol gas flow. 
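The linear flow-rate trend quoted above can be quantified with an ordinary least-squares fit. The text lists seven currents for the "4 to 9 m s−1" range; here they are paired with rates of 3–9 m s−1 in 1 m s−1 steps as an assumption to resolve that mismatch, so the fitted numbers are illustrative only.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Seven reported currents (nA), paired with assumed 3-9 m/s rates
rates = [3, 4, 5, 6, 7, 8, 9]
currents = [27.6, 33.2, 36.3, 41.0, 47.2, 53.7, 55.9]
slope, intercept = linear_fit(rates, currents)
print(f"sensitivity ~ {slope:.1f} nA per m/s")
```

Under this pairing the output current grows by roughly 5 nA for every additional m s−1 of gas flow in the linear regime, consistent with the "linearly increasing trend" claimed in the text.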
When the device is in humid airflow, water molecules can infiltrate pores in the PANI film to enhance conductivity and increase the output current. When the device is in humid ethanol gas flow, ethanol molecules react with PANI chains to decrease the doping level of PANI and liberate more infiltrated water molecules, leading to a significant increase in the resistance of PANI [49, 50]. As shown in Fig. 7c, the ethanol response increases with increasing humidity, even in high humidity levels. These results reveal that the device has potential application under high humidity levels in gas flow, such as in the case of human breath analysis. Humidity influence on ethanol response in the PANI(SDS) sensing unit. a The output current of the PANI(SDS) sensing unit under airflow and 600 ppm ethanol gas flow in different humidity conditions (40%, 60%, 80%, and 100% RH). b The relationship between output current and relative humidity in airflow and 600 ppm ethanol gas flow. c The relationship between response and relative humidity The working mechanism of the self-powered exhaled breath analyzer is simply shown in Fig. 8. The schematic diagram of the induced piezoelectric effect of PVDF is shown in Fig. 8a. Under blowing-induced deformation, the polarized dipoles in PVDF become parallel-aligned to create surface charge separation. According to the fundamental piezoelectric theory, the generated open-circuit voltage (VOC) and short-circuit current (ISC) of piezoelectric PVDF can be written as [33]: $$ V_{\text{OC}} = g_{33} \sigma Yd; $$ $$ I_{\text{SC}} = d_{33} YA\varepsilon , $$ where g33 is the piezoelectric voltage constant, d33 is the piezoelectric coefficient, Y is Young's modulus, σ is the strain in perpendicular direction, d is the thickness, A is the effective cross-sectional area, and ε is the applied strain. When the device is under gas flow, a current can be measured in the external circuit (Fig. 8b). 
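Taking the two expressions above at face value, a quick order-of-magnitude evaluation with generic, literature-scale PVDF constants lands in the right regime. All numbers below (g33, d33, Young's modulus, geometry, strain) are assumptions, not parameters of this device; note also that the ε in the I_SC formula is treated here as a strain rate, since a static strain drives no steady current.

```python
# Order-of-magnitude check of V_OC = g33*sigma*Y*d and I_SC = d33*Y*A*eps,
# with assumed, literature-scale PVDF values (not this device's parameters).
g33 = 0.33          # piezoelectric voltage constant, V*m/N (typical PVDF scale)
d33 = 33e-12        # piezoelectric charge coefficient magnitude, C/N
Y = 2.5e9           # Young's modulus, Pa
strain = 1e-3       # applied strain (the sigma of the text's notation)
d = 50e-6           # film thickness, m
A = 1e-4            # effective cross-sectional area, m^2 (1 cm^2)
strain_rate = 1e-2  # 1/s, interpreting eps as a strain rate

v_oc = g33 * strain * Y * d       # open-circuit voltage: tens of volts
i_sc = d33 * Y * A * strain_rate  # short-circuit current: tens of nanoamps
print(f"V_OC ~ {v_oc:.1f} V, I_SC ~ {i_sc*1e9:.1f} nA")
```

The tens-of-nanoamps short-circuit scale that falls out of these assumed values is consistent with the 20–100 nA output currents reported for the device.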
While the blowing maintains a constant gas flow rate, the generated piezoelectricity of the PVDF is stable, and the resistance of the PANI electrodes can be significantly influenced by the gas species and concentration in the gas flow [39], giving rise to the variation of the measured output current in the external circuit (Fig. 8c). To verify this mechanism, the I–V curve of the PANI(SDS) sensing unit is measured under 0, 300, and 600 ppm ethanol gas flow, as shown in Fig. S5, and demonstrates the same response trend as the self-powered gas-sensing performances. The interaction between gas molecules and PANI takes several forms: for small molecules (CO and CH4), the increased resistance can be attributed to chain expansion induced by interposition of gas molecules into PANI [51, 52, 53]; for acetone and ethanol, weak chemical interactions between gas molecules and PANI dominate the decrease in doping level, leading to increased resistance [54, 55]; and for NOx, a well-known oxidizing gas, PANI is oxidized and positively charged by transferring electrons to NOx molecules, resulting in decreased PANI resistance and an increase in output current [56]. When the atmosphere returns to air, adsorbed gas molecules are removed from the PANI chains, and thus the measured current in the external circuit recovers to its original value. Figure S6 shows the SEM image of PANI(NA) upon exposure to CH4, indicating that the surface morphology is significantly influenced by interposed gas molecules. Figure S7 shows the thickness of PANI(NA) before and after exposure to CH4, showing that the thickness is slightly affected by gas adsorption as well. Working principle of the device. a The original state of polarized PVDF. b Measuring the output current in air atmosphere. c Measuring the output current in gas marker atmosphere The blowing-driven PANI/PVDF bellows as an exhaled breath analyzer is demonstrated to detect ethanol in exhaled gas, as shown in Fig. 9.
Because ethanol gas is present in the exhaled breath of fatty liver patients, a healthy adult can mimic a fatty liver patient after drinking a certain amount of beer (Fig. 9a). The blowing is sustained for 5 s, and the highest gas flow rate is ~ 3 m s−1. The output current of the PANI(SDS) sensing unit is measured to represent the ethanol concentration. When the adult blows into the self-powered analyzer prior to drinking beer (Fig. 9b), the output current is 8–12 nA. (The blowing rate is not constant.) When the adult blows into the self-powered analyzer after drinking one and two cups of beer, the output current drops from 10 to 7 nA (Fig. 9c) and from 10 to 5 nA (Fig. 9d), respectively. These results demonstrate that the PANI/PVDF piezo-gas-sensing arrays can be used as a breath analyzer for potential diagnostic applications. It is worth noting that although the sensor arrays can selectively detect several gas markers, the sensitivity and detection limit are insufficient for practical use in the current case. In the future, more work needs to be done to detect ppb-level gas markers, for example by constructing composites of PANI derivatives and metal oxide nanostructures [57, 58]. Demonstration of self-powered PANI/PVDF breath analyzer. a Photograph of the measurement platform. b–d The output current of the device in response to blowing by an adult. b Without drinking, c after drinking one cup of beer, and d after drinking two cups of beer In summary, a blowing-driven bellows based on a PANI/PVDF piezo-gas-sensing array has been presented as a noninvasive and self-powered exhaled breath analyzer for potential diagnostic applications. The device works by converting energy from the blowing of exhaled breath into electrical sensing signals without any external power sources.
Five sensing units in a single device can be used for diagnosis of liver cirrhosis, airway inflammation, diabetes, and asthma by detecting the gas markers in exhaled breath at concentrations in the range from 0 to 600 ppm. The sensing units exhibit excellent room-temperature response/recovery kinetics, and the response is maintained constant under different gas flow rates. The working principle can be attributed to the coupling of in-pipe gas-flow-induced piezoelectric effect of PVDF and gas-sensing properties of PANI electrodes. In addition, the device is demonstrated for detecting ethanol concentration in exhaled breath. This work launches a new working principle in the exhaled breath detection field and greatly advances the applicability of self-powered systems. This work was supported by the National Natural Science Foundation of China (11674048), the Fundamental Research Funds for the Central Universities (N170505001 and N160502002), and Program for Shenyang Youth Science and Technology Innovation Talents (RC170269). 40820_2018_228_MOESM1_ESM.pdf (559 kb) Supplementary material 1 (PDF 559 kb) Supplementary material 2 (MP4 10182 kb) T. Bodenheimer, K. Lorig, H. Holman, K. Grumbach, Patient self-management of chronic disease in primary care. JAMA J. Am. Med. Assoc. 288(19), 2469–2475 (2002). https://doi.org/10.1001/jama.288.19.2469 CrossRefGoogle Scholar A.H. Mokdad, E.S. Ford, B.A. Bowman, W.H. Dietz, F. Vinicor, V.S. Bales, J.S. Marks, Prevalence of obesity, diabetes, and obesity-related health risk factors, 2001. JAMA J. Am. Med. Assoc. 289(1), 76–79 (2003). https://doi.org/10.1001/jama.289.1.76 CrossRefGoogle Scholar S. Moussavi, S. Chatterji, E. Verdes, A. Tandon, V. Patel, B. Ustun, Depression, chronic diseases, and decrements in health: results from the world health surveys. Lancet 370(9590), 851–858 (2007). https://doi.org/10.1016/S0140-6736(07)61415-9 CrossRefGoogle Scholar A.S. Levey, J. Coresh, E. Balk, A.T. Kausz, A. 
Levin et al., National kidney foundation practice guidelines for chronic kidney disease: evaluation, classification, and stratification. Ann. Intern. Med. 139(2), 137–147 (2003). https://doi.org/10.7326/0003-4819-139-2-200307150-00013 CrossRefGoogle Scholar N. Singh, D.G. Armstrong, B.A. Lipsky, Preventing foot ulcers in patients with diabetes. JAMA J. Am. Med. Assoc. 293(2), 217–228 (2005). https://doi.org/10.1001/jama.293.2.217 CrossRefGoogle Scholar W. Gao, S. Emaminejad, H.Y.Y. Nyein, S. Challa, K. Chen et al., Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 529(7587), 509–514 (2016). https://doi.org/10.1038/nature16521 CrossRefGoogle Scholar R.A. Dweik, P.B. Boggs, S.C. Erzurum, C.G. Irvin, M.W. Leigh, J.O. Lundberg, A.C. Olin, A.L. Plummer, D.R. Taylor, An official ATS clinical practice guideline: interpretation of exhaled nitric oxide levels (F ENO) for clinical applications. Am. J. Respir. Crit. Care Med. 184(5), 602–615 (2011). https://doi.org/10.1164/rccm.9120-11ST CrossRefGoogle Scholar I. Horvath, J. Hunt, P.J. Barnes, K. Alving, A. Antczak et al., Exhaled breath condensate: methodological recommendations and unresolved questions. Eur. Respir. J. 26(3), 523–548 (2005). https://doi.org/10.1183/09031936.05.00029705 CrossRefGoogle Scholar A. Jatakanon, S. Lim, S.A. Kharitonov, K.F. Chung, P.J. Barnes, Correlation between exhaled nitric oxide, sputum eosinophils, and methacholine responsiveness in patients with mild asthma. Thorax 53(2), 91–95 (1998). https://doi.org/10.1136/thx.53.2.91 CrossRefGoogle Scholar S.A. Kharitonov, P.J. Barnes, Exhaled markers of pulmonary disease. Am. J. Respir. Crit. Care Med. 163(7), 1693–1722 (2001). https://doi.org/10.1164/ajrccm.163.7.2009041 CrossRefGoogle Scholar G. Peng, U. Tisch, O. Adams, M. Hakim, N. Shehada et al., Diagnosing lung cancer in exhaled breath using gold nanoparticles. Nat. Nanotechnol. 4(10), 669–673 (2009). 
https://doi.org/10.1038/nnano.2009.235 CrossRefGoogle Scholar A.D. Smith, J.O. Cowan, K.P. Brassett, G.P. Herbison, D.R. Taylor, Use of exhaled nitric oxide measurements to guide treatment in chronic asthma. N. Engl. J. Med. 352(21), 2163–2173 (2005). https://doi.org/10.1056/NEJMoa043596 CrossRefGoogle Scholar Z. Wen, Q. Shen, X. Sun, Nanogenerators for self-powered gas sensing. Nano-Micro Lett. 9(4), 45 (2017). https://doi.org/10.1007/s40820-017-0146-4 CrossRefGoogle Scholar K.Y. Lee, M.K. Gupta, S.W. Kim, Transparent flexible stretchable piezoelectric and triboelectric nanogenerators for powering portable electronics. Nano Energy 14, 139–160 (2015). https://doi.org/10.1016/j.nanoen.2014.11.009 CrossRefGoogle Scholar Z.L. Wang, Triboelectric nanogenerators as new energy technology for self-powered systems and as active mechanical and chemical sensors. ACS Nano 7(11), 9533–9557 (2013). https://doi.org/10.1021/nn404614z CrossRefGoogle Scholar S. Wang, L. Lin, Z.L. Wang, Nanoscale triboelectric-effect-enabled energy conversion for sustainably powering portable electronics. Nano Lett. 12(12), 6339–6346 (2012). https://doi.org/10.1021/nl303573d CrossRefGoogle Scholar H. Shao, P. Cheng, R. Chen, L. Xie, N. Sun et al., Triboelectric electromagnetic hybrid generator for harvesting blue energy. Nano-Micro Lett. 10(3), 54 (2018). https://doi.org/10.1007/s40820-018-0207-3 CrossRefGoogle Scholar H. Guo, X. Pu, J. Chen, Y. Meng, M.H. Yeh et al., A highly sensitive, self-powered triboelectric auditory sensor for social robotics and hearing aids. Sci. Robot. 3(20), eaat2516 (2018). https://doi.org/10.1126/scirobotics.aat2516 CrossRefGoogle Scholar X. Pu, H. Guo, J. Chen, X. Wang, Y. Xi, C. Hu, Z.L. Wang, Eye motion triggered self-powered mechnosensational communication system using triboelectric nanogenerator. Sci. Adv. 3(7), e1700694 (2017). https://doi.org/10.1126/sciadv.1700694 CrossRefGoogle Scholar J. Janata, M. Josowicz, Conducting polymers in electronic chemical sensors. 
Nat. Mater. 2(1), 19–24 (2003). https://doi.org/10.1038/nmat768 CrossRefGoogle Scholar Y.Z. Long, M.M. Li, C. Gu, M. Wan, J.L. Duvail, Z. Liu, Z. Fan, Recent advances in synthesis, physical properties and applications of conducting polymer nanotubes and nanofibers. Prog. Polym. Sci. 36(10), 1415–1442 (2011). https://doi.org/10.1016/j.progpolymsci.2011.04.001 CrossRefGoogle Scholar D. Li, J. Huang, R.B. Kaner, Polyaniline nanofibers: a unique polymer nanostructure for versatile applications. Acc. Chem. Res. 42(1), 135–145 (2009). https://doi.org/10.1021/ar800080n CrossRefGoogle Scholar J.X. Huang, S. Virji, B.H. Weiller, R.B. Kaner, Polyaniline nanofibers: facile synthesis and chemical sensors. J. Am. Chem. Soc. 125(2), 314–315 (2003). https://doi.org/10.1021/ja028371y CrossRefGoogle Scholar Q. Liang, Q. Zhang, X. Yan, X. Liao, L. Han, F. Yi, M. Ma, Y. Zhang, Recyclable and green triboelectric nanogenerator. Adv. Mater. 29(5), 1604961 (2016). https://doi.org/10.1002/adma.201604961 CrossRefGoogle Scholar Q. Zhang, Q. Liang, Z. Zhang, Z. Kang, Q. Liao et al., Electromagnetic shielding hybrid nanogenerator for health monitoring and protection. Adv. Funct. Mater. 28(1), 1703801 (2017). https://doi.org/10.1002/adfm.201703801 CrossRefGoogle Scholar L. Persano, C. Dagdeviren, Y. Su, Y. Zhang, S. Girardo, D. Pisignano, Y. Huang, J.A. Rogers, High performance piezoelectric devices based on aligned arrays of nanofibers of poly(vinylidenefluoride-co-trifluoroethylene). Nat. Commun. 4(3), 1633 (2013). https://doi.org/10.1038/ncomms2639 CrossRefGoogle Scholar Y. Shapiro, A. Wolf, G. Kosa, Piezoelectric deflection sensor for a bi-bellows actuator. IEEE-ASME Trans. Mechatron. 18(3), 1226–1230 (2013). https://doi.org/10.1109/TMECH.2012.2218115 CrossRefGoogle Scholar P. Maurya, N. Mandal, Design and analysis of an electro-optic type pressure transmitter using bellows as primary sensor. IEEE Sens. J. 18(18), 7730–7740 (2018). 
https://doi.org/10.1109/JSEN.2018.2862921 CrossRefGoogle Scholar Y. Zhou, W. Liu, X. Huang, A. Zhang, Y. Zhang, Z.L. Wang, Theoretical study on two-dimensional MoS2 piezoelectric nanogenerators. Nano Res. 9(3), 800–807 (2016). https://doi.org/10.1007/s12274-015-0959-8 CrossRefGoogle Scholar Y. Hu, Z.L. Wang, Recent progress in piezoelectric nanogenerators as a sustainable power source in self-powered systems and active sensors. Nano Energy 14, 3–14 (2015). https://doi.org/10.1016/j.nanoen.2014.11.038 CrossRefGoogle Scholar Z.L. Wang, On maxwell's displacement current for energy and sensors: the origin of nanogenerators. Mater. Today 20(2), 74–82 (2017). https://doi.org/10.1016/j.mattod.2016.12.001 CrossRefGoogle Scholar N.R. Alluri, B. Saravanakumar, S.J. Kim, Flexible, hybrid piezoelectric film (BaTi(1−x)ZrxO3)/PVDF nanogenerator as a self-powered fluid velocity sensor. ACS Appl. Mater. Interfaces 7(18), 9831–9840 (2015). https://doi.org/10.1021/acsami.5b01760 CrossRefGoogle Scholar C. Chang, V.H. Tran, J. Wang, Y.K. Fuh, L. Lin, Direct-write piezoelectric polymeric nanogenerator with high energy conversion efficiency. Nano Lett. 10(2), 726–731 (2010). https://doi.org/10.1021/nl9040719 CrossRefGoogle Scholar M. Trchová, Z. Morávková, M. Bláha, J. Stejskal, Raman spectroscopy of polyaniline and oligoaniline thin films. Electrochim. Acta 122, 28–38 (2014). https://doi.org/10.1016/j.electacta.2013.10.133 CrossRefGoogle Scholar E.T. Kang, K.G. Neoh, K.L. Tan, Polyaniline: a polymer with many interesting intrinsic redox states. Prog. Polym. Sci. 23(2), 277–324 (1998). https://doi.org/10.1016/S0079-6700(97)00030-0 CrossRefGoogle Scholar M. Gerard, A. Chaubey, B.D. Malhotra, Application of conducting polymers to biosensors. Biosens. Bioelectron. 17(5), 345–359 (2002). https://doi.org/10.1016/S0956-5663(01)00312-8 CrossRefGoogle Scholar X. Cao, J. Ma, X. Shi, Z. Ren, Effect of TiO2 nanoparticle size on the performance of PVDF membrane. Appl. Surf. Sci. 
253(4), 2003–2010 (2006). https://doi.org/10.1016/j.apsusc.2006.03.090 CrossRefGoogle Scholar P. Martins, A.C. Lopes, S. Lanceros-Mendez, Electroactive phases of poly(vinylidene fluoride): determination, processing and applications. Prog. Polym. Sci. 39(4), 683–706 (2014). https://doi.org/10.1016/j.progpolymsci.2013.07.006 CrossRefGoogle Scholar I.H. Kadhim, H. Abu Hassan, Q.N. Abdullah, Hydrogen gas sensor based on nanocrystalline SnO2 thin film grown on bare si substrates. Nano-Micro Lett. 8(1), 20–28 (2016). https://doi.org/10.1007/s40820-015-0057-1 CrossRefGoogle Scholar R. Kumar, O. Al-Dossary, G. Kumar, A. Umar, Zinc oxide nanostructures for NO2 gas-sensor applications: a review. Nano-Micro Lett. 7(2), 97–120 (2015). https://doi.org/10.1007/s40820-014-0023-3 CrossRefGoogle Scholar T. Wang, D. Huang, Z. Yang, S. Xu, G. He et al., A review on graphene-based gas/vapor sensors with unique properties and potential applications. Nano-Micro Lett. 8(2), 95–119 (2016). https://doi.org/10.1007/s40820-015-0073-1 CrossRefGoogle Scholar M.H. Naveen, N.G. Gurudatt, Y.B. Shim, Applications of conducting polymer composites to electrochemical sensors: a review. Appl. Mater. Today 9, 419–433 (2017). https://doi.org/10.1016/j.apmt.2017.09.001 CrossRefGoogle Scholar E. Comini, C. Baratto, G. Faglia, M. Ferroni, A. Vomiero, G. Sberveglieri, Quasi-one dimensional metal oxide semiconductors: preparation, characterization and application as chemical sensors. Prog. Mater. Sci. 54(1), 1–67 (2009). https://doi.org/10.1016/j.pmatsci.2008.06.003 CrossRefGoogle Scholar S.B. Abel, R. Olejnik, C.R. Rivarola, P. Slobodian, P. Saha, D.F. Acevedo, C.A. Barbero, Resistive sensors for organic vapors based on nanostructured and chemically modified polyanilines. IEEE Sens. J. 18(16), 6510–6516 (2018). https://doi.org/10.1109/JSEN.2018.2848843 CrossRefGoogle Scholar D. Nicolas-Debarnot, F. Poncin-Epaillard, Polyaniline as a new sensitive layer for gas sensors. Anal. Chim. Acta 475(1), 1–15 (2003). 
https://doi.org/10.1016/S0003-2670(02)01229-1 CrossRefGoogle Scholar I. Fratoddi, I. Venditti, C. Cametti, M.V. Russo, Chemiresistive polyaniline-based gas sensors: a mini review. Sens. Actuators B 220, 534–548 (2015). https://doi.org/10.1016/j.snb.2015.05.107 CrossRefGoogle Scholar M.Y. Chuang, Y.T. Lin, T.W. Tung, L.Y. Chang, H.W. Zan, H.F. Meng, C.J. Lu, Y.T. Tao, Room-temperature-operated organic-based acetone gas sensor for breath analysis. Sens. Actuators B 260, 593–600 (2018). https://doi.org/10.1016/j.snb.2017.12.168 CrossRefGoogle Scholar D.W. Longcope, G.H. Fisher, A.A. Pevtsov, Flux-tube twist resulting from helical turbulence: the sigma-effect. Astrophys. J. 507(1), 417–432 (1998). https://doi.org/10.1086/306312 CrossRefGoogle Scholar P. Cavallo, D.F. Acevedo, M.C. Fuertes, G.J.A.A. Soler-Illia, C.A. Barbero, Understanding the sensing mechanism of polyaniline resistive sensors. Effect of humidity on sensing of organic volatiles. Sens. Actuators B 210, 574–580 (2015). https://doi.org/10.1016/j.snb.2015.01.029 CrossRefGoogle Scholar Y. Guo, L. Li, C. Zhao, L. Song, B. Wang, Humidity sensing properties of poly-vanadium–titanium acid combined with polyaniline grown in situ by electrochemical polymerization. Sens. Actuators B 270, 80–88 (2018). https://doi.org/10.1016/j.snb.2018.05.010 CrossRefGoogle Scholar J. Zhao, G. Wu, Y. Hu, Y. Liu, X. Tao, W. Chen, A wearable and highly sensitive CO sensor with a macroscopic polyaniline nanofiber membrane. J. Mater. Chem. A 3(48), 24333–24337 (2015). https://doi.org/10.1039/C5TA06734K CrossRefGoogle Scholar M.K. Ram, O. Yavuz, V. Lahsangah, M. Aldissi, CO gas sensing from ultrathin nano-composite conducting polymer film. Sens. Actuators B 106(2), 750–757 (2005). https://doi.org/10.1016/j.snb.2004.09.027 CrossRefGoogle Scholar Z. Wu, X. Chen, S. Zhu, Z. Zhou, Y. Yao, W. Quan, B. Liu, Room temperature methane sensor based on graphene nanosheets/polyaniline nanocomposite thin film. IEEE Sens. J. 13(2), 777–782 (2013). 
https://doi.org/10.1109/JSEN.2012.2227597 CrossRefGoogle Scholar T. Kinkeldei, C. Zysset, N. Muenzenrieder, G. Troester, An electronic nose on flexible substrates integrated into a smart textile. Sens. Actuators B 174, 81–86 (2012). https://doi.org/10.1016/j.snb.2012.08.023 CrossRefGoogle Scholar A. Choudhury, Polyaniline/silver nanocomposites: dielectric properties and ethanol vapour sensitivity. Sens. Actuators B 138(1), 318–325 (2009). https://doi.org/10.1016/j.snb.2009.01.019 CrossRefGoogle Scholar D. Xie, Y.D. Jiang, W. Pan, D. Li, Z.M. Wu, Y.R. Li, Fabrication and characterization of polyaniline-based gas sensor by ultra-thin film technology. Sens. Actuators B 81(2–3), 158–164 (2002). https://doi.org/10.1016/S0925-4005(01)00946-7 CrossRefGoogle Scholar J.L. Wojkiewicz, V.N. Bliznyuk, S. Carquigny, N. Elkamchi, N. Redon, T. Lasri, A.A. Pud, S. Reynaud, Nanostructured polyaniline-based composites for ppb range ammonia sensing. Sens. Actuators B 160(1), 1394–1403 (2011). https://doi.org/10.1016/j.snb.2011.09.084 CrossRefGoogle Scholar P. Le Maout, J.L. Wojkiewicz, N. Redon, C. Lahuec, F. Seguin et al., Polyaniline nanocomposites based sensor array for breath ammonia analysis. Portable e-nose approach to non-invasive diagnosis of chronic kidney disease. Sens. Actuators B 274, 616–626 (2018). https://doi.org/10.1016/j.snb.2018.07.178 CrossRefGoogle Scholar © The Author(s) 2018 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
Affiliations: 1. School of Physics, University of Electronic Science and Technology of China, Chengdu, People's Republic of China; 2. College of Physics and Electronics Engineering, Shanxi University, Taiyuan, People's Republic of China; 3. College of Sciences, Northeastern University, Shenyang, People's Republic of China. Fu, Y., He, H., Zhao, T. et al. Nano-Micro Lett. (2018) 10: 76. https://doi.org/10.1007/s40820-018-0228-y. Accepted 21 October 2018. First Online 12 November 2018.
Quarks in a hadron: where does the mass come from? We know that the sum of the masses of the quarks in a proton is approximately $9.4^{+1.9}_{-1.3}~\text{MeV}/c^2$, whereas the mass of a proton is $\approx938~\text{MeV}/c^2$. This extra mass is attributed to the kinetic energy of the confined quarks and the confining field of the strong force. Now, when we talk about energetically favourably bound systems, they have a total mass-energy less than the sum of the mass-energies of the constituent entities. How does a proton, a bound system of quarks with its mass-energy so much more than that of its constituent entities, remain stable? The strong force and other energetic interactions supposedly contribute this mass-energy by the mass-energy equivalence principle, but how exactly does this occur? standard-model mass-energy protons – Tamoghna Chowdhury Related: physics.stackexchange.com/q/64232/2451 and links therein. – Qmechanic♦ Apr 9, 2018 at 8:17 You say: Now, when we talk about energetically favourably bound systems, they have a total mass-energy less than the sum of the mass-energies of the constituent entities. and this is perfectly true. For example, if we consider a hydrogen atom then its mass is 13.6 eV/c² less than the mass of a proton and electron separated to infinity - 13.6 eV is the binding energy. It is generally true that if we take a bound system and separate its constituents then the total mass will increase. This applies to atoms, nuclei and even gravitationally bound systems. It applies to quarks in a baryon as well, but with a wrinkle.
For atoms, nuclei and gravitationally bound systems the potential goes to zero as the constituents are separated, so the behaviour at infinity is well defined. If the constituents of these systems are separated to be at rest an infinite distance apart, then the total mass is just the sum of the individual rest masses, so the bound state must have a mass less than the sum of the individual rest masses. As Hritik explains in his answer, for the quarks bound into a baryon by the strong force the potential does not go to zero at infinity - in fact it goes to infinity at infinity. If we could (we can't!) separate the quarks in a proton to infinity, the resulting system would have an infinite mass. So the bound state does have a total mass less than the separated state; it's just that the mass of the separated state is not equal to the sum of the masses of the individual particles. You can look at this a different way. To separate the electron and proton in a hydrogen atom we need to add energy to the system, so if the added energy is $E$ the mass goes up by $E/c^2$. As the separation goes to infinity, the energy $E$ goes to 13.6 eV. If we try to separate the quarks in a proton by a small distance, we have to put energy in and the mass also goes up by $E/c^2$, just as in any bound system. But with the strong force the energy keeps going up as we increase the separation and doesn't tend to any finite limit. John Rennie

Your explanation is more intuitive than I would have expected. Thank you. You cleared a long-standing doubt of mine. – Tamoghna Chowdhury

@TamoghnaChowdhury: it's something that puzzled me for a long time as well! – John Rennie

Isn't the strong force a short-range force? Shouldn't it drastically lose effectiveness beyond nuclear distances?

@TamoghnaChowdhury: people tend to use the term strong force to describe two different forces. The strong force is the force between two quarks, and it goes to infinity as the quarks are separated (as Hritik explains). However, the term is also used to describe the force between two baryons, e.g. the force between a proton and a neutron. This force is a kind of left-over force due to the strong force between the quarks acting inside the proton and inside the neutron. Strictly speaking this is the nuclear force, and it does go to zero at infinity.

Great answer! How did they measure the mass of quarks then? If they can't be separated, what is the meaning of their mass? – P. C. Spaniel

This happens because of a property of the strong force, called asymptotic freedom. This causes the interaction between quarks to get asymptotically weaker as the distance between them decreases. This is the reason why quarks are always found in a bound state and are not freely available in nature. The strong force confines quarks to a region within which they can move essentially freely, and so can carry a large amount of kinetic energy. Some related math: the strong force potential, in a classical approximation (obtained from studies of bound states of quarks and antiquarks, so not necessarily universal), can be represented by: $$V(r) = - \frac{4}{3} \frac{\alpha_s(r) \hbar c}{r} + kr$$ If you analyze this, the $\frac{1}{r}$ term dominates at short range, whereas the $r$ term is more significant at relatively greater distances. The $r$ term is the confinement term: the potential actually keeps increasing with distance, and hence it is favourable for quarks to remain close together. It is similar for protons and other bound states too. (At least on a relatively small time scale, because the proton is generally assumed to be unstable, as @CuriousOne mentioned in the comments.
The lower limit on a proton's half-life is believed to be around $10^{35}$ years.) This question will be an interesting read: What's inside a proton? Hritik Narayan

@TamoghnaChowdhury: It's not clear that the proton is stable. I think the majority consensus is that it is most likely not stable. – CuriousOne

I don't know enough about the standard model and beyond to give you good links to proton decay. Thermodynamically, I think, a cosmological model in which all heavy particles decay down to photons is preferable to other heat-death scenarios. Why should the end state consist of randomly distributed heavy particles, rather than of a homogeneous sea of photons? I am actually willing to give up charge conservation for that.

The net charge of the universe is 0, so if all its particles were converted to photons, charge conservation would not be violated on a large scale. However, I have no idea about the process through which this would occur.

@TamoghnaChowdhury: How do you know that the net charge is 0, or that it is even constant? Those are merely assumptions of the current model. We have to violate some symmetries to even make matter, so why not allow a violation of all?

That's true. I said that on the basis that the number of protons should equal the number of electrons in the universe.
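The Cornell-type potential quoted in the answer above is easy to evaluate numerically. A minimal sketch; the constants are illustrative textbook-scale values (and $\alpha_s$ is frozen rather than running), not fitted ones:

```python
# V(r) = -(4/3) * alpha_s * (hbar c) / r + k * r, with r in fm and V in GeV.
HBARC = 0.1973   # GeV * fm
ALPHA_S = 0.3    # treated as a constant here; in reality it runs with r
K = 0.9          # string tension, GeV/fm (illustrative value)

def cornell_potential(r_fm):
    """Quark-antiquark potential (GeV) at separation r_fm (fm)."""
    if r_fm <= 0:
        raise ValueError("separation must be positive")
    return -(4.0 / 3.0) * ALPHA_S * HBARC / r_fm + K * r_fm

# Coulomb-like term dominates at short range, linear confinement at long range:
print(cornell_potential(0.1))  # negative: attraction-dominated
print(cornell_potential(2.0))  # positive, and growing without bound with r
```

The linear term is what makes the "separated state" infinitely heavy in John Rennie's answer: the energy cost of separation grows without limit instead of saturating at a binding energy.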
Geometric and Ergodic Aspects of Group Actions, pp 21-81. Horocycle Flows on Surfaces with Infinite Genus. Omri Sarig. Part of the Infosys Science Foundation Series book series (ISFS). We study the ergodic theory of horocycle flows on hyperbolic surfaces with infinite genus. In this case, the nontrivial ergodic invariant Radon measures are all infinite. We explain the relation between these measures and the positive eigenfunctions of the Laplacian on the surface. In the special case of \(\mathbb Z^d\)-covers of compact hyperbolic surfaces, we also describe some of their ergodic properties, paying special attention to equidistribution and to generalized laws of large numbers. This set of notes constituted the basis for a series of lectures given in April 2015 as part of the program "Geometric and ergodic aspects of group actions" at the Tata Institute of Fundamental Research, Mumbai. The author would like to thank the organizers of the program and the staff of TIFR for the kind hospitality. The author acknowledges the support of ISF grants 1149/18 and 199/14. 6 Appendix 1: Busemann's Function Busemann's function. Suppose \(z,w\in \mathbb D\) and \(e^{i\theta }\in \partial \mathbb D\). Busemann's function \(b_{e^{i\theta }}(z,w)\) is the signed hyperbolic distance from \(\mathrm {Hor}_z(e^{i\theta })\) to \(\mathrm {Hor}_w(e^{i\theta })\): the solution s to the equation \(g^s[\mathrm {Hor}_z(e^{i\theta })]=\mathrm {Hor}_w(e^{i\theta })\) (Fig. 10: \(b_{e^{i\theta }}(z,w)=s\)). Theorem 25 (The basic identity for Busemann's function) For every \(\varphi \in \)Möb\((\mathbb D)\), \(b_{\varphi (e^{i\theta })}(0,\varphi (0))=-\log |\varphi '(e^{i\theta })|\). The following properties are obvious: (I) \(b_{\varphi (e^{i\alpha })}(\varphi (z_1),\varphi (z_2))=b_{e^{i\alpha }}(z_1,z_2)\) for all hyperbolic isometries \(\varphi \) (orientation reversing included).
(II) \(b_{e^{i\alpha }}(z_1,z_2)+b_{e^{i\alpha }}(z_2,z_3)=b_{e^{i\alpha }}(z_1,z_3)\). (III) \((e^{i\theta },z,w)\mapsto b_{e^{i\theta }}(z,w)\) is Borel measurable. We claim that (I), (II), and (III) determine \((e^{i\theta },z,w)\mapsto b_{e^{i\theta }}(z,w)\) up to a multiplicative constant. Suppose \(c_{e^{i\theta }}(z,w)\) satisfies (I), (II) and (III). First, \(c_{e^{i\theta }}(z,z)=0\) for all z, because of (II). Second, \(c_{e^{i\theta }}(z,w)=0\) whenever \(w\in \mathrm {Hor}_{e^{i\theta }}(z)\). To see this, let y denote the midpoint of the horocyclic arc connecting z to w, and let \(\gamma \) denote the geodesic from y to \(e^{i\theta }\). Let \(\varphi \) denote the hyperbolic reflection w.r.t. \(\gamma \); then \(\varphi (e^{i\theta })=e^{i\theta }\) and \(\varphi (z)=w\), \(\varphi (w)=z\). By (I) and (II), \(0=c_{e^{i\theta }}(z,w)+c_{e^{i\theta }}(w,z)=c_{e^{i\theta }}(z,w)+c_{\varphi (e^{i\theta })}(\varphi (w),\varphi (z))=2c_{e^{i\theta }}(z,w)\), proving that \(c_{e^{i\theta }}(z,w)=0\). Third, \(c_{e^{i\theta }}(z,w)\) is determined by the values of the function \(c_1(0,t)\) for t real. To see this use a Möbius transformation to map \(e^{i\theta },z\) to 1, 0. Let \(w^*\) denote the image of w, and let t denote the intersection of \(\mathrm {Hor}_1(w^*)\) with the real line. Then \(c_{e^{i\theta }}(z,w)=c_1(0,w^*)=c_1(0,t)+c_1(t,w^*)=c_1(0,t)\). Finally, \(c_1(0,t)=\mathrm {const.\,}\mathrm {dist}\,(0,t)\) \((t\in \mathbb R)\) because (I) implies that \(c_1(t_1,t_2)\) is a function of the hyperbolic distance between \(t_1,t_2\), (II) says that this dependence is additive, and (III) says it is Borel. Here is a construction of a function \(c_{e^{i\alpha }}(z,w)\) which satisfies (I), (II) and (III): Let \(\lambda _z\) denote the harmonic measure on \(\partial \mathbb D\) at z, defined by \(d\lambda _z(e^{i\theta })=P(e^{i\theta },z)d\theta \), where \(P(e^{i\theta },z)=\frac{1-|z|^2}{|e^{i\theta }-z|^2}\) (Poisson's kernel).
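The Poisson-kernel ratio just defined can be spot-checked numerically. The sketch below (sample points chosen arbitrarily) verifies property (I) for one Möbius map of the disc, together with the boundary value \(c_1(0,r)=\log \frac{1-r}{1+r}\):

```python
import cmath
import math

def poisson(b, z):
    """Poisson kernel P(b, z) = (1 - |z|^2) / |b - z|^2, for |b| = 1, |z| < 1."""
    return (1 - abs(z) ** 2) / abs(b - z) ** 2

def c(b, z, w):
    """Candidate cocycle c_b(z, w) = log(P(b, z) / P(b, w))."""
    return math.log(poisson(b, z) / poisson(b, w))

def mobius(a):
    """A disc automorphism z -> (z - a) / (1 - conj(a) z), |a| < 1."""
    return lambda z: (z - a) / (1 - a.conjugate() * z)

phi = mobius(0.5 - 0.2j)
b, z, w = cmath.exp(0.9j), 0.1 + 0.3j, -0.4 + 0.2j

# Property (I): invariance under Moebius transformations of the disc.
assert abs(c(phi(b), phi(z), phi(w)) - c(b, z, w)) < 1e-10

# Boundary identity c_1(0, r) = log((1 - r)/(1 + r)) for real 0 < r < 1.
r = 0.5
assert abs(c(1, 0, r) - math.log((1 - r) / (1 + r))) < 1e-10
```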
We claim that $$ c_{e^{i\theta }}(z,w):=\log \frac{d\lambda _z}{d\lambda _w}(e^{i\theta })=\log \left( \frac{P(e^{i\theta },z)}{P(e^{i\theta },w)}\right) $$ satisfies (I), (II), and (III). (III) is obvious. (II) is the chain rule for Radon–Nikodym derivatives. To see (I) we recall that Poisson's formula says that for every \(f\in C(\partial \mathbb D)\), \(F(z):=\int _{\partial \mathbb D}f d\lambda _z\) is the unique harmonic function on \(\mathbb D\) with boundary values \(f(e^{i\theta })\). For every \(f\in C(\partial \mathbb D)\) and \(\varphi \in \text {M}\ddot{\mathrm{o}}\text {b}(\mathbb D)\) \( \int f d\lambda _z\circ \varphi ^{-1}=\int f\circ \varphi d\lambda _z=F(\varphi (z))=\int _{\partial \mathbb D}f d\lambda _{\varphi (z)}, \) so \(\lambda _z\circ \varphi ^{-1}=\lambda _{\varphi (z)}\). This implies (I): $$ c_{\varphi (e^{i\theta })}(\varphi (z),\varphi (w))=\log \frac{d\lambda _z\circ \varphi ^{-1}}{d\lambda _w\circ \varphi ^{-1}}[\varphi (e^{i\theta })]=\log \left( \frac{d\lambda _z}{d\lambda _w}\circ \varphi ^{-1}\right) [\varphi (e^{i\theta })]=c_{e^{i\theta }}(z,w). $$ By the first part of the proof, \(b_{e^{i\theta }}(z,w)=\mathrm {const.\,}\log \frac{P(e^{i\theta },z)}{P(e^{i\theta },w)}\). Since \(c_1(0,r)=\log \left( \frac{P(1,0)}{P(1,r)}\right) =-\log P(1,r)=-\log \frac{1-r^2}{(1-r)^2}=\log \frac{1-r}{1+r}\), \(b_1(0,r)=\int _0^r \frac{2\,dt}{1-t^2}=\int _0^r\left( \frac{1}{1-t}+\frac{1}{1+t}\right) dt=\log \frac{1+r}{1-r}\), the constant equals \((-1)\). We obtain the identity \(b_{e^{i\theta }}(z,w)=-\log \left( \frac{d\lambda _z}{d\lambda _w}(e^{i\theta })\right) \). It follows that \(b_{\varphi (e^{i\theta })}(0,\varphi (0))=-\log \frac{d\lambda _0}{d\lambda _{\varphi (0)}}[\varphi (e^{i\theta })]=\log \frac{d\lambda _0\circ \varphi ^{-1}}{d\lambda _0}[\varphi (e^{i\theta })]\). Since \(\lambda _0\) is Lebesgue's measure, this equals \(\log |(\varphi ^{-1})'(\varphi (e^{i\theta }))|=-\log |\varphi '(e^{i\theta })|\).
\(\square \) Proof of Theorem 1 part (3): Fix \(\varphi \in \text {M}\ddot{\mathrm{o}}\text {b}(\mathbb D)\); we have to show that $$\begin{aligned} \varphi (e^{i\theta _0},s_0,t_0)=(\varphi (e^{i\theta _0}),s_0-\log |\varphi '(e^{i\theta _0})|,t_0+\text {something independent of }t_0)\qquad \qquad {(*)} \end{aligned}$$ Draw in \(\mathbb D\) \(\omega =(h^{t_0}\circ g^{s_0})[\omega (e^{i\theta _0})]\) together with \(\mathrm {Hor}(\omega )\) and \(\mathrm {Hor}(\omega (e^{i\theta _0}))\). Add to the picture the geodesic rays of \(\omega (e^{i\theta _0})\) and \(\omega \). Now draw the image of these figures by \(\varphi \) (Fig. 11). Proof of \((*)\) The kan-coordinates of \(\varphi (\omega )\) are \( (\varphi (e^{i\theta _0}),s_0+b_{\varphi (e^{i\theta _0})}(0,\varphi (0)),t_0+\delta _0), \) where \(\delta _0\) is some function of \(0,\varphi (0),s_0, e^{i\theta _0}\). (\(*\)) follows from the basic identity for the Busemann function. \(\square \) Proof of Theorem 24. The theorem asserts that if \(\varphi \in \varGamma \), \(e^{i\theta }\in \partial \mathbb D\), and \(|\varphi (e^{i\theta })-e^{i\theta }|<1\), then \( |R(\varphi ,e^{i\theta })-B(\gamma ,\widetilde{\gamma })|\le 4|e^{i\theta }-\varphi (e^{i\theta })|^2, \) where \(R(\varphi ,e^{i\theta })=-\log |\varphi '(e^{i\theta })|\) (the Radon–Nikodym cocycle); \(\gamma :=\) the projection to \(\varGamma \setminus \mathbb D\) of \(\gamma [-e^{i\theta },e^{i\theta }]\), the \(\mathbb D\)-geodesic from \(-e^{i\theta }\) to \(e^{i\theta }\); \(\widetilde{\gamma }:=\) the projection to \(\varGamma \setminus \mathbb D\) of \(\gamma [-e^{i\theta },\varphi (e^{i\theta })]\), the geodesic from \(-e^{i\theta }\) to \(\varphi (e^{i\theta })\); \(B(\gamma ,\widetilde{\gamma })=\mathrm {dist}\,_{\widetilde{\gamma }}(\omega _2,\omega _2^*)-\mathrm {dist}\,_\gamma (\omega _1,\omega _1^*)\) for some (any) \(\omega _1,\omega _1^*\in \gamma \), \(\omega _2,\omega _2^*\in \widetilde{\gamma }\), s.t.
\( \mathrm {dist}\,(g^{-s}\omega _1,g^{-s}\omega _2)\xrightarrow [s\rightarrow \infty ]{}0\text { and } \mathrm {dist}\,(g^{s}\omega _1^*,g^s\omega _2^*)\xrightarrow [s\rightarrow \infty ]{}0. \) We take \(\omega _1=\omega _1^*=\) vector based at 0 and pointing at \(e^{i\theta }\), \(\omega _2:=\) intersection of \(\gamma [-e^{i\theta },\varphi (e^{i\theta })]\) and \(\mathrm {Hor}_{-e^{i\theta }}(0)\) and \(\omega _2^*:=\) intersection of \(\gamma [-e^{i\theta },\varphi (e^{i\theta })]\) and \(\varphi [\mathrm {Hor}_{e^{i\theta }}(0)]=\mathrm {Hor}_{\varphi (e^{i\theta })}(\varphi (0))\) (Fig. 12). Add to the picture \(\omega _3:=\) intersection of \(\gamma [-e^{i\theta },\varphi (e^{i\theta })]\) and \(\mathrm {Hor}_{\varphi (e^{i\theta })}(0)\). Proof of the approximation theorem: Clearly \( B(\gamma ,\widetilde{\gamma })=\mathrm {dist}\,_{\gamma [-e^{i\theta },\varphi (e^{i\theta })]}(\omega _2,\omega _2^*)=\mathrm {dist}\,(\omega _3,\omega _2^*) -\mathrm {dist}\,(\omega _3,\omega _2) \). The first summand is the signed distance from \(\mathrm {Hor}_{\varphi (e^{i\theta })}(0)\) to \(\mathrm {Hor}_{\varphi (e^{i\theta })}(\varphi (0))\). This is \(b_{\varphi (e^{i\theta })}(0,\varphi (0))=-\log |\varphi '(e^{i\theta })|=R(\varphi ,e^{i\theta })\). So $$|R(\varphi ,e^{i\theta })-B(\gamma ,\widetilde{\gamma })|\le \mathrm {dist}\,(\omega _3,\omega _2)=:\delta .$$ To estimate \(\delta \), let \(\vartheta :\mathbb D\rightarrow \mathbb H\) be the Möbius map which sends \(e^{i\theta }\mapsto 0\), \(-e^{i\theta }\mapsto \infty \), and \(0\mapsto i\). This map sends \(\mathrm {Hor}_{-e^{i\theta }}(0)\) to the horizontal line \(y=1\), and \(\mathrm {Hor}_{\varphi (e^{i\theta })}(0)\) to a circle passing through i which is tangent to the real axis at \(\vartheta (\varphi (e^{i\theta }))\). So \(\delta \) is the hyperbolic distance between the peak of this circle and \(y=1\) (Fig. 13). It is clear from Fig.
13 that \(\delta =O(|\vartheta (e^{i\theta })-\vartheta [\varphi (e^{i\theta })]|^2)\). A precise calculation using an explicit formula for \(\vartheta \) shows that \(|\delta |\le 4|e^{i\theta }-\varphi (e^{i\theta })|^2\). \(\square \) Notes and references. The proof of Theorem 25 is taken from [28]. The proof of Theorem 24 is taken from [50]. 7 Appendix 2: The Cocycle Reduction Theorem 7.1 Preliminaries on Countable Equivalence Relations Suppose \((\varOmega ,\mathscr {F})\) is a standard Borel space (a complete and separable metric space equipped with the \(\sigma \)-algebra of Borel sets). Every measurable group action on \(\varOmega \) generates an equivalence relation $$x\sim y\Leftrightarrow x,y\text { are in the same orbit.}$$ This is called the orbit relation of the action. The orbit relation keeps information on the orbits as sets, but forgets the way these sets are parametrized by the group. The language of equivalence relations, which we review below, is designed to handle dynamical properties such as invariance or ergodicity, which only depend on the structure of orbits as unparameterized sets. We will comment at the end of the section on why this is useful. Countable Borel equivalence relations: These are subsets \(\mathfrak G\subset \varOmega \times \varOmega \) such that \(x\sim y\Leftrightarrow (x,y)\in \mathfrak G\) is a reflexive, symmetric, and transitive relation; the equivalence classes of \(\sim \) are all finite or countable; \(\mathfrak G\) is in the product \(\sigma \)-algebra \(\mathscr F\otimes \mathscr F\). For example, suppose G is a countable group of bi-measurable maps \(g:\varOmega \rightarrow \varOmega \). The orbit relation of G is the countable Borel equivalence relation $$ \mathfrak {orb}(G):=\{(x,g(x)):x\in \varOmega ,\ g\in G\}. $$ (Feldman and Moore) Every countable Borel equivalence relation on a standard measurable space \((\varOmega ,\mathscr {F})\) is the orbit relation of some countable group action on \(\varOmega \).
Corollary 2 Suppose \(\mathfrak G\) is a countable Borel equivalence relation on a standard measurable space \((\varOmega ,\mathscr {F})\). If \(B\in \mathscr F\), then \(\mathrm {Sat}(B):=\{x\in \varOmega :\exists y\in B\text { s.t. }(x,y)\in \mathfrak G\}\) is measurable. If \(P\in \mathscr F\otimes \mathscr F\), then \(\{x\in \varOmega : (x,y)\in \mathfrak G\Rightarrow (x,y)\in P\}\) is measurable. Use the Feldman–Moore Theorem to realize \(\mathfrak G\) as an orbit relation of a countable group G. Then \(\mathrm {Sat}(B)=\bigcup _{g\in G}g(B)\), which is measurable, and $$ \{x\in \varOmega : (x,y)\in \mathfrak G\Rightarrow (x,y)\in P\}=\bigcap _{g\in G}\{x\in \varOmega : (x,g(x))\in P\}. $$ This set is measurable because G is countable, and \(x\mapsto (x,g(x))\) is measurable. \(\square \) "Almost everywhere in \(\mathfrak G\)": Let P(x, y) be a measurable property of pairs \((x,y)\in X\times X\), i.e., \(P\in \mathscr F\otimes \mathscr F\). Suppose \(\mu \) is a measure on X. We say that P holds \(\mu \)-a.e. in \(\mathfrak G\), if \(\{x\in X: (x,y)\in \mathfrak G\Rightarrow (x,y)\in P\}\) has full measure. The previous corollary guarantees measurability. Holonomies, invariant functions, invariant measures: Suppose \(\mathfrak G\) is a countable Borel equivalence relation. A \(\mathfrak G\)-holonomy is a bi-measurable bijection \(\kappa :A\rightarrow B\) where \(\mathrm {dom}(\kappa ):=A\), \(\mathrm {im}(\kappa ):=B\) are measurable sets and \((x,\kappa (x))\in \mathfrak G\) for all \(x\in \mathrm {dom}(\kappa )\). A function \(f:\varOmega \rightarrow \mathbb R\) is called \(\mathfrak G\)-invariant, if \(f\circ \kappa |_{\mathrm {dom}(\kappa )}=f|_{\mathrm {dom}(\kappa )}\) for all \(\mathfrak G\)-holonomies \(\kappa \). Equivalently, \(f(x)=f(y)\) whenever \((x,y)\in \mathfrak G\). A (possibly infinite) measure \(\mu \) on \(\varOmega \) is called \(\mathfrak G\)-invariant if \(\mu \circ \kappa |_{\mathrm {dom}(\kappa )}=\mu |_{\mathrm {dom}(\kappa )}\) for all \(\mathfrak G\)-holonomies \(\kappa \).
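Since everything in sight is countable, these notions can be illustrated on a toy finite example. The sketch below (a made-up permutation group, purely illustrative) builds \(\mathfrak {orb}(G)\) for the cyclic group generated by the permutation (0 1 2)(3 4 5) and computes saturations as in Corollary 2:

```python
# Toy (finite) illustration of an orbit relation and of saturation:
# G is the cyclic group generated by the permutation p = (0 1 2)(3 4 5).
OMEGA = set(range(6))
p = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3}

def compose(f, g):
    return {x: f[g[x]] for x in OMEGA}

identity = {x: x for x in OMEGA}
G = [identity, p, compose(p, p)]  # the cyclic group <p>, of order 3

# orb(G) = {(x, g(x)) : x in Omega, g in G}
orbit_relation = {(x, g[x]) for x in OMEGA for g in G}

def saturation(B):
    """Sat(B) = {x : (x, y) in orb(G) for some y in B} = union of g(B) over G."""
    return {x for x in OMEGA if any((x, y) in orbit_relation for y in B)}

assert saturation({0}) == {0, 1, 2}   # the orbit of 0
assert saturation({0, 3}) == OMEGA    # {0, 3} meets both orbits
```

A function is \(\mathfrak G\)-invariant exactly when it is constant on each orbit, which for this toy relation means constant on {0, 1, 2} and on {3, 4, 5}.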
A \(\mathfrak G\)-invariant measure is called ergodic, if every measurable \(\mathfrak G\)-invariant function is equal a.e. to a constant function. Lemma 8 Suppose G is a countable group acting measurably on \((\varOmega ,\mathscr {F})\), and let \(\mu \) be a (possibly infinite) measure on \((\varOmega ,\mathscr {F})\). Let \(\mathfrak G:=\mathfrak {orb}(G)\), then \(\mu \) is G-invariant iff \(\mu \) is \(\mathfrak G\)-invariant. \(\mu \) is G-ergodic iff \(\mu \) is \(\mathfrak G\)-ergodic. The proof is easy and we leave it to the reader. Induced equivalence relations: Suppose B is a measurable set with positive measure. The induced relation on B is $$ \mathfrak G[B]:=\mathfrak G\cap (B\times B)=\{(x,y)\in B\times B: (x,y)\in \mathfrak G\}. $$ Suppose \(\mu \) is a measure on \(\varOmega \) and \(\mu (B)>0\). If \(\mu \) is \(\mathfrak G\)-invariant, then \(\mu |_B\) is \(\mathfrak G[B]\)-invariant. If \(\mu \) is \(\mathfrak G\)-ergodic, then \(\mu |_B\) is \(\mathfrak G[B]\)-ergodic. The first statement is trivial, so we prove only the second. Suppose \(f:B\rightarrow \mathbb R\) is \(\mathfrak G[B]\)-invariant. The saturation of B is a \(\mathfrak G\)-invariant measurable set of positive measure (because it contains B). By ergodicity, \(\mathrm {Sat}(B)\) has full measure. Define $$ F(x):={\left\{ \begin{array}{ll} f(y) &{} \text { for some (any) }y\in B\text { s.t. }(x,y)\in \mathfrak G\text {, provided }x\in \mathrm {Sat}(B), \\ -666 &{} \text { whenever }x\not \in \mathrm {Sat}(B). \end{array}\right. } $$ The definition is proper because f is \(\mathfrak G[B]\)-invariant. Clearly, F is \(\mathfrak G\)-invariant. By \(\mathfrak G\)-ergodicity, F is equal a.e. on \(\varOmega \) to a constant function. So \(f=F|_B\) is equal a.e. on B to a constant function. \(\square \) Cocycles and skew-product extensions: Suppose \(\mathfrak G\) is a countable Borel equivalence relation on a standard Borel space \((\varOmega ,\mathscr {F})\).
A \(\mathfrak G\)-cocycle is a measurable map \(\varPhi :\mathfrak G\rightarrow \mathbb R\) s.t. \( \varPhi (x,y)+\varPhi (y,z)=\varPhi (x,z)\) for all \((x,y),(y,z)\in \mathfrak G\). Necessarily \(\varPhi (x,x)=0\) and \(\varPhi (x,y)=-\varPhi (y,x)\). The \(\varPhi \)-extension of \(\mathfrak G\) is the equivalence relation on \(\varOmega \times \mathbb R\) $$ \mathfrak G_\varPhi :=\{((x,t),(y,s)): (x,y)\in \mathfrak G,\ s=t+\varPhi (x,y)\}. $$ Every \(\mathfrak G\)-holonomy \(\kappa :A\rightarrow B\) generates a \(\mathfrak G_\varPhi \)-holonomy \(\kappa _\varPhi :A\times \mathbb R\rightarrow B\times \mathbb R\) given by \( \kappa _\varPhi (x,t)=(\kappa (x),t+\varPhi (x,\kappa (x))). \) Example: Radon–Nikodym extensions. Suppose \(\varGamma \subset \text {M}\ddot{\mathrm{o}}\text {b}(\mathbb D)\) is a countable and discrete group. Let \(\mathrm {Fix}(\varGamma ):=\{z\in \partial \mathbb D: \exists g\in \varGamma \setminus \{id\}\text { s.t. }g(z)=z\}\), and set $$ \varOmega :=\partial \mathbb D\setminus \mathrm {Fix}(\varGamma ), $$ together with its Borel subsets. This is a standard Borel space. Let \(\mathfrak G:=\mathfrak {orb}(\varGamma )\). If \((x,y)\in \mathfrak G\) then there exists a unique \(g\in \varGamma \) such that \(y=g(x)\) (otherwise x is a fixed point of a nontrivial element of \(\varGamma \)). Let $$ \varPhi (x,y):=-\log |g'(x)|\text { for the unique }g\in \varGamma \text { such that }y=g(x). $$ This is a \(\mathfrak G\)-cocycle, because of the chain rule. The resulting extension \(\mathfrak G_\varPhi \) is the Radon–Nikodym extension of \(\mathfrak G\). Lemma 10 Suppose \(\mu \) is a \(\mathfrak G_\varPhi \)-ergodic invariant measure on \(X\times \mathbb R\). Then for every measurable \(A,B\subseteq X\) and compact \(K_1,K_2\subseteq \mathbb R\) such that \(\mu (A\times K_1), \mu (B\times K_2)>0\), one can find a \(\mathfrak G\)-holonomy \(\kappa \) such that \(\mu [\kappa _\varPhi (A\times K_1)\cap (B\times K_2)]>0\). By the Feldman–Moore Theorem, \(\mathfrak G\) is the orbit relation of a countable group G.
Every \(g\in G\) determines a \(\mathfrak G_\varPhi \)-holonomy with domain \(X\times \mathbb R\) via \(g_\varPhi (x,s)=(g(x),s+\varPhi (x,g(x)))\). The set \(\bigcup _{g\in G}g_\varPhi (A\times K_1)\) is a measurable \(\mathfrak G_\varPhi \)-invariant set of positive measure, whence a set of full measure. So for some \(g\in G\), \(\mu [g_\varPhi (A\times K_1)\cap (B\times K_2)]>0\). \(\square \) Why do we need all this general nonsense? The Feldman–Moore Theorem says that any countable equivalence relation is the orbit relation of some measurable action of a countable group. The independent-minded reader may wonder what is the point of working in this more abstract setup of equivalence relations, when it is not really more general. There are two main reasons: The language of equivalence relations is convenient in scenarios when it is easier to decide when two points belong to the same orbit than it is to find the parametrization of the orbit and calculate the actual group element which maps one to the other. This is the case for horocycle flows: There is a simple geometric criterion for deciding when two unit tangent vectors belong to the same horocycle—their geodesic rays are forward asymptotic. But to calculate the horocyclic time it takes to move from one to the other is much more subtle. Induction: It is difficult to construct the induced action of a group on a subset, especially in cases when the individual elements of the group are not conservative (as is the case for hyperbolic or parabolic Möbius transformations). But as we saw above, it is very easy to induce equivalence relations on subsets. Of course, by Feldman–Moore, the induced orbit equivalence relation is the orbit relation of some other countable group action—but constructing that group explicitly is not easy. We will use the operation of inducing repeatedly in the proof of the cocycle reduction theorem. This is the reason we need to use countable equivalence relations.
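The chain-rule computation behind the Radon–Nikodym cocycle of the example above can be checked numerically. A small sketch with two arbitrary Möbius maps of the disc (parameters chosen at random), verifying \(\varPhi (x,y)+\varPhi (y,z)=\varPhi (x,z)\) along an orbit segment:

```python
import cmath
import math

def mobius(a):
    """Disc automorphism z -> (z - a)/(1 - conj(a) z) and its derivative."""
    phi = lambda z: (z - a) / (1 - a.conjugate() * z)
    dphi = lambda z: (1 - abs(a) ** 2) / (1 - a.conjugate() * z) ** 2
    return phi, dphi

g, dg = mobius(0.3 + 0.1j)
h, dh = mobius(-0.2 + 0.4j)
x = cmath.exp(0.7j)                        # a boundary point e^{i theta}

# Phi(x, phi(x)) = -log|phi'(x)| for the map carrying x forward.
phi_xy = -math.log(abs(dh(x)))             # x -> y = h(x)
phi_yz = -math.log(abs(dg(h(x))))          # y -> z = g(h(x))
phi_xz = -math.log(abs(dg(h(x)) * dh(x)))  # chain rule: (g o h)'(x) = g'(h(x)) h'(x)

# cocycle identity Phi(x, y) + Phi(y, z) = Phi(x, z)
assert abs((phi_xy + phi_yz) - phi_xz) < 1e-10
```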
7.2 The Cocycle Reduction Theorem Let \(\mathfrak G\) be a countable Borel equivalence relation on a standard Borel space \((X,\mathscr {B})\). Let \(\varPhi :\mathfrak G\rightarrow \mathbb R\) be a measurable cocycle, and suppose \(\mu \) is a (possibly infinite) \(\mathfrak G_\varPhi \)-ergodic invariant measure on \(X\times \mathbb R\). We assume that \(\mu \) is locally finite: \(\mu (X\times K)<\infty \) for all compact sets \(K\subset \mathbb R\). A coboundary is a cocycle of the form \(\partial u(x,y):=u(y)-u(x)\). Two cocycles which differ by a coboundary are called cohomologous. The a.e. range of a cocycle \(\varPhi \) is the smallest closed subgroup \(H\) of \(\mathbb R\) such that \(\varPhi (x,y)\in H\) \(\mu \)-a.e. in \(\mathfrak G_\varPhi \). Sometimes one can reduce the range of a cocycle \(\varPhi \) by subtracting from it a coboundary. For example, if \(\varPhi \) is a \(\mathbb Z\)-valued \(\mathfrak G\)-cocycle, but u(x) is real-valued, then \(\varPhi +\partial u\) will be an \(\mathbb R\)-valued cocycle. If we subtract \(\partial u\), then we're back to a \(\mathbb Z\)-valued cocycle. How much can we reduce the range by subtracting a coboundary? The cocycle reduction theorem says that the best we can do is $$ H_\mu :=\{s\in \mathbb R: \mu \circ g^s\sim \mu \}\overset{!}{=}\{s\in \mathbb R:\mu \circ g^s\not \perp \mu \}. $$
(Cocycle reduction theorem) If \(\mu \) is a locally finite \(\mathfrak G_\varPhi \)-ergodic and invariant measure on \(X\times \mathbb R\), then there exists a Borel function \(u:X\rightarrow \mathbb R\) s.t.: (1) the set \(\{(x,t):t\in u(x)+H_\mu \}\) has full \(\mu \)-measure; (2) \(\varPhi (x,y)+u(x)-u(y)\in H_\mu \) \(\mu \)–a.e. in \(\mathfrak G_\varPhi \); (3) \(H_\mu \) is contained in any closed subgroup of \(\mathbb R\) with property (1) or with property (2). So \(H_\mu \) is the minimal \(\mu \)–a.e. range of the cocycles which are \(\mu \)-a.e. cohomologous to \(\varPhi \). Caution! The reader should note the subtlety in the quantifier in part (2). The measure \(\mu \) is a measure on \(X\times \mathbb R\), not X, and it is not assumed a priori to be a product measure. Therefore, although the \(\mathbb R\)-coordinates of (x, t), (y, s) are not mentioned explicitly, they do matter—because of their influence on the support of \(\mu \). Think of the case when \(\mu \) is carried by the graph of a function. The third part of the cocycle reduction theorem is easy: Suppose \(\mu \) is a locally finite \(\mathfrak G_\varPhi \)-ergodic invariant measure. (1) \(H_\mu \) is a closed subgroup of \(\mathbb R\), so \(H_\mu =\{0\},c\mathbb Z\) or \(\mathbb R\). (2) If \(u:X\rightarrow \mathbb R\) is measurable and H is a closed subgroup of \(\mathbb R\) s.t. \(\{(x,t):t\in u(x)+H\}\) has full measure, then \(H\supseteq H_\mu \). (3) If \(u:X\rightarrow \mathbb R\) is measurable and H is a closed subgroup of \(\mathbb R\) s.t. \(\varPhi +\partial u\in H\) \(\mu \)–a.e. in \(\mathfrak G_\varPhi \), then \(H\supseteq H_\mu \). To see the first part, note that there is no loss of generality in assuming that X is a compact metric space, because by Kuratowski's theorem all standard Borel spaces are measurably isomorphic to such spaces. Now proceed as in the proof of Proposition 1 in Sect. 5.2.
The second part is done by checking that the support of \(\mu \) is invariant under \(g^s\) for all \(s\in H_\mu \). The third part is done by observing that if \(\varPhi +\partial u\in H\) a.e. in \(\mathfrak G_\varPhi \), then the function \(F:X\times \mathbb R\rightarrow \mathbb R/H\), \(F(x,t):=t-u(x)+H\) is \(\mathfrak G_\varPhi \)-invariant, therefore \(\mu \)-a.e. constant. So there exists \(c\in \mathbb R\) s.t. \(\{(x,t):t-u(x)\in c+H\}\) has full measure. Arguing as in part 2, we find that \(H\supseteq H_\mu \). \(\square \) So \(H_\mu \) is contained in the a.e.-range of every cocycle which is cohomologous to \(\varPhi \). It remains to construct the coboundary which reduces the range to \(H_\mu \). We sketch the proof of the cocycle reduction theorem below. For complete details, see [49]. 7.3 The Proof in Case There Are No Square Holes A square hole is a set \(B\times [a,b]\) where \(B\in \mathscr {B}\), \(\mu (B\times [a,b])=0\), and \(\mu (B\times \mathbb R)\ne 0\). Under the assumptions of the cocycle reduction theorem, if \(\mu \) has no square holes, then \(\mu \circ g^s\sim \mu \) for all \(s\in \mathbb R\). (Here \(g^s(x,\xi )=(x,\xi +s)\).) All standard Borel spaces are isomorphic to compact metric spaces, so there is no loss of generality in assuming that X is a compact metric space equipped with a metric d. Assume by way of contradiction that \(\exists a\in \mathbb R\) s.t. \(\mu \circ g^a\not \sim \mu \). Since \(g^s\) commutes with \(\kappa _\varPhi \) for every \(\mathfrak G\)-holonomy \(\kappa \), \(\mu \circ g^a\) is \(\mathfrak G_\varPhi \)-ergodic and invariant. Two ergodic measures are either equivalent or they are mutually singular (exercise), so \(\mu \circ g^a\perp \mu \). Similarly, \(\mu \circ g^{-a}\perp \mu \), and \( \mu \perp \overline{\mu }:=\mu \circ g^a+\mu \circ g^{-a}. \) Choose \(f:X\times \mathbb R\rightarrow [0,\infty )\) continuous with compact support s.t.
$$ \int f d\mu =1\ ,\ \int f d\overline{\mu }<\frac{1}{4}\ , \ \mu [f\ne 0]<\infty . $$ Since f has compact support, f is uniformly continuous. Choose \(\delta >0\) so that $$\begin{aligned} \left. \begin{array}{l} d(x,y)<\delta \\ |s-t|<\delta \end{array} \right\} \Longrightarrow |f(x,t)-f(y,s)|<\frac{1}{4\mu [f\ne 0]}.\qquad \qquad {(10)} \end{aligned}$$ We will use the assumption that there are no square holes to find a \(\mathfrak G\)-holonomy \(\kappa :A\rightarrow B\) with the following properties: (a) for all \(x\in A\), \(d(x,\kappa (x))<\delta \); (b) for all \(x\in A\), \(\min \{|\varPhi (x,\kappa (x))-a|,|\varPhi (x,\kappa (x))+a|\}<\delta \); (c) \(A\times \mathbb R\) has full measure. Construction of \(\kappa \): Divide X into a finite pairwise disjoint collection of sets of diameter less than \(\delta \). We will construct \(\kappa \) on each cell U separately in such a way that \(\kappa (U)\subset U\). Then we will glue the pieces into one holonomy, noting that bijectivity is not destroyed because the partition elements are disjoint. To get (c), we only need to worry about partition sets U such that \(\mu (U\times \mathbb R)\ne 0\). Fix some partition set U such that \(\mu (U\times \mathbb R)\ne 0\). Let \(B(t,r):=(t-r,t+r)\). Since there are no square holes, $$ \mu (U\times B(0,\delta /2))\ne 0\text { and } \mu (U\times B(a,\delta /2))\ne 0. $$ Since \(\mu \) is \(\mathfrak G_\varPhi \)-ergodic, we can use Lemma 10 to construct a \(\mathfrak G\)-holonomy \(\kappa \) such that \(\mu [\kappa _\varPhi (U\times B(0,\delta /2))\cap (U\times B(a,\delta /2))]>0\). Let \(A_1:=\mathrm {dom}(\kappa )\cap \kappa ^{-1}(U)\) and \(B_1:=\kappa (A_1)\), then \(\mu (A_1\times \mathbb R)>0\), \(\mu (B_1\times \mathbb R)>0\) and for all \(x\in A_1\), $$ x,\kappa (x)\in U\text { and } \exists |t|<\tfrac{\delta }{2}\text { s.t. }t+\varPhi (x,\kappa (x))\in B\left( a,\tfrac{\delta }{2}\right) . $$ So for all \(x\in A_1\), \(d(x,\kappa (x))<\delta \) and \(|\varPhi (x,\kappa (x))-a|<\delta \).
If \(A_1\times \mathbb R\) has full measure in \(U\times \mathbb R\) we are done and can continue to another partition element. If \(B_1\times \mathbb R\) has full measure in \(U\times \mathbb R\) then we are also done, because we can use \(\kappa ^{-1}\). If \(A_1\times \mathbb R\) and \(B_1\times \mathbb R\) are of positive but non-full measure in \(U\times \mathbb R\), then we let \(\kappa _1:=\kappa \) and construct an extension of \(\kappa _1\) to a bigger domain inside U as follows. Since there are no square holes, \(\mu [(U\setminus A_1)\times B(0,\frac{\delta }{2})],\mu [(U\setminus B_1)\times B(a,\frac{\delta }{2})]\ne 0\). By Lemma 10, there is a \(\mathfrak G\)-holonomy \(\kappa '\) and sets \(A_1'\subset U\setminus A_1\), \(B_1'\subset U\setminus B_1\) such that \(\mu (A_1'\times \mathbb R),\mu (B_1'\times \mathbb R)\ne 0\) and $$ \mu \left[ \kappa '_\varPhi (A_1'\times B(0,\tfrac{\delta }{2}))\cap (B_1'\times B(a,\tfrac{\delta }{2}))\right] >0. $$ As before, \(d(x,\kappa '(x))<\delta \) and \(|\varPhi (x,\kappa '(x))-a|<\delta \) for \(x\in A_1'\). Since \(\kappa ,\kappa '\) have disjoint domains and disjoint images, \(\kappa _2:=\kappa _1\cup \kappa '\) is a well-defined holonomy from a subset of U to U. It is now a standard matter to proceed by the "method of exhaustion" to show that there exists a holonomy \(\kappa _\infty \) with properties (a),(b) and such that one of \(\mathrm {dom}(\kappa _\infty )\times \mathbb R\), \(\mathrm {im}(\kappa _\infty )\times \mathbb R\) has full measure in \(U\times \mathbb R\). See [49] for details. In the first case set \(\kappa |_U:=\kappa _\infty \). In the second case set \(\kappa |_U:=\kappa _\infty ^{-1}\). Now that we are done defining \(\kappa \) a.e. on U, we move to the next partition element. After finitely many steps, we are done. Using the holonomy \(\kappa \) to prove the lemma: Let \(\kappa _\varPhi (x,t):=(\kappa (x),t+\varPhi (x,\kappa (x)))\).
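The lift \(\kappa _\varPhi (x,t)=(\kappa (x),t+\varPhi (x,\kappa (x)))\) just defined behaves well under composition: when \(\varPhi \) satisfies the cocycle identity \(\varPhi (x,z)=\varPhi (x,y)+\varPhi (y,z)\), lifting commutes with composing holonomies, and for a coboundary \(\varPhi (x,y)=u(y)-u(x)\) the quantity \(t-u(x)\) is preserved by every lift (this is the invariant function \(F(x,t)=t-u(x)\) from the reduction theorem). A minimal Python sketch on a toy three-point orbit; all names are ours:

```python
# Lift a holonomy kappa to the skew product:
#   kappa_Phi(x, t) = (kappa(x), t + Phi(x, kappa(x))).

def lift(kappa, Phi):
    return lambda x, t: (kappa(x), t + Phi(x, kappa(x)))

# toy orbit {0, 1, 2} with a coboundary cocycle Phi(x, y) = u(y) - u(x),
# for which the cocycle identity holds automatically
u = {0: 0.0, 1: 2.5, 2: -1.0}
Phi = lambda x, y: u[y] - u[x]

kappa = lambda x: (x + 1) % 3
kappa_sq = lambda x: (x + 2) % 3      # = kappa composed with itself

lhs = lift(kappa_sq, Phi)(0, 7.0)     # lift of the composition
x1, t1 = lift(kappa, Phi)(0, 7.0)
rhs = lift(kappa, Phi)(x1, t1)        # composition of the lifts
assert lhs == rhs
# for a coboundary, t - u(x) is preserved by every lifted holonomy:
assert lhs[1] - u[lhs[0]] == 7.0 - u[0]
```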
\(\kappa _\varPhi \) preserves \(\mu \), because \(\kappa _\varPhi \) is a \(\mathfrak G_\varPhi \)-holonomy; \(\min \{|f\circ \kappa _\varPhi ^{-1}-f\circ g^a|,|f\circ \kappa _\varPhi ^{-1}-f\circ g^{-a}|\}<\frac{1}{4\mu [f\ne 0]}\), because of (10). So \(1=\int f d\mu =\int f d\mu \circ \kappa _\varPhi =\int _{\kappa _\varPhi [f\ne 0]}f\circ \kappa _\varPhi ^{-1}\, d\mu \le \int _{\kappa _\varPhi [f\ne 0]}(f\circ g^a+f\circ g^{-a})d\mu +\int _{\kappa _\varPhi [f\ne 0]}\frac{2}{4\mu [f\ne 0]}d\mu \le \int f d\overline{\mu }+\frac{1}{2}<\frac{3}{4}\). This contradiction shows that there can be no \(a\in \mathbb R\) s.t. \(\mu \circ g^a\not \sim \mu \). \(\square \) Proof of the cocycle reduction theorem when there are no square holes: The lemma shows that if there are no square holes, then \(H_\mu =\mathbb R\). In this case, the cocycle reduction theorem holds with \(u\equiv 0\).

7.4 The Proof in Case There Is a Square Hole

The proof proceeds by determining the support of \(\mu \) locally, and then globally: Step 1: There is a window \(W:=A\times [\alpha ,\beta ]\) with positive \(\mu \)-measure such that \( A\times [\alpha ,\beta ]=\{(x,u_0(x)):x\in A\}\mod \mu \) with \(u_0:A\rightarrow [\alpha ,\beta ]\) measurable. Step 2: \(A\times \mathbb R=\{(x,t): t\in u_0(x)+H_\mu \}\mod \mu \). Step 3: \(X\times \mathbb R=\{(x,t): t\in u(x)+H_\mu \}\mod \mu \) for \(u:X\rightarrow \mathbb R\) measurable such that \(u|_A=u_0\). The main step is step 1; the other two steps follow from ergodicity and invariance. We will make repeated use of the following fact from measure theory: There exists a probability measure \(\nu \) on X and Radon measures \(\mu _x\) on \(\mathbb R\) such that for every nonnegative measurable and \(\mu \)-integrable function \(f:X\times \mathbb R\rightarrow [0,\infty )\), $$ \int f d\mu =\int _X \left( \int _{\{x\}\times \mathbb R} f(x,t) d\mu _x\right) d\nu (x). $$ For \(\nu \)–a.e.
\(x\in X\), for every \(\mathfrak G\)-holonomy \(\kappa \) with \(x\in \mathrm {dom}(\kappa )\), \(\mu _{\kappa (x)}\circ \kappa _\varPhi =\mu _x\). Sketch of proof (see [1, Chap. 1], [22, Chap. 2], [53, Corollary 6.9]). Fix \(\varphi :\mathbb R\rightarrow (0,1)\) such that \(\int \varphi (t)d\mu (x,t)=1\) (such a function exists by local finiteness). Then \(\varphi d\mu \) is a probability measure. Since \(X\times \mathbb R\) is a standard probability space, we have a fiber decomposition of \(\varphi \mu \) by general results in measure theory. Multiplying by \(1/\varphi \) we obtain a fiber decomposition for \(\mu \). Notice that \(\nu (E)\equiv \int _{E\times \mathbb R} \varphi (t)d\mu (x,t)\). Any two fiber decompositions of \(\varphi \mu \) agree on a set of full measure, because \(\int f\varphi d\mu _x\) is a version of the conditional expectation of f on \(\mathscr {B}\otimes \{\varnothing ,\mathbb R\}\), and \(L^1(X\times \mathbb R)\) is separable. Let G be a countable group of invertible transformations of X such that \(\mathfrak {orb}(G)=\mathfrak G\) (see the Feldman–Moore Theorem). Comparing the fiber decomposition of \(\mu \) to that of \(\mu \circ \kappa _\varPhi \) for \(\kappa \in G\), we find that for a.e. x, \(\mu _{\kappa (x)}\circ \kappa _\varPhi =\mu _x\) for all \(\kappa \in G\). Since G generates \(\mathfrak G\), this is the case for every \(\mathfrak G\)-holonomy \(\kappa \) s.t. \(\mathrm {dom}(\kappa )\ni x\). \(\square \) Step 1: If \(\mu \) has a square hole, then there is a set of positive measure \(W:=A\times [\alpha ,\beta ]\) and a measurable function \(u_0:A\rightarrow [\alpha ,\beta ]\) such that for all \(((x,\xi ),(y,\eta ))\in \mathfrak G_\varPhi [W]\equiv \mathfrak G_\varPhi \cap W^2\), \(\varPhi (x,y)=u_0(y)-u_0(x)\); \(W=\{(x,u_0(x)):x\in A\}\mod \mu \). Let \(B\times [a,b]\) be a square hole: \(\mu (B\times [a,b])=0\), \(\mu (B\times \mathbb R)\ne 0\).
Fix some \(s\in \mathbb R\setminus [a,b]\) and \(0<\varepsilon <\min \{\frac{1}{6}|a-b|,\frac{1}{2}|s-a|\}\) such that $$\mu (B\times (s-\varepsilon ,s+\varepsilon ))\ne 0.$$ Without loss of generality \(s<a\), otherwise change coordinates \((x,\xi )\leftrightarrow (x,-\xi )\). Using Lemma 13, choose \(B_1\subseteq B\) s.t. \(\mu (B_1\times [a,b])=0\), \(\mu (B_1\times \mathbb R)\ne 0\), and so that for all \(x\in B_1\) \(\mu _x\sim \mu _{\kappa (x)}\circ \kappa _\varPhi \) for every \(\mathfrak G[B]\)-holonomy \(\kappa \); \(\mu _x(\{x\}\times [a,b])=0\); \(\mu _x(\{x\}\times (s-\varepsilon ,s+\varepsilon ))>0\). Next we choose some \(t<s\) and some \(A\subset B_1\) such that \(\mu (A\times \mathbb R)\ne 0\) and so that on top of the three bullets above, every \(x\in A\) also satisfies \(\mu _x(\{x\}\times (t-\varepsilon ,t+\varepsilon ))=0\). Here is how to do this. Let \(t:=s-(\frac{a+b}{2}-s)\equiv 2s-\frac{a+b}{2}\). We claim that $$\begin{aligned} \mu (B_1\times (t-\varepsilon ,t+\varepsilon ))=0. \qquad (11) \end{aligned}$$ Indeed, if this were not the case, then by ergodicity there would exist some \(\mathfrak G\)-holonomy \(\kappa \) and some \(B'_1\subset B_1\) such that \(\kappa _\varPhi (B'_1\times (t-\varepsilon ,t+\varepsilon ))\cap (B\times (s-\varepsilon ,s+\varepsilon ))\) has positive measure. In this case, there would also exist some \(B''_1\subset B_1\) with \(\mu (B''_1\times \mathbb R)\ne 0\) such that for all \(x\in B''_1\), $$ \kappa (x)\in B\ , \ |\varPhi (x,\kappa (x))-(s-t)|<2\varepsilon . $$ So \(\kappa _\varPhi \) maps \(B''_1\times (s-\varepsilon ,s+\varepsilon )\) into \(B\times (2s-t-3\varepsilon ,2s-t+3\varepsilon )\subset B\times [a,b]\). But this is impossible, since \(\kappa _\varPhi \) is measure preserving, \(\mu (B\times [a,b])=0\), and $$ \mu (B''_1\times (s-\varepsilon ,s+\varepsilon ))=\int _{B_1''}\mu _x(\{x\}\times (s-\varepsilon ,s+\varepsilon )) d\nu (x)>0. $$ (\(\nu (B_1'')\ne 0\) because \(\mu (B_1''\times \mathbb R)\ne 0\)).
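The fiber decomposition \(\int f\,d\mu =\int _X\bigl (\int f(x,t)\,d\mu _x\bigr )d\nu (x)\), used in the last display, can be checked concretely on a toy atomic measure. The code below is only our own illustration of the identity (the measure, the function names, and the normalization are all ours), not part of the proof:

```python
from collections import defaultdict

# A finite atomic measure mu on X x R, given as {(x, t): weight}.
mu = {("x1", 0.0): 0.2, ("x1", 1.0): 0.3, ("x2", 0.5): 0.5}

def disintegrate(mu):
    """Split mu into a base measure nu on X and probability fibers mu_x."""
    nu = defaultdict(float)       # marginal on X
    fibers = defaultdict(dict)    # conditional measures mu_x on R
    for (x, t), w in mu.items():
        nu[x] += w
    for (x, t), w in mu.items():
        fibers[x][t] = w / nu[x]  # normalize each fiber to a probability
    return dict(nu), dict(fibers)

nu, fibers = disintegrate(mu)

def integrate(f):
    # Fubini-style integral via the disintegration
    return sum(nu[x] * sum(p * f(x, t) for t, p in fibers[x].items())
               for x in nu)

direct = sum(w * (t + 1) for (x, t), w in mu.items())
via_fibers = integrate(lambda x, t: t + 1)
assert abs(direct - via_fibers) < 1e-12
```

For atomic measures the decomposition is exact arithmetic; the content of the lemma quoted above is that the same splitting exists, essentially uniquely, for any locally finite measure on a standard space.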
Now that we know (11), the existence of A follows from the fiber decomposition of \(\mu \). Define \(a':=t-\varepsilon \), \(b':=t+\varepsilon \), and choose \([\alpha ,\beta ]\subset (s-\varepsilon ,s+\varepsilon )\) such that \(\mu (A\times [\alpha ,\beta ])>0\) and \( |\alpha -\beta |<\frac{1}{3}\varepsilon \). Necessarily $$ |\alpha -\beta |<\frac{1}{2}\min \{|a'-b'|, |a-b|, |a-\beta |, |b'-\alpha |\}. $$ Indeed, \(|a'-b'|=2\varepsilon \), \(|a-b|>6\varepsilon \), \(|a-\beta |>a-s-\varepsilon \ge \varepsilon \), and \(|b'-\alpha |>(s-\varepsilon )-(t+\varepsilon )=(s-t)-2\varepsilon =(\frac{a+b}{2}-s)-2\varepsilon >(\frac{a+b}{2}-a)-2\varepsilon =\frac{|a-b|}{2}-2\varepsilon \ge \varepsilon \). We show that \( W:=A\times [\alpha ,\beta ] \) satisfies the requirements of step 1. Define $$\begin{aligned} U(x)&:=\inf \{\tau \in [b',b]: \mu _x(\{x\}\times (\tau ,b])=0\}. \end{aligned}$$ Recall that \(\mathfrak G[A]:=\mathfrak G\cap A^2\), and fix some \(\mathfrak G[A]\)-holonomy \(\kappa \). 
If \(x\in A\cap \mathrm {dom}(\kappa )\), \(x':=\kappa (x)\in A\) and \(|\varPhi (x,x')|<|\alpha -\beta |\), then $$\begin{aligned}&U(x')=\inf \{\tau \in [b',b]: \mu _{\kappa (x)}(\{\kappa (x)\}\times (\tau ,b])=0\}\\&=\inf \{\tau \in [\alpha ,a]: \mu _{\kappa (x)}(\{\kappa (x)\}\times (\tau ,\tfrac{a+b}{2}])=0\} \ \ \because \left( \begin{array}{l} \mu _{x'}(\{x'\}\times [\tfrac{a+b}{2},b])=0\\ \mu _{x'}(\{x'\}\times [\alpha ,\beta ])>0\\ \end{array}\right) \\&=\inf \{\tau \in [\alpha ,a]: (\mu _{\kappa (x)}\circ \kappa _\varPhi )(\{x\}\times (\tau -\varPhi (x,x'),\tfrac{a+b}{2}-\varPhi (x,x')])=0\}\\&=\inf \{\tau \in [\alpha ,a]: (\mu _{\kappa (x)}\circ \kappa _\varPhi )(\{x\}\times (\tau -\varPhi (x,x'),a])=0\}\ \ \because \tfrac{a+b}{2}-|\alpha -\beta |>a\\&=\inf \{\tau \in [\alpha ,a]: \mu _{x}(\{x\}\times (\tau -\varPhi (x,x'),a])=0\}\\&=\inf \{\tau '\in [\alpha -\varPhi (x,x'),a-\varPhi (x,x')]: \mu _{x}(\{x\}\times (\tau ',a])=0\}+\varPhi (x,x')\\&\ge \inf \{\tau '\in [b',b]: \mu _{x}(\{x\}\times (\tau ',a])=0\}+\varPhi (x,x')=U(x)+\varPhi (x,x'). \end{aligned}$$ So \(\varPhi (x,x')\le U(x')-U(x)\). Exchanging the places of \(x,x'\) and noting that \(\varPhi (x',x)=-\varPhi (x,x')\), we find that \(\varPhi (x,x')=U(x')-U(x)\). It follows that the function \(F(x,\xi ):=\xi -U(x)\) is invariant with respect to the induced equivalence relation \(\mathfrak G_\varPhi [W]\). By Lemma 9, this equivalence relation is ergodic. So \( \xi -U(x)=const\,\mu \text {--a.e. in }W. \) The step follows with \(u_0(x):=U(x)+\mathrm {const.\,}\) Step 2: Either \(A\times \mathbb R=\{(x,u_0(x)):x\in A\}\mod \mu \) and \(H_\mu =\{0\}\), or there exists \(c>0\) s.t. \(A\times \mathbb R=\{(x,u_0(x)+cn):x\in A, n\in \mathbb Z\}\mod \mu \) and \(H_\mu =c\mathbb Z\). In both cases, \( \varPhi (x,y)+u_0(x)-u_0(y)\in H_\mu \, \mu \text {--a.e. in }\mathfrak G_\varPhi [A\times \mathbb R]. \) Sketch of proof (see [49] for details). 
Let $$u_1(x):=\sup \{t\ge u_0(x):\mu _x(\{x\}\times (u_0(x),t))=0\}.$$ Notice that \(\mu _x[\{x\}\times (u_0(x),u_1(x))]=0\), and if \(u_1(x)<\infty \) then $$\mu _x\biggl [\{x\}\times [u_1(x),u_1(x)+\varepsilon )\biggr ]>0\text { for all }\varepsilon >0.$$ Suppose \(((x,\xi ),(x',\eta ))\in \mathfrak G_\varPhi [W]\) and let \(\kappa \) be a \(\mathfrak G\)-holonomy such that \((x',\eta )=\kappa _\varPhi (x,\xi )\). Since \(\mu _{\kappa (x)}\circ \kappa _\varPhi =\mu _x\), \(u_1(x')<\infty \) iff \(u_1(x)<\infty \), and in this case, the identity \(u_0(x')=u_0(x)+\varPhi (x,x')\) implies that $$\begin{aligned}&\mu _{x'}\biggl [\{x'\}\times (u_0(x'),u_1(x)+\varPhi (x,x'))\biggr ]=0\\&\mu _{x'}\biggl [\{x'\}\times [u_1(x)+\varPhi (x,x'),u_1(x)+\varPhi (x,x')+\varepsilon )\biggr ]>0\text { for all }\varepsilon >0. \end{aligned}$$ It follows that \(u_1(x')=u_1(x)+\varPhi (x,x').\) Since also \(u_0(x')=u_0(x)+\varPhi (x,x')\), we get \(u_1(x')-u_0(x')=u_1(x)-u_0(x)\), proving that \(u_1-u_0\) is \(\mathfrak G_\varPhi [W]\)-invariant. By ergodicity, either \(u_1<\infty \) \(\mu \)–a.e. in W and then \(u_1=u_0+\mathrm {const.\,}\), or \(u_1=\infty \) \(\mu \)–a.e. in W. Because \(\mu |_W\sim \int _A \delta _{(x,u_0(x))}d\nu (x)\), instead of saying "\(\mu \)–a.e. in W" we can say "\(\nu \)–a.e. in A." In summary: \( u_1=u_0+c\ \nu \text {--a.e. in }A\text {, where }c\in [0,\infty ]. \) We claim that \(c>0\). By step 1, for every \(x\in A\), \(\mu _x\) has a single atom in \(\{x\}\times [\alpha ,\beta ]\) (at \((x,u_0(x))\)). So \(u_0(x)\le \beta \le u_1(x)\), and the only way for c to be equal to zero is to have \(u_0(x)=u_1(x)=\beta \). If this is the case, then $$ \mu (A\times \{\beta \})>0\text { and }\mu (A\times (\beta ,\beta +\delta ))>0\text { for all }\delta >0.
$$ But then by ergodicity, since \(\mu (A\times (\beta ,\beta +|\alpha -\beta |))>0\), we can find \(A'\subset A\) and a \(\mathfrak G\)-holonomy \(\kappa \) such that \(\kappa (A')\subset A\) and \(-|\alpha -\beta |<\varPhi (x,\kappa (x))<0\) on \(A'\). But this is absurd because in this case, \(\kappa _\varPhi \) maps \(A'\times \{\beta \}\) into \(A\times [\alpha ,\beta )\) which has zero measure by the assumption that \(u_0=\beta \) on W. We now separate cases. Case 1: \(u_1<\infty \) \(\nu \)–a.e. in A. In this case, a similar argument to the one we just used shows that for \(\nu \)–a.e. \(x\in A\), \(\mu _x(\{x\}\times (u_1(x),u_1(x)+\delta ))=0\) for all \(\delta \) small enough. So \((x,u_1(x))=(x,u_0(x)+c)\) is an atom of \(\mu _x\). Since \(\mu _x(x,u_0(x))>0\) and \(\mu _x(x,u_0(x)+c)>0\) for \(\nu \)–a.e. \(x\in A\), \(\mu \) and \(\mu \circ g^c\) are not mutually singular. So \(c\in H_\mu \). Similarly, since \(\mu _x(\{x\}\times (u_0(x),u_0(x)+c))=\mu _x(\{x\}\times (u_0(x),u_1(x)))=0\) for \(\nu \)–a.e. \(x\in A\), \(\mu \not \sim \mu \circ g^\tau \) for \(0<\tau <c\). So \(H_\mu =c\mathbb Z\). Since \(\mu _x(\{x\}\times (u_0(x),u_0(x)+c))=0\), \(\mu _x(x,u_0(x))>0\), and \(\mu _x(x,u_0(x)+c)>0\) for \(\nu \)–a.e. \(x\in A\), \(\mu _x\sim \sum _{k\in \mathbb Z}\delta _{(x,u_0(x)+kc)}\) for a.e. \(x\in A\). It follows that $$ A\times \mathbb R=\{(x,u_0(x)+kc):x\in A,k\in \mathbb Z\}\mod \mu . $$ The \(\mathfrak G_\varPhi \)-invariance of \(\mu \) now implies that for a.e. \((x,\xi )\in A\times \mathbb R\), for all (countably many) \((y,\eta )\in A\times \mathbb R\) s.t. \(((x,\xi ),(y,\eta ))\in \mathfrak G_\varPhi [A\times \mathbb R]\), $$ \xi \in u_0(x)+c\mathbb Z\ , \ \eta \in u_0(y)+c\mathbb Z\ , \ \eta =\xi +\varPhi (x,y), $$ whence \(\varPhi (x,y)+u_0(x)-u_0(y)\in c\mathbb Z\). In other words, \(\varPhi (x,y)+u_0(x)-u_0(y)\in H_\mu \) a.e. in \(\mathfrak G_\varPhi [A\times \mathbb R]\). Case 2: \(u_1=\infty \) \(\nu \)–a.e.
in \(A\). In this case, \(\mu _x(\{x\}\times (u_0(x),\infty ))=0\) but \(\mu _x(x,u_0(x))>0\) \(\nu \)-a.e. in A. We claim that \(\mu _x(\{x\}\times (-\infty ,u_0(x)))=0\) \(\nu \)–a.e. in A. The argument is similar to the one we used before, so we only sketch it: Had there been some mass below the graph of \(u_0\) on A, then by ergodicity there would be some \(\mathfrak G_\varPhi \)-holonomy which maps a positive measure part of \(A\times \mathbb R\) into \(A\times \mathbb R\) in such a way that \(\varPhi \) takes strictly positive values. This holonomy would shift some positive measure piece of the graph of \(u_0\) strictly up in a measure preserving way. But this is impossible because there is no mass above the graph of \(u_0\). Thus \(A\times \mathbb R=\{(x,u_0(x)):x\in A\}\mod \mu \). It automatically follows that \(H_\mu =\{0\}\). Again, this implies that \(\varPhi (x,y)+u_0(x)-u_0(y)=0\) a.e. in \(\mathfrak G_\varPhi [A\times \mathbb R]\). \(\square \) Step 3: There exists \(u:X\rightarrow \mathbb R\) measurable s.t. \(X\times \mathbb R=\{(x,\xi ):\xi \in u(x)+H_\mu \}\mod \mu \) and \(\varPhi (x,y)+u(x)-u(y)\in H_\mu \) \(\mu \)-almost everywhere in \(\mathfrak G\). Proof. Define \(F_0:A\rightarrow \mathbb R/H_\mu \) by \(F_0(x):=u_0(x)+H_\mu \). We can extend \(F_0\) to \( \mathrm {Sat}(A)=\{y\in X:\exists x\in A\text { s.t. }(x,y)\in \mathfrak G\} \) by setting $$ F(y):=F_0(x)+\varPhi (x,y)\text { for some (any) }x\in A\text { s.t. }(x,y)\in \mathfrak G. $$ The definition is proper, because if \(x_1,x_2\in A\) both satisfy \((x_i,y)\in \mathfrak G\), then $$\begin{aligned}&[F_0(x_1)+\varPhi (x_1,y)]-[F_0(x_2)+\varPhi (x_2,y)]\\&= u_0(x_1)-u_0(x_2)+\varPhi (x_1,y)+\varPhi (y,x_2)+H_\mu \\&=u_0(x_1)-u_0(x_2)+\varPhi (x_1,x_2)+H_\mu =H_\mu ,\text { by step 2}. \end{aligned}$$ By construction, \(F=F_0\) on A and for every \(x\in \mathrm {Sat}(A)\), for every y s.t. \((x,y)\in \mathfrak G\), $$ \varPhi (x,y)+F(x)-F(y)=H_\mu .
$$ Let \(C:\mathbb R/H_\mu \rightarrow \mathbb R\) be a measurable (even piecewise continuous) function such that \(C(\tau +H_\mu )\in \tau +H_\mu \), and let $$ u(x):=C(F(x)). $$ Then \(\varPhi (x,y)+u(x)-u(y)\in H_\mu \) a.e. in \(\mathfrak G_\varPhi \). It immediately follows that \(G(x,\xi ):=\xi -u(x)+H_\mu \) is \(\mathfrak G_\varPhi \)-invariant, whence a.e. constant. The constant is zero because \(G=H_\mu \) on the positive measure set \(A\times [\alpha ,\beta ]\). So \(\xi -u(x)\in H_\mu \) \(\mu \)-a.e., whence \(X\times \mathbb R=\{(x,\xi ):\xi \in u(x)+H_\mu \}\mod \mu \). \(\square \)

7.5 Notes and References

The cocycle reduction theorem is taken from [49], as is the proof sketched above. Extensions to cocycles taking values in non-abelian groups are given in [45].

[1] J. Aaronson: An Introduction to Infinite Ergodic Theory. Mathematical Surveys and Monographs, 50. American Mathematical Society, Providence, RI, 1997. xii+284 pp. ISBN: 0-8218-0494-4.
[2] J. Aaronson, M. Denker, and A. M. Fisher: Second order ergodic theorems for ergodic transformations of infinite measure spaces. Proc. Amer. Math. Soc. 114 (1992), no. 1, 115–127.
[3] J. Aaronson, H. Nakada, O. Sarig, and R. Solomyak: Invariant measures and asymptotics for some skew products. Israel J. Math. 128 (2002), 93–134.
[4] J. Aaronson, O. Sarig, and R. Solomyak: Tail-invariant measures for some suspension semiflows. Discrete Contin. Dyn. Syst. 8 (2002), no. 3, 725–735.
[5] J. Aaronson and B. Weiss: On the asymptotics of a 1-parameter family of infinite measure preserving transformations. Bol. Soc. Brasil. Mat. (N.S.) 29 (1998), no. 1, 181–193.
[6] M. Babillot: On the classification of invariant measures for horospherical foliations on nilpotent covers of negatively curved manifolds. In: Random Walks and Geometry (V. A. Kaimanovich, ed.), de Gruyter, Berlin (2004), 319–335.
[7] M. Babillot and F. Ledrappier: Lalley's theorem on periodic orbits of hyperbolic flows. Ergodic Theory Dynam. Systems 18 (1998), no. 1, 17–39.
[8] M. Babillot and F. Ledrappier: Geodesic paths and horocycle flows on Abelian covers. In: Lie Groups and Ergodic Theory (Mumbai, 1996), 1–32, Tata Inst. Fund. Res. Stud. Math. 14, Tata Inst. Fund. Res., Bombay (1998).
[9] M. Bachir Bekka: Ergodic Theory and Topological Dynamics of Group Actions on Homogeneous Spaces. London Math. Soc. Lecture Note Series 269, Cambridge University Press, 2013.
[10] Philippe Bougerol and Laure Élie: Existence of positive harmonic functions on groups and on covering manifolds. Ann. Inst. H. Poincaré Probab. Statist. 31 (1995), no. 1, 59–80.
[11] M. Burger: Horocycle flow on geometrically finite surfaces. Duke Math. J. 61 (1990), no. 3, 779–803.
[12] Gustave Choquet and Jacques Deny: Sur l'équation de convolution \(\mu =\mu \ast \sigma \) (French). C. R. Acad. Sci. Paris 250 (1960), 799–801.
[13] J.-P. Conze and Y. Guivarc'h: Propriété de droite fixe et fonctions harmoniques positives (French). In: Théorie du Potentiel et Analyse Harmonique (Exposés des Journées de la Soc. Math. France, Inst. Recherche Math. Avancée, Strasbourg, 1973), 126–132. Lecture Notes in Math., Vol. 404, Springer, Berlin, 1974.
[14] Y. Coudene: Cocycles and stable foliations of Axiom A flows. Ergodic Theory Dynam. Systems 21 (2001), 767–774.
[15] F. Dal'bo: Remarques sur le spectre des longueurs d'une surface et comptages. Bol. Soc. Brasil. Mat. (N.S.) 30 (1999), no. 2.
[16] S. G. Dani: Invariant measures of horospherical flows on noncompact homogeneous spaces. Invent. Math. 47 (1978), no. 2, 101–138.
[17] S. G. Dani and J. Smillie: Uniform distribution of horocycle orbits for Fuchsian groups. Duke Math. J. 51 (1984), 185–194.
[18] J. Feldman and C. C. Moore: Ergodic equivalence relations, cohomology, and von Neumann algebras. I. Trans. Amer. Math. Soc. 234 (1977), no. 2, 289–324.
[19] A. Fisher: Convex-invariant means and a pathwise central limit theorem. Adv. in Math. 63 (1987), no. 3, 213–246.
[20] A. M. Fisher: Integer Cantor sets and an order-two ergodic theorem. Ergodic Theory Dynam. Systems 13 (1993), no. 1, 45–64.
[21] H. Furstenberg: The unique ergodicity of the horocycle flow. Springer Lecture Notes 318 (1972), 95–115.
[22] H. Furstenberg: Recurrence in Ergodic Theory and Combinatorial Number Theory. M. B. Porter Lectures. Princeton University Press, Princeton, N.J., 1981. xi+203 pp.
[23] Y. Guivarc'h: Sur la représentation intégrale des fonctions harmoniques et des fonctions propres positives dans un espace riemannien symétrique. Bull. Sci. Math. (2) 108 (1984), no. 4, 373–392.
[24] Y. Guivarc'h and A. Raugi: Products of random matrices: convergence theorems. In: Random Matrices and Their Applications (Brunswick, Maine, 1984), 31–54, Contemp. Math. 50, Amer. Math. Soc., Providence, RI (1986).
[25] E. Hopf: Ergodentheorie. Ergeb. Math., vol. 5, Springer, Berlin, 1937.
[26] E. Hopf: Ergodic theory and the geodesic flow on surfaces of constant negative curvature. Bull. Amer. Math. Soc. 77 (1971), 863–877.
[27] J. H. Hubbard: Teichmüller Theory and Applications to Geometry, Topology, and Dynamics. Volume 1: Teichmüller Theory. xx+459 pp. Matrix Editions (2006).
[28] V. A. Kaimanovich: Ergodic properties of the horocycle flow and classification of Fuchsian groups. J. Dynam. Control Systems 6 (2000), no. 1, 21–56.
[29] F. I. Karpelevich: The geometry of geodesics and the eigenfunctions of the Laplacian on symmetric spaces. Trans. Moscow Math. Soc. 14 (1965), 48–185.
[30] S. Katok: Fuchsian Groups. x+175 pp. The University of Chicago Press (1992).
[31] A. Katsuda and T. Sunada: Closed orbits in homology classes. Inst. Hautes Études Sci. Publ. Math. No. 71 (1990), 5–32.
[32] S. Lalley: Closed geodesics in homology classes on surfaces of variable negative curvature. Duke Math. J. 55 (1989), 795–821.
[33] F. Ledrappier: Invariant measures for the stable foliation on negatively curved periodic manifolds. Ann. Inst. Fourier 58 (2008), no. 1, 85–105.
[34] F. Ledrappier and O. Sarig: Unique ergodicity for non-uniquely ergodic horocycle flows. Discrete Contin. Dyn. Syst. 16 (2006), no. 2, 411–433.
[35] F. Ledrappier and O. Sarig: Invariant measures for the horocycle flow on periodic hyperbolic surfaces. Israel J. Math. 160 (2007), 281–317.
[36] F. Ledrappier and O. Sarig: Fluctuations of ergodic sums for horocycle flows on \({\mathbb{Z}}^d\)-covers of finite volume surfaces. Discrete Contin. Dyn. Syst. 22 (2008), no. 1–2, 247–325.
[37] V. Lin and Y. Pinchover: Manifolds with group actions and elliptic operators. Mem. Amer. Math. Soc. 112 (1994), no. 540, vi+78 pp.
[38] Terry Lyons and Dennis Sullivan: Function theory, random paths and covering spaces. J. Differential Geom. 19 (1984), no. 2, 299–323.
[39] G. Margulis: Positive harmonic functions on nilpotent groups. Dokl. Akad. Nauk SSSR 166 (1966), 1054–1057 (Russian); translated as Soviet Math. Dokl. 7 (1966), 241–244.
[40] S. J. Patterson: The limit set of a Fuchsian group. Acta Math. 136 (1976), 241–273.
[41] M. Pollicott: \({\mathbb{Z}}^d\)-covers of horospheric foliations. Discrete Contin. Dyn. Syst. 6 (2000), 599–604.
[42] M. Ratner: A central limit theorem for У-flows on three-dimensional manifolds (Russian). Dokl. Akad. Nauk SSSR 186 (1969), 519–521.
[43] M. Ratner: On Raghunathan's measure conjecture. Ann. of Math. (2) 134 (1991), no. 3, 545–607.
[44] M. Ratner: Raghunathan's topological conjecture and distributions of unipotent flows. Duke Math. J. 63 (1991), no. 1, 235–280.
[45] A. Raugi: Mesures invariantes ergodiques pour des produits gauches. Bull. Soc. Math. France 135 (2007), no. 2, 247–258.
[46] M. Rees: Divergence type of some subgroups of finitely generated Fuchsian groups. Ergodic Theory Dynam. Systems 1 (1981), no. 2, 209–221.
[47] T. Roblin: Sur l'ergodicité rationnelle et les propriétés ergodiques du flot géodésique dans les variétés hyperboliques. Ergodic Theory Dynam. Systems 20 (2000), no. 6, 1785–1819.
[48] T. Roblin: Ergodicité et équidistribution en courbure négative. Mém. Soc. Math. Fr. (N.S.) 95 (2003), vi+96 pp.
[49] O. Sarig: Invariant measures for the horocycle flow on Abelian covers. Invent. Math. 157 (2004), 519–551.
[50] O. Sarig: The horocyclic flow and the Laplacian on hyperbolic surfaces of infinite genus. Geom. Funct. Anal. 19 (2010), no. 6, 1757–1812.
[51] O. Sarig and B. Schapira: The generic points for the horocycle flow on a class of hyperbolic surfaces with infinite genus. Int. Math. Res. Not. IMRN 2008, Art. ID rnn 086, 37 pp.
[52] B. Schapira: Equidistribution of the horocycles of a geometrically finite surface. Int. Math. Res. Not. 40 (2005), 2447–2471.
[53] K. Schmidt: Cocycles on Ergodic Transformation Groups. Macmillan Lectures in Mathematics, Vol. 1. Macmillan Company of India, Ltd., Delhi, 1977. 202 pp. (Available from the author's homepage.)
[54] C. Series: Geometrical methods of symbolic coding. In: Ergodic Theory, Symbolic Dynamics, and Hyperbolic Spaces (T. Bedford, M. Keane, C. Series, eds.), Oxford Univ. Press (1991).
[55] R. Solomyak: A short proof of the ergodicity of the Babillot–Ledrappier measures. Proc. Amer. Math. Soc. 129 (2001), 3589–3591.
[56] A. N. Starkov: Fuchsian groups from the dynamical viewpoint. J. Dynam. Control Systems 1 (1995), 427–445.
[57] J. Stillwell: Geometry of Surfaces. Universitext. Springer-Verlag, New York, 1992. xii+216 pp. ISBN: 0-387-97743-0.
[58] D. Sullivan: The density at infinity of a discrete group of hyperbolic motions. Inst. Hautes Études Sci. Publ. Math. No. 50 (1979), 171–202.
[59] D. Sullivan: Related aspects of positivity in Riemannian geometry. J. Differential Geom. 25 (1987), 327–351.
[60] R. Zweimüller: Hopf's ratio ergodic theorem by inducing. Colloq. Math. 101 (2004), no. 2, 289–292.

Weizmann Institute of Science, Rehovot, Israel

Sarig, O. (2019). Horocycle Flows on Surfaces with Infinite Genus. In: S. Dani, A. Ghosh (eds.), Geometric and Ergodic Aspects of Group Actions. Infosys Science Foundation Series. Springer, Singapore.
February 2021, 14(2): 575-596. doi: 10.3934/dcdss.2020363

Instability of free interfaces in premixed flame propagation

Claude-Michel Brauner 1,2 and Luca Lorenzi 3

1. School of Mathematical Sciences, Tongji University, 1239 Siping Rd., Shanghai 200092, China
2. Institut de Mathématiques de Bordeaux, Université de Bordeaux, 33405 Talence Cedex, France
3. Plesso di Matematica, Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Università di Parma, Parco Area delle Scienze 53/A, I-43124 Parma, Italy

* Corresponding author: [email protected]

Dedicated to Michel Pierre on his 70th birthday, in friendship.

Received December 2019. Revised February 2020. Published May 2020.

In this survey, we are interested in the instability of flame fronts regarded as free interfaces. We successively consider a classical Arrhenius kinetics (thin flame) and a stepwise ignition-temperature kinetics (thick flame) with two free interfaces. A general method initially developed for thin flame problems subject to interface jump conditions is proving to be an effective strategy for smoother thick flame systems. It relies on the elimination of the free interface(s) and reduction to a fully nonlinear parabolic problem. The theory of analytic semigroups is a key tool to study the linearized operators.

Keywords: Free boundary problems with one or two free interfaces, traveling wave solutions, instability, fully nonlinear parabolic problems, analytic semigroups, combustion.

Mathematics Subject Classification: Primary: 35R35; Secondary: 35B35, 35K40, 80A25.

Citation: Claude-Michel Brauner, Luca Lorenzi. Instability of free interfaces in premixed flame propagation. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 575-596. doi: 10.3934/dcdss.2020363
Boundary asymptotics of the ergodic functions associated with fully nonlinear operators through a Liouville type theorem. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020395 Alex H. Ardila, Mykael Cardoso. Blow-up solutions and strong instability of ground states for the inhomogeneous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2021, 20 (1) : 101-119. doi: 10.3934/cpaa.2020259 Ching-Hui Wang, Sheng-Chen Fu. Traveling wave solutions to diffusive Holling-Tanner predator-prey models. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2021007 Aurelia Dymek. Proximality of multidimensional $ \mathscr{B} $-free systems. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021013 Hongbo Guan, Yong Yang, Huiqing Zhu. A nonuniform anisotropic FEM for elliptic boundary layer optimal control problems. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1711-1722. doi: 10.3934/dcdsb.2020179 Zhiyan Ding, Qin Li, Jianfeng Lu. Ensemble Kalman Inversion for nonlinear problems: Weights, consistency, and variance bounds. Foundations of Data Science, 2020 doi: 10.3934/fods.2020018 Serge Dumont, Olivier Goubet, Youcef Mammeri. Decay of solutions to one dimensional nonlinear Schrödinger equations with white noise dispersion. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020456 Tong Yang, Seiji Ukai, Huijiang Zhao. Stationary solutions to the exterior problems for the Boltzmann equation, I. Existence. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 495-520. doi: 10.3934/dcds.2009.23.495 Jong-Shenq Guo, Ken-Ichi Nakamura, Toshiko Ogiwara, Chang-Hong Wu. The sign of traveling wave speed in bistable dynamics. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3451-3466. doi: 10.3934/dcds.2020047 Claude-Michel Brauner Luca Lorenzi
CommonCrawl
Mats Vermeeren sketches a simple proof of Noether's first theorem

Image: Konrad Jacobs, Erlangen, CC BY-SA 2.0 DE

Newton

One of the most famous formulas in physics is Newton's second law, \[ \boldsymbol{F} = m \boldsymbol{a}. \] It is named after Isaac Newton (1643–1727), who in all likelihood was never actually hit on the head by any apples, and states that 'force equals mass times acceleration'. It shines in its simplicity, but contemporary mathematical physicists would much rather write it as \[ \frac{\mathrm{d} z}{\mathrm{d}t} = \{H,z\} . \] In this formula, $z$ can be any quantity related to the system and $H$ is the Hamilton function, which represents the total energy of the system. But what is this strange bracket? And why would any sane person write a simple idea like Newton's second law in such an obscure way?

Isaac 'kilogram metre per square second' Newton

There are a few possible answers to this last question. I could start singing praise for some abstract geometric beauty, but it also provides an easy explanation of Emmy Noether's famous theorem on the relation between symmetries and conserved quantities. A conserved quantity, as the name suggests, is something that does not change. The most common example is probably the conservation of energy in physics, but there can be many other conserved quantities. Noether's (first) theorem states that a system has a conserved quantity if and only if it possesses a related symmetry. Translation symmetry corresponds to conservation of (linear) momentum, for example, and rotational symmetry to angular momentum.

Hamilton

William Rowan Hamilton (the musical)

Let's consider a physical system consisting of one point particle of mass $m$ moving through space. The state of the system is given by its position $\boldsymbol{x} = (x_1,x_2,x_3)$ and momentum $\boldsymbol{p} = (p_1,p_2,p_3)$.
In mechanics, momentum is simply the product of mass and velocity, $\boldsymbol{p} = m \boldsymbol{v}$, so you could also say the state is given by position and velocity. But it turns out that using momentum leads to more elegant mathematics. If we know the state of the system at some point in time, then the states at all future (and past) times are determined by Newton's second law. Suppose we can write the potential energy of the system as a function $V(\boldsymbol{x})$ of the position. Then the force on the particle is given by minus the gradient of this function, $F(\boldsymbol{x}) = -\boldsymbol{\nabla} V(\boldsymbol{x})$, so Newton's second law can be written as \[ \frac{\mathrm{d}^2 \boldsymbol{x}}{\mathrm{d}t^2} = -\frac{1}{m} \boldsymbol{\nabla} V(\boldsymbol{x}), \] or, written in components, \[ \frac{\mathrm{d}^2 x_i}{\mathrm{d}t^2} = -\frac{1}{m} \frac{\partial V(\boldsymbol{x})}{\partial x_i}. \] Since the kinetic energy of the particle is $|\boldsymbol{p}|^2/(2m)$, the total energy will be given by \[ H(\boldsymbol{x},\boldsymbol{p}) = \frac{1}{2m} |\boldsymbol{p}|^2 + V(\boldsymbol{x}) . \] This is called the Hamilton function, after William Rowan Hamilton (1805–1865), who is also famous as the inventor/discoverer of the quaternions. One reason why this function deserves its special name is that it gives the equations of motion in a very satisfying way. The time derivatives of position and momentum are, up to sign, equal to the partial derivatives of $H$: \[ \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} = \frac{\partial H}{\partial \boldsymbol{p}} \qquad\text{and}\qquad \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d}t} = -\frac{\partial H}{\partial \boldsymbol{x}}. \] We call these the 'equations of motion'. By combining the two equations of motion, you can find an expression for the acceleration $\mathrm{d}^2 \boldsymbol{x}/\mathrm{d}t^2$.
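These equations can also be tried out numerically. Below is a minimal Python sketch (assuming a one-dimensional harmonic potential $V(x)=\tfrac12 kx^2$, with ad hoc values for $m$, $k$ and the difference step): the partial derivatives of $H$ are approximated by central differences, and the right-hand sides come out as $p/m$ and the force $-kx$, exactly as Newton's law demands.

```python
# A sketch of Hamilton's equations for the (assumed) 1-D harmonic
# potential V(x) = 0.5*k*x**2, with H(x, p) = p**2/(2m) + V(x).
def hamilton_rhs(H, x, p, h=1e-6):
    """(dx/dt, dp/dt) = (dH/dp, -dH/dx), via central differences."""
    dH_dp = (H(x, p + h) - H(x, p - h)) / (2 * h)
    dH_dx = (H(x + h, p) - H(x - h, p)) / (2 * h)
    return dH_dp, -dH_dx

m, k = 2.0, 3.0
H = lambda x, p: p * p / (2 * m) + 0.5 * k * x * x

x0, p0 = 1.5, -0.7
dxdt, dpdt = hamilton_rhs(H, x0, p0)
# dx/dt agrees with p/m, and dp/dt with the force -k*x:
print(abs(dxdt - p0 / m) < 1e-6, abs(dpdt - (-k * x0)) < 1e-6)  # → True True
```

(For a quadratic function the central difference is exact up to rounding, so the agreement here is essentially perfect.)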
You can check that this again gives Newton's second law, so these equations really do describe the motion of the system. For our choice of Hamilton function, the first equation of motion will simply be $\mathrm{d} \boldsymbol{x}/\mathrm{d}t = \boldsymbol{p}/m$, but we prefer to write it in the form above. It is not just pleasing to have both equations of motion look so similar, but it also helps us find the time derivative of $H$ itself. By the chain rule we have \[ \frac{\mathrm{d} H}{\mathrm{d}t} = \frac{\partial H}{\partial \boldsymbol{x}} \cdot \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} + \frac{\partial H}{\partial \boldsymbol{p}} \cdot \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d}t}, \] where the dot denotes the scalar product between vectors. If you prefer to write out the components, you can expand the first term as \[ \frac{\partial H}{\partial \boldsymbol{x}} \cdot \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} = \frac{\partial H}{\partial x_1} \frac{\mathrm{d} x_1}{\mathrm{d}t} + \frac{\partial H}{\partial x_2} \frac{\mathrm{d} x_2}{\mathrm{d}t} + \frac{\partial H}{\partial x_3} \frac{\mathrm{d} x_3}{\mathrm{d}t} \] and the second term in a similar way. Using the equations of motion we can see the two terms in the expression for $\mathrm{d} H/\mathrm{d}t$ cancel: \[ \frac{\mathrm{d} H}{\mathrm{d}t} = \frac{\partial H}{\partial \boldsymbol{x}} \cdot \frac{\partial H}{\partial \boldsymbol{p}} - \frac{\partial H}{\partial \boldsymbol{p}}\cdot \frac{\partial H}{\partial \boldsymbol{x}} = 0, \] so we can call $H$ a conserved quantity, because it doesn't change in time. Hence any system that fits into this Hamiltonian framework automatically satisfies conservation of energy.

Kepler

When thinking about physics, it is always useful to have a particular system in mind. Consider for example a planet orbiting the sun (where we approximate both by point masses and assume that the sun is fixed at the origin).
This is known as the Kepler system, after Johannes Kepler (1571–1630), who figured out the laws of planetary motion long before Newton came up with a theory of gravity. The Kepler system is governed by the Hamilton function, \[ H(\boldsymbol{x},\boldsymbol{p}) = \frac{1}{2m} |\boldsymbol{p}|^2 - \frac{1}{|\boldsymbol{x}|}. \] Its equations of motion are \[ \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} = \boldsymbol{p} \qquad\text{and}\qquad \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d}t} = \frac{-\widehat{\boldsymbol{x}}}{|\boldsymbol{x}|^2}, \] where $\widehat{\boldsymbol{x}}$ denotes the unit vector in the direction of $\boldsymbol{x}$. This is Newton's second law with the inverse square central force \[ F(\boldsymbol{x}) = -\frac{\widehat{\boldsymbol{x}}}{|\boldsymbol{x}|^2}. \] If the planet is moving relatively slowly, its motion will consist of repeated orbits, tracing out an ellipse. If it is speeding, then its orbit will either be a parabola or a hyperbola, so it will approach the sun only once before rushing off into interstellar space. Let's ignore the possibility of such a speedy galactic nomad and assume that we are dealing with an elliptic orbit. The Kepler system has some notable properties. One of these is that it is rotationally symmetric (see below). This can be seen from the formula for the Hamilton function: it only depends on the lengths of $\boldsymbol{x}$ and $\boldsymbol{p}$, not their directions, so it will not change if we rotate these vectors. The diagram below shows a planet (blue) orbiting the sun (yellow), tracing an ellipse. It is shown near its closest point to the sun, together with its velocity vector. Rotational symmetry means that if we instantly rotate both the planet and its velocity vector through some angle around the sun, then the new orbit will be the same ellipse as before, rotated by the same angle.
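This invariance is easy to test numerically. The sketch below (restricted to the orbital plane, with $m=1$ and arbitrarily chosen sample vectors) rotates $\boldsymbol{x}$ and $\boldsymbol{p}$ by the same angle and evaluates the Kepler Hamilton function before and after:

```python
# Numeric check of rotational symmetry (a sketch in the orbital plane,
# with m = 1): rotating x and p together leaves H unchanged.
from math import cos, sin, hypot

def H(x, p):
    return (p[0] ** 2 + p[1] ** 2) / 2 - 1.0 / hypot(x[0], x[1])

def rotate(v, theta):
    c, s = cos(theta), sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

x, p = (1.0, 0.3), (-0.2, 0.9)
for theta in (0.5, 1.7, 3.0):
    # rotation preserves lengths, and H depends only on lengths
    assert abs(H(rotate(x, theta), rotate(p, theta)) - H(x, p)) < 1e-12
print("H is rotation invariant")
```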
The conserved quantity corresponding to rotational symmetry is the angular momentum $\boldsymbol{L} = \boldsymbol{x} \times \boldsymbol{p}$ (see below). We can check this by verifying that its time derivative is zero. Using the product rule we find \begin{align*} \frac{\mathrm{d} \boldsymbol{L}}{\mathrm{d}t} &= \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} \times \boldsymbol{p} + \boldsymbol{x} \times \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d}t} \\ &= \boldsymbol{p} \times \boldsymbol{p} + \boldsymbol{x} \times \frac{-\widehat{\boldsymbol{x}}}{|\boldsymbol{x}|^2}, \end{align*} which is zero because the cross product of parallel vectors is always zero. Hence $\boldsymbol{L}$ is indeed a conserved quantity. Angular momentum ($\boldsymbol{L} = \boldsymbol{x} \times \boldsymbol{p}$) is a measure of how much an object is spinning (around a reference point). It increases both when the speed of rotation increases and when the distance to the reference point grows. Conservation of angular momentum implies that the planet moves slower when it is further from the sun. A terrestrial example of conservation of angular momentum can be observed when a figure skater performs a pirouette. They will spin more quickly when they move some of their mass closer to the rotation axis by bringing in their arms. Our observations on the Kepler system illustrate Noether's theorem, which states that symmetries and conserved quantities are in one-to-one correspondence. In particular, if a system is rotationally symmetric, then it conserves angular momentum. Other systems may have different symmetries. If a system has translation symmetry (eg billiards on an unbounded table) it conserves the total linear momentum. Even conservation of energy fits into this picture. It corresponds to the time translation symmetry: the laws of physics don't change over time. To prove Noether's theorem, we need one more item in our toolbox.
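Before reaching for that toolbox, the conservation of $\boldsymbol{L}$ can also be watched in a simulation. The sketch below integrates the planar Kepler equations with the symplectic Euler method (an assumption of convenience: $m=1$, and this particular scheme happens to conserve angular momentum exactly for central forces, so only floating-point rounding shows up):

```python
# Simulation sketch: symplectic Euler for the planar Kepler system (m = 1).
# For a central force this scheme conserves L = x1*p2 - x2*p1 exactly,
# so only rounding error accumulates.
def step(x, p, dt):
    r3 = (x[0] ** 2 + x[1] ** 2) ** 1.5
    p = (p[0] - dt * x[0] / r3, p[1] - dt * x[1] / r3)  # dp/dt = -x/|x|^3
    x = (x[0] + dt * p[0], x[1] + dt * p[1])            # dx/dt = p
    return x, p

x, p = (1.0, 0.0), (0.0, 1.1)          # a mildly elliptic orbit
L0 = x[0] * p[1] - x[1] * p[0]
for _ in range(20000):
    x, p = step(x, p, 1e-3)
L = x[0] * p[1] - x[1] * p[0]
print(abs(L - L0) < 1e-9)  # → True
```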
Poisson

Earlier we checked that $H$ is always conserved by calculating its time derivative. What if we want to know the time derivative of some other function $z$ of position and momentum? We can calculate \begin{align*} \frac{\mathrm{d} z}{\mathrm{d}t} &= \frac{\partial z}{\partial \boldsymbol{x}} \cdot \frac{\mathrm{d} \boldsymbol{x}}{\mathrm{d}t} + \frac{\partial z}{\partial \boldsymbol{p}} \cdot \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d}t} \\ &= \frac{\partial z}{\partial \boldsymbol{x}} \cdot \frac{\partial H}{\partial \boldsymbol{p}} - \frac{\partial z}{\partial \boldsymbol{p}} \cdot \frac{\partial H}{\partial \boldsymbol{x}}. \end{align*} The expression in the last line is usually written as: \[ \{H,z\} := \frac{\partial H}{\partial \boldsymbol{p}} \cdot \frac{\partial z}{\partial \boldsymbol{x}} - \frac{\partial H}{\partial \boldsymbol{x}} \cdot \frac{\partial z}{\partial \boldsymbol{p}} . \] This curly bracket is called the Poisson bracket, after Siméon Denis Poisson (1781–1840), whose name sounds a lot less posh once you remember that 'poisson' is French for 'fish'.

Siméon Denis Poisson

With this definition, we obtain our brackety formula, \[ \frac{\mathrm{d} z}{\mathrm{d}t} = \{H,z\} , \] as a reformulation of the Hamiltonian equations of motion. We can also go the other way and recover the Hamiltonian equations of motion from the brackety formula. Since $z$ can be any function of position and momentum, we can choose to set it equal to a component of either position or momentum and find \begin{align*} \frac{\mathrm{d} x_i}{\mathrm{d}t} &= \{H,x_i\} = \frac{\partial H}{\partial p_i} &&\text{and}& \frac{\mathrm{d} p_i}{\mathrm{d}t} &= \{H,p_i\} = -\frac{\partial H}{\partial x_i}. \end{align*} We have now established that $\mathrm{d} z/\mathrm{d}t = \{H,z\}$ is equivalent to the Hamiltonian equations of motion, but why do we care about Poisson brackets?
Well, they have some interesting properties:

(A) A function $z$ of position and momentum is a conserved quantity of the system with Hamilton function $H$ if and only if $\{H,z\} = 0$. This follows immediately from the brackety formula: the vanishing of the Poisson bracket is equivalent to $\mathrm{d} z/\mathrm{d}t = 0$.

(B) The Poisson bracket is skew-symmetric, meaning that if we swap around its entries, we get the same result but with a minus sign: \[\{z,u\} = -\{u,z\}\] for any two functions $u$ and $z$ of position and momentum.

There are additional properties of the Poisson bracket which make sure that the identification of a time derivative with a bracket makes sense, but we won't go into those technicalities here. Instead, let's spin our attention towards Noether.

Noether

Emmy Noether (1882–1935) was a mathematician of many talents. Much of her work was in abstract algebra, but she is most famous for her theorem stating that conserved quantities of a system are in one-to-one correspondence with its symmetries. She counts as one of the top mathematicians of the interwar period, a status she managed to achieve in the face of cruel discrimination because of her gender and descent. At the University of Göttingen, Germany, where she spent most of her career, she was refused a paid position, despite strong support from her colleagues David Hilbert and Felix Klein. In 1933, she emigrated to the US to escape the Nazi regime.

The University of Göttingen. Image: Daniel Schwen, CC BY-SA 2.5

Let's put aside the grim history and step into Noether's mathematical footsteps to find out what symmetries have to do with conserved quantities. Consider two Hamilton functions $H$ and $I$ and the corresponding dynamical systems, \begin{align} \frac{\mathrm{d} z}{\mathrm{d}t} &= \{H,z\} , \label{H}\tag{$H$}\\ \frac{\mathrm{d} z}{\mathrm{d}t} &= \{I,z\} .
\label{I}\tag{$I$} \end{align} In the Kepler problem, for example, the system labelled \eqref{H} would describe the physical motion of a planet and the one labelled \eqref{I} a rotation around the sun. The Hamilton function for a rotation is a component of the angular momentum vector, so in this example we would take $I$ equal to a component of $\boldsymbol{L} = \boldsymbol{x} \times \boldsymbol{p}$. Now what does it mean for \eqref{I} to be a symmetry of the system \eqref{H}? It means that the motion of \eqref{I} does not change the equation \eqref{H}. Since the dynamics of a system is fully encoded by its Hamilton function, this is equivalent to saying that the system \eqref{I} does not change the Hamilton function $H$. Hence \[ \text{\eqref{I} is a symmetry of \eqref{H}} \quad \iff \quad \text{$H$ is a conserved quantity of \eqref{I}} . \] We can use property (A) to express this in terms of a Poisson bracket: \[ \text{\eqref{I} is a symmetry of \eqref{H}} \quad \iff \quad \{I,H\}= 0 . \] Next we use property (B): the Poisson bracket is skew-symmetric, hence \[ \text{\eqref{I} is a symmetry of \eqref{H}} \quad \iff \quad \{H,I\}= 0 . \] Or, using property (A) once again: \[ \text{\eqref{I} is a symmetry of \eqref{H}} \quad \iff \quad \text{$I$ is a conserved quantity of \eqref{H}} . \] This is essentially the statement of Noether's first theorem: the symmetries of a system are related to its conserved quantities. There is an important thing that we have swept under the rug in this derivation. Not every possible symmetry is generated by a Hamilton function. Hence the correct formulation of Noether's theorem is that conserved quantities are in one-to-one correspondence with Hamiltonian symmetries. This issue disappears when, instead of Hamiltonian mechanics, we consider Lagrangian mechanics, which is based on the calculus of variations. 
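Back in the Hamiltonian picture, the chain of equivalences above can be confirmed numerically for the Kepler example: taking $I$ to be a component of angular momentum, the bracket $\{H,I\}$ vanishes. A sketch with central-difference partial derivatives (the step size, tolerances and test points are ad hoc):

```python
# Checking {H, I} = 0 for the planar Kepler system (m = 1), with I = L
# a component of angular momentum; derivatives by central differences.
def pd(f, x, p, wrt, i, h=1e-5):
    x1, p1, x2, p2 = list(x), list(p), list(x), list(p)
    (x1 if wrt == "x" else p1)[i] += h
    (x2 if wrt == "x" else p2)[i] -= h
    return (f(x1, p1) - f(x2, p2)) / (2 * h)

def bracket(F, G, x, p):
    """{F,G} = sum_i dF/dp_i * dG/dx_i - dF/dx_i * dG/dp_i."""
    return sum(pd(F, x, p, "p", i) * pd(G, x, p, "x", i)
               - pd(F, x, p, "x", i) * pd(G, x, p, "p", i)
               for i in range(len(x)))

H = lambda x, p: (p[0] ** 2 + p[1] ** 2) / 2 - 1.0 / (x[0] ** 2 + x[1] ** 2) ** 0.5
L = lambda x, p: x[0] * p[1] - x[1] * p[0]

x, p = [1.0, 0.4], [-0.3, 0.8]
print(abs(bracket(H, L, x, p)) < 1e-6,                          # {H, L} = 0
      abs(bracket(H, L, x, p) + bracket(L, H, x, p)) < 1e-12)   # skew-symmetry
```

Of course this only probes the Hamiltonian side of the correspondence; the Lagrangian framework handles symmetries more generally.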
Within the Lagrangian framework a natural notion of symmetry leads to a one-to-one correspondence between symmetries and conserved quantities. In fact, Noether's original paper dealt with the Lagrangian perspective. It included not just the case of mechanics, but also field theory, which deals with partial differential equations. Her main motivation was to understand conservation of energy in Einstein's theory of gravity. This is a surprisingly subtle problem, because general relativity has an infinite number of symmetries. When a system has an infinite number of symmetries, the conserved quantities produced by Noether's first theorem are trivial in some sense. For example, a function which maps position and momentum to a constant would be a trivial conserved quantity: it does not change in time, but that fact does not tell us anything about the dynamical system. Noether's second theorem is relevant to these systems with infinitely many symmetries. Roughly speaking, it says that if a system has an infinite number of symmetries, then the equations of motion must have a certain redundancy to them: some of the information contained in one of the equations of motion will also be contained in the others.

Noether's legacy

It was known before Noether's time that conserved quantities are related to symmetries. And while her paper was the first one to make this connection precise, her main breakthrough was the lesser known second theorem. Noether's insights were warmly welcomed by mathematical physicists of the time, including Albert Einstein himself, and are still a key part of modern physics. The power of Noether's theorems lies in their generality: they apply to any system with the relevant kind of symmetries, and to prove them you don't need to know the particulars of the system at hand. Similarly, Poisson brackets allow us to capture essential features of physics with an equation that takes the same form no matter what system it describes.
Instead of having to work out all the forces between interacting objects, all you need to put into this framework is the total energy in the form of the Hamilton function. It's no wonder that mathematical physicists often prefer $\mathrm{d} z/\mathrm{d}t = \{H,z\}$ over $\boldsymbol{F} = m\boldsymbol{a}$.

Mats is a researcher at Loughborough University, working in the area of mathematical physics. He would not be able to illustrate angular momentum by performing pirouettes on ice. Instead he likes to get some linear momentum going as a long-distance runner.
Scalable Multi-grained Cross-modal Similarity Query with Interpretability

Mingdong Zhu, Derong Shen, Lixin Xu & Xianfang Wang

Data Science and Engineering, volume 6, pages 280–293 (2021)

Cross-modal similarity query has become a highlighted research topic for managing multimodal datasets such as images and texts. Existing research generally focuses on query accuracy by designing complex deep neural network models and hardly considers query efficiency and interpretability simultaneously, which are vital properties of a cross-modal semantic query processing system on large-scale datasets. In this work, we investigate multi-grained common semantic embedding representations of images and texts and integrate an interpretable query index into the deep neural network by developing a novel Multi-grained Cross-modal Query with Interpretability (MCQI) framework. The main contributions are as follows: (1) By integrating coarse-grained and fine-grained semantic learning models, a multi-grained cross-modal query processing architecture is proposed to ensure the adaptability and generality of query processing. (2) In order to capture the latent semantic relation between images and texts, the framework combines LSTM and attention models, which enhances query accuracy for the cross-modal query and constructs the foundation for interpretable query processing. (3) An index structure and a corresponding nearest neighbor query algorithm are proposed to boost the efficiency of interpretable queries. (4) A distributed query algorithm is proposed to improve the scalability of our framework. Compared with state-of-the-art methods on widely used cross-modal datasets, the experimental results show the effectiveness of our MCQI approach.
Introduction

With the rapid development of computer science and technology, multimedia data including images and texts have been emerging on the Internet and have become the main form through which humans know the world. Consequently, cross-modal similarity query has become an essential technique with wide applications, such as search engines and multimedia data management. Cross-modal similarity query [1] is an effective query paradigm in which users can get results of one type by submitting a query of the other type. In this work, we mainly focus on queries between images and texts. For instance, when a user submits a piece of textual description of a football game, the most relevant images in the dataset can be fetched, and vice versa. Since cross-modal similarity query should discover latent semantic relationships among different modalities, it has attracted great interest from researchers. Due to the significant advantage of deep neural networks (DNN) in feature extraction, DNN models are utilized for cross-modal similarity query [2]. The complex structure and high-dimensional feature maps equip deep neural networks with considerable power for learning nonlinear relationships; however, at the same time, complex models introduce some drawbacks. First, the numerous parameters of deep neural networks make the query process and results difficult to explain. That is, those models have weak interpretability, which is an important property for a general and reliable cross-modal query system. Second, in order to find the most similar data objects, typically the cosine similarity between the high-dimensional feature vector of the query object and that of each object in the whole dataset should be computed. Hence, for a large-scale dataset, the computation cost is so high that the query response time will be unacceptable.
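The efficiency bottleneck described above can be seen in miniature. A brute-force scan (a toy sketch in which 2-dimensional vectors stand in for high-dimensional cross-modal embeddings) computes one cosine similarity per object, so the cost grows linearly with dataset size times embedding dimension:

```python
# Toy illustration of the full-scan cost: one cosine similarity per object,
# i.e. time linear in |dataset| * dimension. Real embeddings are much larger.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

dataset = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
query = [1.0, 0.1]
scores = [cosine(query, v) for v in dataset]  # one pass over the whole dataset
best = max(range(len(dataset)), key=scores.__getitem__)
print(best)  # → 0
```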
Existing research tends to focus on designing complicated composite models to enhance query accuracy and hardly takes account of query interpretability, efficiency and scalability at the same time. Query interpretability of the query framework can improve the credibility of query results. Query efficiency can ensure the timeliness of query results. And query scalability can enhance the adaptability of query methods, especially when faced with large-scale data. Hence, to develop a cross-modal similarity query framework with interpretability, efficiency and scalability is necessary. There are three challenges in achieving this goal. The first is how to bridge the semantic gap among different modalities, which needs a sophisticated model to capture the common semantics at both coarse grain and fine grain. The second challenge is how to enhance the interpretability of a query framework with a complex structure and millions of parameters. The third challenge is how to equip the query model with scalability for processing large-scale data, which are ubiquitous nowadays. Our core insight is that we can leverage a deep neural network model to capture multi-grained cross-modal common semantics and build an efficient hybrid index with interpretability and scalability. Hence, in this work, we propose a novel efficient and effective Multi-grained Cross-modal Query framework with Interpretability (MCQI). In order to ensure the adaptability and generality of our framework, when training common feature vectors for different modalities we first capture coarse-grained and fine-grained semantic information by designing different networks and then combine them. And in order to discover the latent semantic relations between images and texts, we integrate the LSTM model and the attention model; moreover, this constructs the data foundation of cross-modal correlative information. In addition, for the sake of query efficiency, we build an index supporting interpretable queries.
Further, in order to enhance the scalability of our framework, a distributed query algorithm is proposed. Finally, to confirm the efficiency and effectiveness of our approach, we systematically evaluate its performance by comparing with 8 state-of-the-art methods on five widely used multimodal datasets. Concretely, our contributions are as follows:

(1) By integrating coarse-grained and fine-grained semantic learning models, a multi-grained cross-modal query processing architecture is proposed to ensure the adaptability and generality of query processing.

(2) In order to capture the latent semantic relation between images and texts, the framework combines LSTM and attention models, which enhances query accuracy for the cross-modal query and constructs the foundation for interpretable query processing.

(3) An index structure and a corresponding nearest neighbor query algorithm are proposed to boost the efficiency of interpretable queries.

(4) A distributed query algorithm is proposed to improve the scalability of our framework.

The remainder of this paper is organized as follows. Section 2 briefly reviews related work. In Sect. 3, we introduce definitions of problems, and then describe in detail our MCQI framework and a kNN query algorithm in Sect. 4. Section 5 gives a distributed query algorithm to enhance the scalability of our framework. Section 6 provides experimental results and analysis on five datasets, and we conclude in Sect. 7.

Related Work

In this section, we briefly review the related research for cross-modal query, including cross-modal retrieval, latent semantic alignment, cross-modal hashing and distributed similarity query.

Cross-modal Retrieval

Traditional methods mainly learn linear projections for different data types. Canonical correlation analysis (CCA) [3] is proposed to learn cross-modal common representation by maximizing the pairwise correlation, which is a classical baseline method for cross-modal measurement.
Beyond pairwise correlation, joint representation learning (JRL) [4] is proposed to make use of semi-supervised regularization and semantic information, which can jointly learn common representation projections for up to five data types. S2UPG [5] further improves JRL by constructing a unified hypergraph to learn the common space, utilizing fine-grained information. In recent years, DNN-based cross-modal retrieval has become an active research topic. Deep canonical correlation analysis (DCCA) is proposed by [6] with two subnetworks, which combines DNN with CCA to maximize the correlation on the top of the two subnetworks. UCAL [7] is an unsupervised cross-modal retrieval method based on adversarial learning, which takes a modality classifier as a discriminator to distinguish the modality of learned features. The DADN approach [8] is proposed for addressing the problem of zero-shot cross-media retrieval, which learns common embeddings with category semantic information. These methods mainly focus on query accuracy rather than query efficiency and interpretability.

Latent Semantic Alignment

Latent semantic alignment is the foundation for interpretable query. [9] embeds patches of images and dependency tree relations of sentences in a common embedding space and explicitly reasons about their latent, intermodal correspondences. Adding a generation step, [10] proposes a model which learns to score sentence and image similarity as a function of R-CNN object detections with outputs of a bidirectional RNN. By incorporating attention into neural networks for vision related tasks, [11, 12] investigate models that can attend to salient parts of an image while generating its caption. These methods inspire ideas for achieving interpretable cross-modal query, but neglect issues of query granularity and efficiency.

Cross-modal Hashing

Deep cross-modal hashing (DCMH) [13] combines hashing learning and deep feature learning by preserving the semantic similarity between modalities.
Correlation auto-encoder hashing (CAH) [14] embeds the maximum cross-modal similarity into hash codes using nonlinear deep autoencoders. Correlation hashing network (CHN) [15] jointly learns image and text representations tailored to hash coding and formally controls the quantization error. Pairwise relationship guided deep hashing (PRDH) [16] jointly uses two types of pairwise constraints, from intra-modality and inter-modality, to preserve the semantic similarity of the learned hash codes. [17] proposes a generative adversarial network to model cross-modal hashing in an unsupervised fashion and a correlation graph-based learning approach to capture the underlying manifold structure across different modalities. Hashing is a common tool for large high-dimensional data and can achieve sublinear time complexity for retrieval. However, after constructing a hash index on Hamming space, it is difficult to obtain flexible query granularity and reasonable interpretability. Distributed Similarity Query Existing methods for distributed similarity queries in metric spaces can be partitioned into two categories [18]. The first category utilizes basic metric partitioning principles to distribute the data over the underlying network. [19] proposes a distributed index, the GHT* index, which exploits parallelism in a dynamic network of computers by placing a part of the index structure in every network node. [20] proposes a mapping mechanism that enables the data to be stored in well-established structures such as the B-tree. The second category utilizes index integration techniques to distribute the data. [21] integrates the R-tree with a CAN overlay to process multi-dimensional data in a cloud system. [22] combines the B-tree with a BATON overlay to provide a distributed index with high scalability and low maintenance cost. Both choose a subset of local index nodes to build global index nodes by computing a cost model.
[23] integrates a quadtree index with a Chord overlay to enable more powerful access to data in P2P networks. In this paper, we adopt a pivot-mapping-based method, because existing methods are not sufficient for our setting: they cannot adjust the distribution of data for different query loads and therefore cannot keep the load balanced, which is important in a distributed environment. For a cross-modal similarity query, given a query object of one type, the most similar objects of the other type in the dataset should be returned. The formal definition is given below. The multimodal dataset consists of two modalities with m texts and n images, denoted as D = {Dt, Di}. The texts are originally encoded as one-hot codes, and the text-modality data in D are denoted as \(D^{t} = \left\{ {x_{k}^{t} } \right\}_{k = 1}^{m}\), where the kth text object is defined as \(x_{k}^{t} \in R^{{l_{k} * c}}\) with sentence length lk and vocabulary size c. The image-modality data are denoted as \(D^{i} = \left\{ {x_{k}^{i} } \right\}_{k = 1}^{n}\), where the kth image instance is defined as \(x_{k}^{i} \in R^{{w * h * c^{\prime}}}\) with image resolution w*h and color channel number c'. Besides, a pairwise correspondence is denoted as (\({\text{x}}_{\text{k}}^{\text{t}}\), \({\text{x}}_{\text{k}}^{\text{i}}\)), which means that the two instances of different types are strongly semantically relevant. A cross-modal similarity query finds, for a given query object, the objects of the other modality that share relevant semantics with it. The kNN query is a classical type of similarity query, defined as follows. Definition 1 (kNN Query). Given an object q, an integer k ≥ 1, dataset D and similarity function SIM, the k nearest neighbors query kNN computes a size-k subset S ⊆ D, s.t.
\({ }\forall o_{i} \in S,o_{j} \in D - S:SIM\left( {q,o_{i} } \right) \ge SIM\left( {q,o_{j} } \right).\) In this work, we use cosine similarity as the similarity function. Table 1 lists the notations used throughout this paper; the list mainly contains notations that are mentioned far from their definitions. Table 1 Used notations Proposed Model In this section, we describe the proposed MCQI framework in detail. As shown in Fig. 1, MCQI consists of two stages. The first is the learning stage, which models a common embedding representation of multimodal data by fusing coarse-grained and fine-grained semantic information. The second is the index construction stage, in which an M-tree index and an inverted index are integrated to process efficient and interpretable queries. In the following, we describe the framework in terms of embedding representations of multimodal data and interpretable query processing. The framework of MCQI Embedding Representations of Multimodal Data In the first stage, MCQI learns the embedding representation of multimodal data by fusing coarse-grained and fine-grained semantic information. Fine-grained Embedding Learning with Local Semantics Different methods are used to extract local semantic features for texts and images. For texts, EmbedRank [24] is utilized to extract keyphrases, and a pretrained Sent2vec model [25] computes the embedding of each keyphrase. Then, through a three-layer fully connected neural network, we map each keyphrase into the common embedding space with dimension d_l, denoted as tspq, the embedding representation of the qth keyphrase of the pth text description. For images, a Region Convolutional Neural Network (RCNN) [26] is utilized to detect objects in images.
We use the top detected locations in the entire image as local semantic features and then compute the common embedding vectors from the visual matrix in each bounding box by a pretrained convolutional neural network; a transition matrix then transforms each vector into the common space with dimension d_l. Finally, we obtain isuv, the embedding representation of the vth bounding box of the uth image. Typically, for a matched text-image pair, at least one keyphrase in the text is semantically relevant to a certain bounding box in the image; that is, at least one common embedding vector of the text instance is close to a certain common embedding vector of the image instance. Based on this intuition, and following the hinge rank loss, we set the original objective of fine-grained embedding learning as follows: $${\text{C}}_{b} \left( {ts_{pq} ,is_{uv} } \right) = \mathop \sum \limits_{{\text{q}}} \mathop \sum \limits_{{\text{v}}} \left( {\frac{Pnum}{{Anum}}} \right)^{{I\left( {p \ne u} \right)}} \left( {1 - \frac{Pnum}{{Anum}}} \right)^{{I\left( {p = u} \right)}} \max \left( {0,M - \left( { - 1} \right)^{{I\left( {p \ne u} \right)}} \frac{{ts_{pq} \cdot is_{uv} }}{{\left| {ts_{pq} } \right| \cdot \left| {is_{uv} } \right|}}} \right)$$ Here, Pnum is the number of matched pairs in the training sample set and Anum is the total number of training samples. \(\left( {\frac{Pnum}{{Anum}}} \right)^{{I\left( {p \ne u} \right)}} \left( {1 - \frac{Pnum}{{Anum}}} \right)^{{I\left( {p = u} \right)}}\) is utilized to balance positive and negative samples. \(\frac{{ts_{pq} \cdot is_{uv} }}{{\left| {ts_{pq} } \right| \cdot \left| {is_{uv} } \right|}}\) is the cosine similarity of two embedding vectors. M is the margin constant, which defines the tolerance for true positives and true negatives.
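A single term of formula (1) can be sketched as follows; `pos_ratio` stands in for Pnum/Anum, and the default M = 0.5 is an illustrative choice, not a value from the paper:

```python
import numpy as np

def hinge_term(ts, iv, matched, M=0.5, pos_ratio=0.1):
    """One term of formula (1): balanced hinge rank loss on cosine similarity.
    matched=True when the text p and the image u form a ground-truth pair (p = u)."""
    cos = float(ts @ iv / (np.linalg.norm(ts) * np.linalg.norm(iv)))
    sign = 1.0 if matched else -1.0                 # (-1)^{I(p != u)}
    # Down-weight the abundant negatives, up-weight the scarce positives
    weight = (1.0 - pos_ratio) if matched else pos_ratio
    return weight * max(0.0, M - sign * cos)
```

A matched pair with high cosine similarity contributes zero loss, while a mismatched pair with high similarity is penalized, scaled by the class-balance weight.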
The closer M is to 1, the stricter the criterion for semantically recognizing true positives and true negatives. The Cb cost is computed over all pairs of local features between text instances and image instances. However, in many cases only a few local features of two semantically relevant instances are matched, and similar pairs are difficult to acquire by computation over all pairs. To address this problem, following MILE [9], we make a multiple-instance learning extension of formula (1), shown in formula (2). For each text instance, the local features of the matched image are put into a positive bag, while local features in other images are treated as negative samples. $$\begin{gathered} C_{P} = \min_{kqv} \sum\nolimits_{q} {\sum\nolimits_{v} {\left( {\frac{Pnum}{{Anum}}} \right)} }^{{I\left( {p \ne u} \right)}} \left( {1 - \frac{Pnum}{{Anum}}} \right)^{{I\left( {p = u} \right)}} \max \left( {0,M - k_{qv} \frac{{ts_{pq} \cdot is_{uv} }}{{\left| {ts_{pq} } \right| \cdot \left| {is_{uv} } \right|}}} \right) \hfill \\ {\text{s}}.{\text{t}}.\mathop \sum \limits_{{{\text{v}} \in {\text{B}}_{{\text{q}}} }} \left( {{\text{k}}_{{{\text{qv}}}} { + 1}} \right) \ge {2 }\forall {\text{v}},k_{qv} = \left\{ \begin{gathered} 1,p = u \hfill \\ - 1,p \ne u \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$ Here, \({\text{B}}_{\text{q}}\) is the positive bag of the qth feature vector and \({\text{k}}_{\text{qv}}\) is the correlation index, which indicates whether the corresponding text instance and image instance are matched. It is worth noting that each feature vector \({\text{is}}_{\text{uv}}\) and the corresponding bounding box are stored in the storage system for processing interpretable queries. Coarse-grained Embedding Learning with Global Semantics The coarse-grained embedding network tries to capture global common semantics between texts and images.
For texts, the Universal Sentence Encoder [27] is utilized to extract feature vectors, which are transformed by fully connected layers into the global common embedding space with dimension d_g. For images, inspired by [11], a pretrained LSTM with a soft attention model is integrated to translate images into a sequential representation. For an image, the feature maps before classification in a pretrained R-CNN network and the whole image's feature maps before the fully connected layers of a pretrained CNN network are combined into feature vectors, denoted as \(a = \left\{ {a_{i} } \right\}_{i = 1}^{LV}\), where LV is the number of feature vectors. Our implementation of LSTM with soft attention is based on [11]. ai is the input; y and \(\alpha_{ti}\) are outputs, where y is the generated sequential text and \(\alpha_{ti}\) represents the importance of feature vector \({\text{a}}_{\text{i}}\) when generating the tth word. Please note that each word yt has an attention weight \(\alpha_{ti}\) for each feature vector \({\text{a}}_{\text{i}}\), and each tuple tut = < yt, imageID, \(\alpha_{ti}\), xloci, xloc1i, xloc2i, yloc1i, yloc2i > is stored for answering future queries, where imageID is the image's unique identifier and xloci, xloc1i, xloc2i, yloc1i, yloc2i are the corresponding coordinate positions of \({\text{a}}_{\text{i}}\) in the image. We collect all tuples as the set TU = {tut}. For the generated sequential text \({\text{y}}\), the Universal Sentence Encoder is utilized to generate its coarse-grained representative vector, denoted as GIV, while the coarse-grained representative vector of the original paired training text is denoted as OTV. Intuitively, the global training objective function is as follows. $$C_{G} = GIV \cdot OTV$$ Multi-grained Objective Function We are now ready to formulate the multi-grained objective function. The objective function is designed by two criteria.
First, matched pairs of images and texts are likely to have similar patches, which is captured by CP. Second, matched pairs of images and texts probably have similar global semantics, which is captured by CG. By integrating CP and CG, the objective function is defined as follows. $$C\left( \theta \right) = \alpha C_{P} \left( \theta \right) + \beta C_{G} \left( \theta \right) + \gamma \left| \theta \right|_{2}^{2} ,$$ where θ is a shorthand for the parameters of our model and \(\alpha , \beta , \gamma\) are hyperparameters computed by cross-validation. \({|\theta |}_{2}^{2}\) is the regularization term. The proposed model consists of two branches, designed for common fine-grained semantics and coarse-grained semantics, respectively. Naturally, the training process is divided into two stages, i.e., branch training and joint training. Both are based on stochastic gradient descent (SGD) with a batch size of 32, a momentum of 0.9 and a weight decay of 0.00005. Stage 1: In this stage, the branches for common fine-grained and coarse-grained semantics are trained in turn, taking formula (2) and formula (3) as loss functions, respectively. In the fine-grained branch, pretrained Sent2vec and RCNN models are utilized, while in the coarse-grained branch, pretrained Universal Sentence Encoder and LSTM models are utilized. The default parameters of these pretrained models are used and kept fixed at this stage. The other parameters of our model, including the attention mechanism, are automatically initialized with the Xavier algorithm [28]. Stage 2: After all branch networks are trained, we jointly fine-tune the entire model by combining the loss terms over all granularities in formula (4). Interpretable Query Processing In the MCQI framework, images and texts are represented by high-dimensional feature vectors, which include fine-grained and coarse-grained semantic features.
Denote by IFVi the feature vector of the ith instance Insi; then IFVi = {CFVFi, CFVCi}, where CFVFi and CFVCi are the common fine-grained and coarse-grained semantic features of Insi, respectively. Given a query instance, i.e., an image or text instance, in order to find the matched cross-modal instance, i.e., the most relevant text or image instance, the similarity between two cross-modal instances is computed by the cosine similarity shown in formula (5): $$\begin{gathered} {\text{SIM}}\left( {Ins_{i} ,Ins_{j} } \right) = \delta \frac{{CFVF_{i} \cdot CFVF_{j} }}{{\left| {CFVF_{i} } \right| * \left| {CFVF_{j} } \right|}} + \left( {1 - \delta } \right)\frac{{CFVC_{i} \cdot CFVC_{j} }}{{\left| {CFVC_{i} } \right| * \left| {CFVC_{j} } \right|}} \hfill \\ = \delta {\text{Cosine}}\left( {CFVF_{i} ,CFVF_{j} } \right) + \left( {1 - \delta } \right){\text{Cosine}}\left( {CFVC_{i} ,CFVC_{j} } \right) \hfill \\ \end{gathered}$$ Here, Insi and Insj are two cross-modal instances, \(\delta\) is the weight factor and Cosine is the cosine similarity function. A naive way to obtain matched cross-modal instances is pairwise computation; however, this is inefficient, particularly when the dataset is large and the dimension of the vectors is high. To address this, an inverted index and an M-tree index are integrated into the MCQI model. The M-tree index increases the efficiency of queries and the inverted index enhances their interpretability. Index construction and the query processing method based on these indices are discussed in turn below. Index Construction As shown in formula (5), the similarity between two instances is mainly calculated from the cosine similarities of two types of feature vectors. By assuming that the variables obey a uniform distribution, we obtain Observation 1 below.
Observation 1 shows that the cosine similarity between the whole feature vectors of Insi and Insj is close to SIM(Insi, Insj). Observation 1 For random variable \(\delta \in \left[ {0.2,0.8} \right],\exists \varepsilon ,\sigma \in \left[ {0,1} \right],\) s.t. \(P\left( {\left| {\left( {\delta \frac{{CFVF_{i} \cdot CFVF_{j} }}{{\left| {CFVF_{i} } \right| * \left| {CFVF_{j} } \right|}} + \left( {1 - \delta } \right)\frac{{CFVC_{i} \cdot CFVC_{j} }}{{\left| {CFVC_{i} } \right| * \left| {CFVC_{j} } \right|}}} \right) - \frac{{IFV_{i} \cdot IFV_{j} }}{{\left| {IFV_{i} } \right| * \left| {IFV_{j} } \right|}}} \right|{ < }\varepsilon } \right){ > }\sigma ,\) i.e., \(P\left( {\left| {SIM\left( {Ins_{i} ,Ins_{j} } \right) - {\text{Cosine}}\left( {Ins_{i} ,Ins_{j} } \right)} \right|{ < }\varepsilon } \right){ > }\sigma\). This observation is obtained by a statistical hypothesis testing method, which is illustrated in the experiments. Setting DIF = \(\left|{\text{SIM}}({\text{Ins}}_{\text{i}}, {\text{Ins}}_{\text{j}})-\text{Cosine(}{\text{Ins}}_{\text{i}}, {\text{Ins}}_{\text{j}}\text{)}\right|\), we get P(DIF < \(\varepsilon\)) > \(\sigma\). In the experiments, when \(\varepsilon\) = 0.05 we have \(\sigma\) = 0.9, and when \(\varepsilon\) = 0.1 we have \(\sigma\) = 0.98. The M-tree is known to be an efficient structure for NN queries in metric spaces. In order to use an M-tree index, the cosine distance is transformed to the angular similarity (AS), which is a metric. The angular similarity between Insi and Insj is defined in formula (6): $${\text{AS}}\left( {Ins_{i} , \, Ins_{j} } \right) = 2\frac{{{\text{arccos(Cosine(Ins}}_{{\text{i}}} {\text{, Ins}}_{{\text{j}}} {))}}}{{\uppi }}$$ Lemma 1. For any instance q, the nearest neighbor of q under angular similarity is also the nearest neighbor of q under cosine similarity. Lemma 1 can be easily proved by contradiction, which is omitted for brevity.
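Formula (6) is a monotone transform of cosine similarity, which is why Lemma 1 holds: arccos is strictly decreasing, so the object with the highest cosine similarity to q attains the smallest angular value. A minimal numerical sketch (function names are ours):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def angular(a, b):
    """Formula (6): 2*arccos(Cosine)/pi, which behaves as a metric
    on directions (0 for identical direction, 1 for orthogonal)."""
    return 2.0 * np.arccos(np.clip(cosine(a, b), -1.0, 1.0)) / np.pi
```

Ranking candidates by ascending `angular` gives the same nearest neighbor as ranking by descending `cosine`, which is exactly the monotonicity Lemma 1 relies on.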
Based on Lemma 1 and formula (6), an M-tree is constructed on the set of feature vectors, and the M-tree is then augmented with an inverted index over the semantic relationship tuple set TU mentioned in Sect. 4.1. Interpretable kNN Query To process similarity queries efficiently, we adopt a filter-and-refine model: our method first obtains candidate matched objects via the M-tree and then verifies the candidates to identify the final answers. The M-tree inherently supports range queries, denoted Range(Insi, r), where Insi is the query instance and r is the query range. In our algorithm, the kNN candidates are efficiently obtained by two range queries on the M-tree. To verify the candidates, formula (5) is utilized, and for the verified objects the inverted index is accessed to give the reasons why the objects are relevant to the query. The detailed query processing is shown in Algorithm 1. Specifically, we first use the range query Range(Insi, 0) to find the closest index node and read all the objects in that node (line 2). If the number of objects is less than k, we read its sibling nodes through its parent node, recursively, until we obtain k objects (line 3). We then use the kth farthest distance r from the query instance to issue the second range query with range r and obtain the candidates. Finally, we utilize formula (5) to verify the candidates, and each matched pair is augmented with its relationship interpretation through the inverted index (lines 6–8). As for complexity, considering the first range query with range zero, the cost of processing a query is O(H), where H is the height of the M-tree. For the second range query, let the selectivity of a range query be se; the cost of each level of index nodes can be approximated as a geometric sequence with common ratio cr*se, where cr is the capacity of an index node.
Hence, the average cost is: $$\frac{{{\text{cr*se*(1 - (cr*se)}}^{{\text{H}}} {)}}}{{\text{1 - cr*se}}}$$ As for query accuracy, by Observation 1 and Lemma 1 we obtain Observation 2. Observation 2. Algorithm 1 obtains the kNN instances of a query instance with probability more than \(\sigma\). Assume o∗ is the actual kth NN result but is not returned. Let dis be the distance between the returned kth NN result and the query; by Lemma 1, the distance between o∗ and the query is less than dis + DIF. Setting \(\varepsilon\) = dis, by Observation 1 DIF is less than dis with probability \(\sigma\) or more. So Algorithm 1 retrieves o∗ with probability at least \(\sigma\), which contradicts the assumption; thus Observation 2 is proved. Distributed Algorithm When the data set is relatively large, the computational complexity of the algorithm is relatively high, as shown in formula 7. Therefore, in order to process large-scale data sets effectively, this section extends the framework to a distributed environment and proposes a distributed kNN algorithm. The distributed algorithm is based on the idea of divide and conquer. Each computing node in a P2P distributed system is independent and autonomous. Let C be the number of computing nodes and PV be the pivot set of the data set, PV = {pvi}, where 1 ≤ i ≤ pn and pn is the number of pivot points. PV is stored on each computing node as global information. Each computing node is responsible for one or more pivot points, and data are assigned to computing nodes according to the distance between the data object and the pivot points. Each computing node then builds an M-tree and an inverted index locally. When a computing node receives a similarity query q with query range R, it acts as the coordinator and calculates the relevant pivot points by formula 8.
$${\text{SIM}}(pv_{i} ,\overline{q}) \ge {1} - (R + maxd_{i} ),$$ where maxdi is the largest distance among the data objects maintained by pvi. The coordinator then forwards the query to the computing nodes where the relevant pivot points are located; each such node calculates the query result through its local indices and returns it to the coordinator. Finally, the coordinator collects the intermediate query results and returns the final result to the user. Obviously, the selection of pivot points and the query algorithms are the key factors of query performance, and these two parts are discussed in detail below. Selection of Pivot Points The main function of a pivot point is to filter irrelevant data objects during query processing, so the criterion for selecting pivot points is to maximize the filtering ability of queries. In metric spaces, the farthest points are generally selected as pivots. Based on this heuristic, we propose a pivot point selection scheme similar to [29]: first, randomly select a data object o from the sample data set and put the data object farthest from o into the pivot point set PV; then add to PV the data object of the sample data set with the largest average distance to the current pivot points; repeat the previous step until |PV| = pn. Query-Sensitive Load Balancing In a distributed environment, consistent hashing is used to maintain and manage the pivot points; that is, the pivot points are mapped into the domain [0, 2^max], which is divided into multiple intervals (tokens), each computing node is responsible for one or more intervals, and queries are routed through the distributed hash table of the system. Through a hash method such as SHA1, the pivot points are divided evenly over the computing nodes. However, the query load is not always evenly distributed, and its distribution changes dynamically.
Therefore, in order to achieve system load balance, a query-aware adjustment method is needed. First, set a threshold t for load balance. If a computing node (ComputerA) exceeds t times the average load, that is, ComputerA becomes a load bottleneck for queries, then ComputerA communicates with the computing node (ComputerB) adjacent to its responsible area; the area ComputerA is responsible for is reduced while the area ComputerB is responsible for is increased, and the corresponding pivot points are moved from ComputerA to ComputerB. After this step, if other computing nodes have a load balance problem, the process is repeated for those nodes until system load balance is achieved. Note that in order to avoid thrashing, the load adjustment should always be performed in the same direction. Computation of pn The execution time of query processing can be divided into two parts: the time gt for computing the relevant computing nodes based on the pivot points, and the time lt for each computing node to perform the local query. Therefore, the total computing time is ct = gt + lt. Obviously, gt is proportional to the number of pivot points pn, that is, gt = α*pn, where α is a coefficient related to the processing capability of the computing nodes. The local query time lt is inversely related to pn; on average, lt = \(\beta \frac{{r^{{\left( {\log_{m} \frac{N}{pn}} \right)}} }}{r - 1} - 1\) = \(\beta \frac{{\left( \frac{N}{pn} \right)^{{\left( {\frac{1}{{\log_{r} m}}} \right)}} }}{r - 1} - 1\), where r is the average selectivity for child nodes of the index tree by the query, m is the out-degree of the index tree, N is the size of the data set and β is a coefficient determined by the average processing capability of the computing nodes.
Therefore, the formula for ct is obtained: $${\text{ct}} = \alpha *pn + {\upbeta }\frac{{{(}\frac{{\text{N}}}{{{\text{pn}}}}{)}^{{{(}\frac{1}{{{\text{log}}_{{\text{r}}} {\text{m}}}}{)}}} { - 1}}}{{\text{r - 1}}}$$ By taking the derivative and solving for the extreme value, it is easy to see that ct is minimized when $$pn = \left( {\frac{{\beta N^{{\frac{1}{{\log_{r} m}}}} }}{{\alpha \left( {r - 1} \right)\log_{r} m}}} \right)^{{\frac{{\log_{r} m}}{{\log_{r} m + 1}}}} .$$ By formula 10, the number of pivot points can be determined. Distributed kNN Query Algorithm As mentioned at the beginning of this section, range queries are easily handled by formula 8. In this section, we discuss the distributed nearest neighbor query algorithm. Consider first the simple case k = 1, i.e., a 1NN query. When a computing node receives a 1NN query q, the node, acting as the scheduler, first issues the query with object q and query radius 0 and calculates the relevant pivot points, that is, the pivot point set PS = {pvi | SIM(pvi, q) \(\ge\) 1 - maxdi}, where maxdi is the distance between pivot pvi and the farthest data object it maintains. Then, the scheduler forwards the query to the computing nodes (denoted CS) responsible for the data objects in PS, and each such node calculates the local NN of q and returns it to the scheduler. After receiving all candidate nearest neighbors, the scheduler determines the data object with the smallest distance to q; let mind be this smallest distance. After that, the scheduler uses q as the query object and mind as the query range to calculate the relevant pivot points and forwards the NN query to the computing nodes responsible for these pivot points, except for those in the CS set. Finally, the scheduler collects the candidate data objects, calculates the NN and returns it to the user. The kNN query algorithm is discussed next; the specific process is shown in Algorithm 2.
First, the initial query distance initR is estimated according to the statistical histogram, and the relevant computing nodes are calculated (line 2). For a kNN query with query object q, the distance between q and each pivot point is qdisti = 1 - SIM(pvi, q); let $$initR = {\text{argmin}}_{r} \mathop \sum \limits_{{\text{i = 1}}}^{{{\text{i}} \le {\text{pn}}}} {\text{NumHist(pv}}_{{\text{i}}} {\text{, qdist}}_{{\text{i}}} {\text{ - r, qdist}}_{{\text{i}}} {\text{ + r)}} \le {\text{k}}$$ where the function NumHist(r, dmin, dmax) obtains the number of data objects in the index rooted at r with distances between dmin and dmax. Then, the query is forwarded to the relevant computing node set CS1. Each computing node calculates its local kNN data objects as a candidate set and returns it to the scheduler. The scheduler calculates the kth smallest candidate distance mind over all candidate data objects and calculates the relevant computing node set CS2 for query range mind (lines 3–7); it then forwards the request to the computing node set CS2 - CS1, collects the local kNN candidates of each computing node, calculates the final kNN result and returns it (lines 8–11). The kNN query algorithm is also implemented using two range queries, but the main difference is that in the first range query, kNN uses the histogram information summarized at the pivot points to predict a better query range initR. initR gives a good estimate for the k nearest neighbor data objects, thereby effectively reducing the cost of the second range query. Experiment Setup We evaluate cross-modal query performance on Flickr8K [30], Flickr30K [31], NUS-WIDE [32], MS-COCO [33] and a synthetic dataset Synthetic9K. Flickr8K consists of 8096 images from the Flickr.com website, each annotated with 5 sentences via Amazon Mechanical Turk. Flickr30K is also a cross-modal dataset, with 31,784 images and corresponding descriptive sentences.
The NUS-WIDE dataset is a web image dataset for media search consisting of about 270,000 images with their tags; each image together with its corresponding tags is viewed as an image/text pair. MS-COCO contains 123,287 images, each also annotated with five independent sentences provided by Amazon Mechanical Turk. By extracting 2000 image/text pairs from each of the above datasets, we obtain a hybrid dataset, denoted Synthetic9K. For each data set, 10% of the data are used as the testing and validation sets, while the rest form the training set. We compare our approach with eight state-of-the-art cross-modal retrieval methods: CCL [34], HSE [35], DADN [8], SCAN [36], DCMH [13], LGCFL [37], JRL [4] and KCCA [38]. CCL learns cross-modal correlation by a hierarchical network in two stages: first, separate representations are learned by jointly optimizing intra-modality and inter-modality correlation, and then multi-task learning is adopted to fully exploit the intrinsic relevance between them. HSE proposes a uniform deep model to learn the common representations for four media types simultaneously by considering classification, center and ranking constraints. DADN proposes a dual adversarial distribution network, which puts zero-shot learning and correlation learning in a unified framework to generate common embeddings for cross-modal retrieval. SCAN considers the latent alignments between image regions and text words to learn image-text similarity. DCMH combines hashing learning and deep feature learning while preserving the semantic similarity between modalities. LGCFL uses a local group-based prior to exploit popular block-based features and jointly learns basis matrices for different modalities. JRL applies semi-supervised regularization and sparse regularization to learn the common representations. KCCA follows the idea of projecting the data into a higher-dimensional feature space and then performing CCA.
Some compared methods, such as CCL and HSE, rely on category information for common representation learning; however, the datasets have no label annotations available. So, in our experiments, keywords are first extracted from the text descriptions by the TF-IDF method and treated as labels for the corresponding images. For distributed query processing, our algorithms are compared with the two most related methods. One is a naive method (SimD), in which data objects are scattered randomly and, for a query, objects are compared with the query pairwise. The other is a state-of-the-art method [39] (DistMP), a general framework based on MapReduce. Following [34], we apply the mean average precision (MAP) score to evaluate cross-modal query performance. We first calculate the average precision (AP) score for each query as follows and then take their mean as the MAP score. $$AP = \frac{1}{\left| R \right|}\sum\nolimits_{i = 1}^{k} {p_{i} * rel_{i} }$$ where |R| is the number of ground-truth relevant instances, \({\text{k}}\) comes from the kNN query, \({\text{p}}_{\text{i}}\) denotes the precision of the top i results and \({\text{rel}}_{\text{i}}\) indicates whether the ith result is relevant. We adopt TensorFlow [40] to implement our MCQI approach. In the first stage, we take the 4096-dimensional feature extracted by RCNN from the image inside a given bounding box. For the nonlinear transformation model, we use three fully connected layers with 1,024 dimensions and set the dimensions d_l and d_g of the common embedding spaces to 1024. Sent2vec for fine-grained semantics has 700 dimensions and is pretrained on Wikipedia, and the Universal Sentence Encoder for coarse-grained semantics has 512 dimensions.
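The AP score above (and MAP as its mean over queries) can be sketched directly; here `rels` is the 0/1 relevance list of one ranked result list and `R` the number of ground-truth relevant instances:

```python
def average_precision(rels, R):
    """AP over one ranked list: rels[i] is 1 if the (i+1)-th result is relevant;
    R is the number of ground-truth relevant instances."""
    hits, score = 0, 0.0
    for i, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            score += hits / i      # precision at rank i, counted only on hits
    return score / R

def mean_average_precision(all_rels, all_R):
    """MAP: mean of per-query AP scores."""
    return sum(average_precision(r, R) for r, R in zip(all_rels, all_R)) / len(all_rels)
```

For example, a ranked list with relevance pattern [1, 0, 1] and R = 2 yields AP = (1/1 + 2/3)/2.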
Experiments for centralized algorithms are conducted on a server with an Intel E5-2650v3 CPU, 256 GB RAM, an NVIDIA V100 and Ubuntu 16.04, while experiments for distributed algorithms are run on a cluster of 30 computer nodes, each with an Intel Core i5-10210 1.6 GHz 4-core CPU and 8 GB memory. Verification of Observation 1 Figures 2 and 3 show the accuracy of DIF < 0.05 and DIF < 0.1, respectively, with different sample sizes. \(\delta\) is randomly generated from three different ranges, i.e., [0.2, 0.8], [0.3, 0.7] and [0.4, 0.6]; across these ranges we can see that the closer \(\delta\) is to 0.5, the higher the accuracy. For DIF < 0.05, as the sample size increases, the accuracy is steadily above 0.9; for DIF < 0.1, as the sample size increases, the accuracy is steadily above 0.99. Without loss of generality, following the statistical hypothesis testing method, in the situation \(\delta \in [0.2, 0.8]\) we assume DIF < 0.05 with significance level 0.1. In our experiments with sample size 100,000, the mean value of DIF is 0.021 and the sample variance is 0.00045; because the standard deviation is unknown, the t-distribution is used. The test statistic is -0.63, and with significance level 0.1 the critical quantile is -1.28. Because -0.63 > -1.28, the assumption is accepted. Accuracy of DIF < 0.05 Accuracy of DIF < 0.1 Performance of Query Accuracy We present the query accuracy of our MCQI approach as well as all the compared methods in this part. Table 2 shows the MAP scores for 30NN queries. As shown in the table, the accuracies of DNN-based methods like DADN and CCL are higher than those of traditional methods on average. Owing to the fusion of multi-grained semantic features and transfer learning embeddings, the MCQI approach steadily achieves the best query accuracy. The number of data categories in Synthetic9K is larger than in the other datasets, and learning common semantic embeddings is comparatively more dependent on the quantity of training data.
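The acceptance decision above follows a standard one-sided, one-sample t-test. A generic sketch (the sample values below are synthetic placeholders, not the paper's DIF measurements, and the function name is our own):

```python
import math

def one_sided_t_statistic(sample, mu0):
    """Return t = (mean - mu0) / (s / sqrt(n)) for a one-sample t-test.
    With the population standard deviation unknown, s is the sample
    standard deviation; t is then compared against a Student-t critical
    quantile (about -1.28 for significance level 0.1 and large n, as in
    the text above)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Synthetic example: small DIF-like measurements tested against mu0 = 0.05.
t = one_sided_t_statistic([0.01, 0.02, 0.03, 0.02, 0.02], 0.05)
```

Here `t` is roughly -9.49, well below the -1.28 quantile, so the deviation from 0.05 is significant for this synthetic sample; the decision rule itself is applied exactly as described in the text.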
So, under the same conditions, the accuracy is affected accordingly. Table 2 MAP scores of MCQI and compared methods for 30NN query Performance of Query Time As shown in Fig. 4, we measure the query time of our proposed MCQI approach as well as two representative methods on 5 datasets: CCL is a DNN-based method and DCMH is a hash-based method. For CCL, pairwise computation is needed to obtain the kNN result. For DCMH, data can be transformed into binary codes and it is fast to obtain the 1NN, but for kNN with varying k the query time is affected. Intuitively, query times are proportional to the size of the datasets. As CCL and DCMH are not very sensitive to the k of kNN queries, we show the query time of only 30NN queries on each dataset. From 30NN queries to 5NN queries, the filtering effect of the M-tree index strengthens, and consequently query times decrease. In all cases, MCQI is the fastest among the methods. In particular, for 5NN the average running times of MCQI are about 13 times faster than those of CCL and 20 times faster than those of DCMH, i.e., our approach on average outperforms CCL and DCMH by an order of magnitude. Performance of Distributed Algorithm To show the scalability of our framework, Fig. 5 presents the running times of the methods with varying k of the kNN query on the NUS-WIDE and MS-COCO datasets, respectively. In terms of running time, MCQI is nearly three times as fast as DistMP, which in turn is one order of magnitude faster than SimD. SimD, as a pairwise method, causes enormous communication cost in a distributed environment, while DistMP, which utilizes the metric distance to filter unrelated data, can save computation cost. However, for DistMP the lack of an efficient index leads to worse query performance than MCQI. In essence, MCQI is composed of two rounds of NN query groups, and it is easy to see that MCQI is significantly better than SimD and DistMP. Effect of distributed query on NUS-WIDE.
Effect of distributed query on MS-COCO Query Interpretability Figure 6 shows some examples of cross-modal similarity query results. MCQI not only contains the latent semantic common embedding of the two types, but also holds explicit alignment information. As shown in Fig. 6, for kNN queries MCQI can return similar objects from the datasets and, furthermore, give a reason why those objects are semantically related, which is very important for serious applications. Examples of processing cross-modal similarity queries by MCQI In this paper, we proposed a novel framework for Multi-grained Cross-modal Similarity Query with Interpretability (MCQI) that effectively leverages coarse-grained and fine-grained semantic information to achieve effective interpretable cross-modal queries. MCQI integrates deep neural network embedding and a high-dimensional query index, and also introduces an efficient kNN similarity query algorithm with theoretical support. Experimental results on widely used datasets demonstrate the effectiveness of MCQI. In future work, we will study reinforcement-learning-based cross-modal query approaches to reduce the dependence on large training data in a given domain. Open data are used in this work and are publicly available (references are provided in the paper). Peng Y, Huang X, Zhao Y (2018) An overview of cross-media retrieval: concepts, methodologies, benchmarks and challenges. IEEE Trans Circuits Syst Video Technol 28(9):2372–2385 He X, Peng Y, Xi L (2019) A new benchmark and approach for fine-grained cross-media retrieval. In: 27th ACM international conference on multimedia, ACM. pp 1740–1748 Rasiwasia N, Pereira J, Coviello E et al (2010) A new approach to cross-modal multimedia retrieval. In: 18th international conference on multimedia, ACM. pp 251–260 Zhai X, Peng Y, Xiao J (2014) Learning cross-media joint representation with sparse and semisupervised regularization.
IEEE Trans Circuits Syst Video Technol 24(6):965–978 Peng Y, Zhai X, Zhao Y, Huang X (2016) Semi-supervised cross-media feature learning with unified patch graph regularization. IEEE Trans Circuits Syst Video Technol 26(3):583–596 Yan F, Mikolajczyk K (2015) Deep correlation for matching images and text. In: IEEE conference on computer vision and pattern recognition, IEEE. pp 3441–3450 He L, Xu X, Lu H et al (2017) Unsupervised cross-modal retrieval through adversarial learning. In: IEEE international conference on multimedia and expo, IEEE. pp 1153–1158 Chi J, Peng Y (2020) Zero-shot cross-media embedding learning with dual adversarial distribution network. IEEE Trans Circuits Syst Video Technol 30(4):1173–1187 Andrej K, Armand J, Li F (2014) Deep fragment embeddings for bidirectional image sentence mapping. In: 27th international conference on neural information processing systems, ACM. pp 1889–1897 Andrej K, Li F (2017) Deep Visual-Semantic Alignments for Generating Image Descriptions. IEEE Trans Pattern Anal Mach Intell 39(4):664–676 Xu K, Ba J, Kiros R et al (2015) Show, attend and tell: neural image caption generation with visual attention. In: 2015 international conference on machine learning, IEEE. pp 2048–2057 Wang X, Wang Y, Wan W (2018) Watch, listen and describe: globally and locally aligned cross-modal attentions for video captioning. In: Proceedings of 2018 conference of the North American chapter of the association for computational linguistics, ACL. pp 795–801 Jiang Q, Li W (2017) Deep cross-modal hashing. In: 2017 IEEE conference on computer vision and pattern recognition, IEEE. pp 3270–3278 Cao Y, Long M, Wang J et al (2016) Correlation autoencoder hashing for supervised cross-modal search. In: international conference on multimedia retrieval, ACM. pp 197–204 Cao Y, Long M, Wang J (2017) Correlation hashing network for efficient cross-modal retrieval. In: 28th British machine vision conference, BMVA. 
pp 1–12 Yang E, Deng C, Liu W et al (2017) Pairwise relationship guided deep hashing for cross-modal retrieval. In: 31st conference on artificial intelligence, AAAI. pp 1618–1625 Zhang J, Peng Y, Yuan M et al (2018) Unsupervised generative adversarial cross-modal hashing. In: 32nd conference on artificial intelligence, AAAI. pp 539–546 Yang K, Ding X, Zhang Y et al (2019) Distributed similarity queries in metric spaces. Data Science and Engineering 4(4):1–16 Batko M (2004) Distributed and scalable similarity searching in metric spaces. In: 9th EDBT, ACM. pp 44–153 Novak D, Batko M, Zezula P (2011) Metric index: an efficient and scalable solution for precise and approximate similarity search. Inf Syst 36(4):721–733 Wang J, Wu S, Gao H et al (2010) Indexing multi-dimensional data in a cloud system. In: SIGMOD, ACM. pp 591–602 Wu S, Jiang D, Ooi B, Wu K (2010) Efficient B-tree based indexing for cloud data processing. In: 36th VLDB, ACM. pp 1207–1218 Tanin E, Harwood A, Samet H (2007) Using a distributed quadtree index in peer-to-peer networks. VLDB J 16(2):165–178 Bennani-Smires K, Musat C, Hossmann A et al (2018) Simple unsupervised keyphrase extraction using sentence embeddings. In: conference on computational natural language learning, ACL. pp 221–229 Shen Y, He X, Gao J et al (2014) A latent semantic model with convolutional-pooling structure for information retrieval. In: conference on information and knowledge management, ACM. pp 101–110 Cheng B, Wei Y, Shi H et al (2018) Revisiting RCNN: on awakening the classification power of faster RCNN. In: European conference on computer vision, Springer. pp 473–490 Cer D, Yang Y, Kong S et al (2018) Universal sentence encoder. arXiv: Computation and Language. https://arxiv.org/abs/1803.11175v2. Accessed 12 April 2018 Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: 13th international conference on artificial intelligence and statistics, JMLR.
pp 249–256 Zhu M, Xu L, Shen D et al (2018) Methods for similarity query on uncertain data with cosine similarity constraints. Journal of Frontiers of Computer Science and Technology 12(1):49–64 Hodosh M, Young P, Hockenmaier J (2013) Framing image description as a ranking task: data, models and evaluation metrics. Journal of Artificial Intelligence Research 47(1):853–899 Young P, Lai A, Hodosh M et al (2014) From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 7(2):67–78 Chua T, Tan J, Hong R et al (2009) NUS-WIDE: a real-world web image database from national university of Singapore. In: 8th conference on image and video retrieval, ACM. pp 1–9 Lin T, Maire M, Belongie S (2014) Microsoft coco: Common objects in context. In: 13th European conference on Computer Vision (ECCV), Springer. pp 740–755 Peng Y, Qi J, Huang X et al (2018) CCL: Cross-modal correlation learning with multigrained fusion by hierarchical network. IEEE Trans Multimedia 20(2):405–420 Chen T, Wu W, Gao Y et al (2018) Fine-grained representation learning and recognition by exploiting hierarchical semantic embedding. In: 26th ACM multimedia, ACM. pp 2023–2031 Lee K, Chen X, Hua G et al (2018) Stacked cross attention for image-text matching. In: European conference on computer vision, Springer. pp 212–228 Kang C, Xiang S, Liao S et al (2015) Learning Consistent Feature Representation for Cross-Modal Multimedia Retrieval. IEEE Trans Multimedia 17(3):370–381 Hardoon D, Szedmak S, Shawetaylor J et al (2004) Canonical correlation analysis: An overview with application to learning methods. Neural Comput 16(12):2639–2664 Akdogan A, Demiryurek U, Kashani FB et al (2010) Voronoi-based geospatial query processing with mapreduce. In: 2nd international conference of cloud Computing(CloudCom), IEEE. 
pp 9–16 Abadi M, Barham P, Chen J et al (2016) TensorFlow: a system for large-scale machine learning. In: 12th USENIX conference on operating systems design and implementation, ACM. pp 265–283 We would like to thank our selfless friends and the professional reviewers for all their insightful advice. The preliminary version of this article was published in APWeb-WAIM 2020 [https://doi.org/10.1007/978-3-030-60290-1_26]. This work is supported by the National Natural Science Foundation of China (61802116, 62072157), the Training Plan of Young Backbone Teachers in Universities of Henan Province (2020GGJS263), the Natural Science Foundation of Henan Province (202300410102), the Science and Technology Plan of Henan Province (192102210113, 192102210248) and the Key Scientific Research Project of Henan Universities (19B520005). School of Computer Science & Technology, Henan Institute of Technology, Xinxiang, 453703, China Mingdong Zhu, Lixin Xu & Xianfang Wang School of Computer Science & Engineering, Northeastern University, Shenyang, 110819, China Derong Shen Mingdong Zhu Lixin Xu Xianfang Wang Mingdong Zhu is responsible for providing the idea and designing the model and experimental methods. Derong Shen gives overall guidance. Lixin Xu implements the experiments, and Xianfang Wang gives guidance on the theory proofs. Correspondence to Mingdong Zhu. All authors consent to participate in this work. All authors consent to publish the paper. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 19 kb) Zhu, M., Shen, D., Xu, L. et al. Scalable Multi-grained Cross-modal Similarity Query with Interpretability. Data Sci. Eng. 6, 280–293 (2021). https://doi.org/10.1007/s41019-021-00162-4 Revised: 03 April 2021 Issue Date: September 2021 Keywords: Cross-modal · Interpretability · Multi-grained · Similarity query · Scalability
Keyword Analysis & Research: si unit of angular momentum in joule

Keyword Research: People who searched "si unit of angular momentum in joule" also searched: si unit of angular momentum; si units of angular momentum; the SI units of angular momentum are; unit of angular momentum; what is the unit of angular momentum; units of angular momentum; what are the units of angular momentum; units for angular momentum physics; angular momentum formula units.

How do you find angular momentum in physics? p = m*v. With a bit of a simplification, angular momentum (L) is defined as the distance of the object from a rotation axis multiplied by the linear momentum: L = r*p, or L = mvr.

What is the dimensional formula of angular momentum? The dimensional formula of angular momentum is M¹L²T⁻¹, where M = mass, L = length and T = time. Derivation: angular momentum = angular velocity × moment of inertia.

What is the SI unit of angular velocity? The standard SI unit for angular velocity is radians per second (although that ought to be viewed as the angular speed, as there is a vector element too, in the form of the axis of rotation). Why were radians per second chosen? The second is fairly obvious: it's the SI unit for time.
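As a quick numeric check of the formulas above, L = mvr for a point mass agrees with L = Iω, since I = mr² and ω = v/r (the values below are arbitrary illustrative numbers):

```python
# Point mass on a circular path: compare L = m*v*r with L = I*omega.
m, v, r = 2.0, 3.0, 0.5        # mass in kg, speed in m/s, radius in m
L_linear = m * v * r           # L = r * p with p = m*v  ->  kg*m^2/s
I = m * r ** 2                 # moment of inertia of a point mass
omega = v / r                  # angular velocity in rad/s
L_angular = I * omega          # L = I * omega
assert abs(L_linear - L_angular) < 1e-12   # both routes give the same L
```

Both expressions give 3.0 kg·m²/s here, consistent with the SI unit kg·m²·s⁻¹ (equivalently, J·s) discussed on this page.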
Search Results related to si unit of angular momentum in joule on Search Engine

Joule-second - Wikipedia (https://en.wikipedia.org/wiki/Joule-second): The joule-second (symbol J⋅s or J s) is the unit of action and of angular momentum in the International System of Units (SI), equal to the product of an SI derived unit, the joule (J), and an SI base unit, the second (s). The joule-second also appears in quantum mechanics within the definition of Planck's constant. Angular momentum is the product of an object's moment of inertia, in units of kg⋅m², and its angular velocity in radians per second. Symbol: J s

Angular momentum | Definition, Examples, Unit, & Facts - Britannica (https://www.britannica.com/science/angular-momentum): Jul 20, 1998 · Appropriate MKS or SI units for angular momentum are kilogram metres squared per second (kg⋅m²/sec). For a given object or …

Angular momentum - Wikipedia (https://en.wikipedia.org/wiki/Angular_momentum): Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape. Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it. Common symbols: L. Derivations from other quantities: L = Iω = r × p. Conserved: yes. In SI base units: kg⋅m²⋅s⁻¹

Angular Momentum - Definition, Units, Formula - BYJU'S (https://byjus.com/physics/angular-momentum/): Angular momentum can be experienced by an object in two situations. One is a point object: an object accelerating around a fixed point, for example Earth revolving around the Sun, where r is the radius (the distance between the object and the fixed point about which it revolves) …

Planck's Constant | Definition, Units, Symbol, & Facts - Britannica (https://www.britannica.com/science/Plancks-constant)

11.2 Angular Momentum - University Physics Volume 1, OpenStax (https://openstax.org/books/university-physics-volume-1/pages/11-2-angular-momentum): We have investigated the angular momentum of a single particle, which we generalized to a system of particles. Now we can use the principles discussed in the previous section to …

Essentials of the SI: Base & derived units - NIST (https://physics.nist.gov/cuu/Units/units.html): SI derived units. Other quantities, called derived quantities, are defined in terms of the seven base quantities via a system of quantity equations. The SI derived units for these …

The Planck constant $\hbar$, the angular momentum, and the action - Physics Stack Exchange (https://physics.stackexchange.com/questions/28957/the-planck-constant-hbar-the-angular-momentum-and-the-action): The (orbital) angular momentum is defined as $\vec r \times P$; the commutator of $x,p$ is $xp-px=i\hbar$, which has the units of position times momentum.

Further results: https://www.vedantu.com/physics/angular-momentum · https://www.vedantu.com/formula/dimensional-formula-of-angular-momentum · https://www.quora.com/What-is-the-SI-unit-of-angular-velocity-1
Morozov, Andrei Alekseevich

Statistics Math-Net.Ru: total publications: 9; scientific articles: 9; presentations: 7
http://www.mathnet.ru/eng/person58643
List of publications on Google Scholar
List of publications on ZentralBlatt

Publications in Math-Net.Ru
1. N. Kolganov, An. Morozov, "Quantum $\mathcal{R}$-matrices as universal qubit gates", Pis'ma v Zh. Èksper. Teoret. Fiz., 111:9 (2020), 623–624
2. L. V. Bishler, Saswati Dhara, T. Grigor'ev, A. D. Mironov, A. Yu. Morozov, An. A. Morozov, P. Ramadevi, Vivek Kumar Singh, A. V. Sleptsov, "Differences of mutant knot invariants and their differential expansion", Pis'ma v Zh. Èksper. Teoret. Fiz., 111:9 (2020), 591–596
3. A. Yu. Morozov, A. A. Morozov, A. V. Popolitov, "Matrix model and dimensions at hypercube vertices", TMF, 192:1 (2017), 115–163; Theoret. and Math. Phys., 192:1 (2017), 1039–1079
4. A. Mironov, A. Morozov, An. Morozov, A. Sleptsov, "Quantum Racah matrices and 3-strand braids in irreps $R$ with $|R|=4$", Pis'ma v Zh. Èksper. Teoret. Fiz., 104:1 (2016), 52–57; JETP Letters, 104:1 (2016), 56–61
5. A. Aleksandrov, A. D. Mironov, A. Morozov, A. A. Morozov, "Towards matrix model representation of HOMFLY polynomials", Pis'ma v Zh. Èksper. Teoret. Fiz., 100:4 (2014), 297–304; JETP Letters, 100:4 (2014), 271–278
6. A. S. Anokhina, A. A. Morozov, "Cabling procedure for the colored HOMFLY polynomials", TMF, 178:1 (2014), 3–68; Theoret. and Math. Phys., 178:1 (2014), 1–58
7. A. D. Mironov, S. A. Mironov, A. Yu. Morozov, A. A. Morozov, "Calculations in conformal theory needed for verifying the Alday–Gaiotto–Tachikawa hypothesis", TMF, 165:3 (2010), 503–542; Theoret. and Math. Phys., 165:3 (2010), 1662–1698
8. V. Alba, A. A. Morozov, "Non-conformal limit of AGT relation from the 1-point torus conformal block", Pis'ma v Zh. Èksper. Teoret. Fiz., 90:11 (2009), 803–807; JETP Letters, 90:11 (2009), 708–712
9. A. A. Morozov, "Simplifying Experiments with Mandelbrot Set Model of Phase Transition Theory", Pis'ma v Zh. Èksper. Teoret. Fiz., 86:11 (2007), 856–859; JETP Letters, 86:11 (2007), 745–748

Presentations in Math-Net.Ru
1. Eigenvalue hypothesis and 6j-symbol symmetries, A. A. Morozov, 3rd French Russian Conference on Random Geometry and Physics: Sachdev–Ye–Kitaev Model and Related Topics, June 7, 2019, 10:00
2. Problems of entanglement and knot theory (continued), Seminar of the Department of Mathematical Physics, Steklov Mathematical Institute of RAS
3. Entanglement in knot theory, Seminar of the Department of Theoretical Physics, Steklov Mathematical Institute of RAS
4. Problems of entanglement and knot theory
5. Knot polynomial and Chern-Simons theory: modern status, International conference "Modern Mathematical Physics. Vladimirov-95"
6. Chern-Simons theory and knot theory, Seminar of Laboratory 5 IITP RAS "Integrable structures in statistical and field models"
7. Computations in knot theory, Seminar of the Department of Geometry and Topology "Geometry, Topology and Mathematical Physics", Steklov Mathematical Institute of RAS, March 5, 2014, 18:30

National Engineering Physics Institute "MEPhI", Moscow
Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow
Laboratory of Quantum Topology, Chelyabinsk State University
Chelyabinsk State University
Faculty of Physics, Lomonosov Moscow State University
State Scientific Center of the Russian Federation - Institute for Theoretical and Experimental Physics, Moscow
Ballester-Bolinches, Adolfo

http://www.mathnet.ru/eng/person21609
https://mathscinet.ams.org/mathscinet/MRAuthorID/263725

1. O. D. Artemovych, A. Ballester-Bolinches, M. R. Dixon, F. de Giovanni, R. I. Grigorchuk, V. V. Kirichenko, L. A. Kurdachenko, V. S. Monakhov, J. Otal, M. O. Perestyuk, A. P. Petravchuk, M. V. Pratsiovytyi, A. M. Samoilenko, A. N. Skiba, I. Ya. Subbotin, E. I. Zelmanov, A. V. Zhuchok, Yu. V. Zhuchok, "Nicolai N. Semko (dedicated to 60-th Birthday)", Algebra Discrete Math., 24:1 (2017), C–F
2. A. Ballester-Bolinches, S. F. Kamornikov, O. L. Shemetkova, X. Yi, "On subgroups of finite groups with a cover and avoidance property", Sib. Èlektron. Mat. Izv., 13 (2016), 950–954
3. A. Ballester-Bolinches, J. C. Beidleman, A. D. Feldman, H. Heineken, M. F. Ragland, "$S$-Embedded subgroups in finite groups", Algebra Discrete Math., 13:2 (2012), 139–146
4. A. Ballester-Bolinches, Luis M. Ezquerro, Alexander N. Skiba, "On subgroups which cover or avoid chief factors of a finite group", Algebra Discrete Math., 2009, 4, 18–28
5. A. Ballester-Bolinches, John Cossey, R. Esteban-Romero, "A characterization via graphs of the soluble groups in which permutability is transitive", Algebra Discrete Math., 2009, 4, 10–17
6. A. Ballester-Bolinches, J. C. Beidleman, H. Heineken, M. C. Pedraza-Aguilera, "A survey on pairwise mutually permutable products of finite groups", Algebra Discrete Math., 2009, 4, 1–9
7. A. Ballester-Bolinches, C. Calvo, "Factorizations of one-generated $\mathfrak X$-local formations", Sibirsk. Mat. Zh., 50:3 (2009), 489–502; Siberian Math. J., 50:3 (2009), 385–394
8. M. Asaad, A. Ballester-Bolinches, J. C. Beidleman, R. Esteban-Romero, "Transitivity of Sylow permutability, the converse of Lagrange's theorem, and mutually permutable products", Tr. Inst. Mat., 16:1 (2008), 4–8
9. A. Ballester-Bolinches, C. Calvo, L. A. Shemetkov, "On partially saturated formations of finite groups", Mat. Sb., 198:6 (2007), 3–24; Sb. Math., 198:6 (2007), 757–775
10. A. Ballester-Bolinches, L. A. Shemetkov, "On normalizers of Sylow subgroups in finite groups", Sibirsk. Mat. Zh., 40:1 (1999), 3–5; Siberian Math. J., 40:1 (1999), 1–2
11. A. Ballester-Bolinches, "On saturated formations, theta-pairs and completions in finite groups", Sibirsk. Mat. Zh., 37:2 (1996), 243–250; Siberian Math. J., 37:2 (1996), 207–212
12. A. Ballester-Bolinches, P. Jimenez-Seral, L. M. Ezquerro, "A question on saturated formations of finite groups", Sibirsk. Mat. Zh., 36:6 (1995), 1225–1233; Siberian Math. J., 36:6 (1995), 1058–1064
13. A. Ballester-Bolinches, N. N. Bilotskii, M. R. Dixon, R. I. Grigorchuk, V. V. Kirichenko, L. A. Kurdachenko, N. F. Kuzennyj, J. Otal, N. N. Semko, P. Shumyatsky, V. I. Sushchansky, E. I. Zel'manov, "Igor Ya. Subbotin: to the 60th birthday", Algebra Discrete Math., 9:1 (2010), E–H
14. A. Ballester-Bolinches, R. I. Grigorchuk, M. R. Dixon, Yu. A. Drozd, V. V. Kirichenko, J. Otal, M. A. Perestyuk, A. P. Petravchuk, N. V. Polyakov, A. M. Samoilenko, N. N. Semko, V. V. Sharko, L. A. Shemetkov, A. N. Skiba, I. Ya. Subbotin, V. I. Sushchansky, E. I. Zel'manov, "Leonid A. Kurdachenko: to the 60th birthday", Algebra Discrete Math., 2009, 4, E–H
Volume 19 Supplement 7 12th and 13th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB 2015/16) A study on multi-omic oscillations in Escherichia coli metabolic networks Francesco Bardozzo1, Pietro Lió2 & Roberto Tagliaferri1 Two important challenges in the analysis of molecular biology information are data (multi-omic information) integration and the detection of patterns across large scale molecular networks and sequences. They are actually coupled, because the integration of omic information may provide better means to detect multi-omic patterns that could reveal multi-scale or emerging properties at the phenotype level. Here we address the problem of integrating various types of molecular information (a large collection of gene expression and sequence data, codon usage and protein abundances) to analyse the E.coli metabolic response to treatments at the whole network level. Our algorithm, MORA (Multi-omic Relations Adjacency), is able to detect patterns which may represent metabolic network motifs at pathway and supra-pathway levels which could hint at some functional role. We provide a description of and insights into the algorithm by testing it on a large database of responses to antibiotics. Along with the algorithm MORA, a novel model for the analysis of oscillating multi-omics is proposed. Interestingly, the resulting analysis suggests that some motifs reveal recurring oscillating or position variation patterns on multi-omic metabolic networks. Our framework, implemented in R, provides effective and friendly means to design intervention scenarios on real data. By analysing how multi-omic data build up multi-scale phenotypes, the software allows one to compare and test metabolic models, design new pathways or redesign existing metabolic pathways, and validate in silico metabolic models using nearby species.
The integration of multi-omic data reveals that E.coli multi-omic metabolic networks contain position dependent and recurring patterns which could provide clues of long range correlations in the bacterial genome. In the last decades, the study of the E.coli treatment tolerance metabolic response through multi-omics has emerged as an essential part of several approaches to molecular biology, metabolic engineering and medicine [1]. Nowadays promising models on multi-omics are based on statistical methodologies [2] and, recently, on multiplex approaches [3, 4]. High-throughput omics technologies [5] enrich complex relational data structures (i.e., XML documents [6], complex networks or maps of multi-view omics [7]) and provide increasing elements for multi-omic integration at different layers of quantitative and relational information. In several works, the bacterial metabolic response upon perturbations is modelled through the multi-omic dynamic changes on metabolic, signalling and regulatory networks [8]. The multi-omic analysis then leads to several engineering and optimization approaches [9, 10] that reveal hidden biological motifs and pattern regularities [11, 12]. The integration of single omics, even if not biologically comparable (e.g. codon usage and protein abundance), can increase the total information about the system [13]. Ishii and Tomita [14] describe in depth the concept of multi-omic spaces as a powerful data-driven approach to understanding biological processes and systems. The information elicited from specific multi-omic spaces is multi-layered and phenotypically responsive [15]. In conclusion, identifiable multi-omic motifs could reveal the dynamical behavior of the total cellular system in standard conditions and after perturbations. Multi-omic metabolic network motifs are short recurring patterns that are presumed to have a biological function. Often they indicate sequence-specific parts of pathways with associated oscillating multi-omics.
In this work, multi-omic metabolic network motifs [16] are identified and their recurring oscillating multi-omic patterns are analysed. Oscillations are defined in a binary or, at least, discrete space of features. We represent the oscillating multi-omics in two different ways: (i) as linked nodes with opposite characteristics on networks (refer to the blue-red nodes of Fig. 1b (1)); (ii) as a sequence of high-low adjacent values (refer to the blue-red cylinders of Fig. 1b (3)). Then, the oscillating multi-omic features on sequences and networks are linked into a multi-layer relational structure (MLS), strengthening the relations between the sequence patterns and the network motifs. The E.coli multi-omic space is represented in Figure 1a: different layers represent different omics; the genomic layer (blue rectangles) presents binary discretized omic values, and the same holds for the other layers. Multi-omics in steady state conditions can be perturbed by induced treatments, thus increasing the number of layers for each multi-omic space; the number of perturbed layers then depends on the number of experiments considered. Recurring multi-omic pattern motifs related to pathways are represented with the multi-layer structure (MLS), as shown in Figure 1b. Figure 1b part (1) represents the j-th pathway in relation to its associated set of genes (Figure 1b part (2)); the resultant multi-omic pattern is shown in Figure 1b part (3). The recurring multi-omic pattern is an array of the multi-omic values of pathway gene products arranged by gene order. Multi-omics on the patterns are oscillating; in other words, low values follow high values and vice versa. This feature is deeply related to the gene positions, as shown in Figure 1c parts (1) and (2). Oscillating multi-omics are present in succession along the pattern, as shown in a1 and a2 of Figure 1d part (1).
The patterns lose their oscillating features if two adjacent multi-omic values do not oscillate, either partially (a1 of Fig. 1d part (2)) or completely (a1 and a2 of Fig. 1d part (3)). Multi-omic data integration is well documented in several works [17, 18]. In our paper we adopt noise-robust techniques on up-to-date data (Additional file 1: Section S5). We will show that oscillating multi-omics are found on E.coli metabolic networks as motifs and on sequences as patterns. For this reason, we introduce the MLS, on which an ad hoc algorithm (MORA - multi-omic relational adjacency) finds the reciprocal influences of the neighbouring multi-omics on sequences and projected on networks. Oscillating multi-omics and their variations are helpful in analysing the impact of new drugs and in metabolic engineering applications. Moreover, this work contributes to the study and creation of new metabolic circuits based on multi-omic structural relations. Furthermore, the MORA reciprocal influences can be seen as an index of the topological interplay between the gene order (considered in our sequences) and the pathways. Gene order along sequences and pathway structures are evolutionarily conserved, so this index could be useful in evolutionary comparisons between organisms. Oscillating multi-omic motifs and patterns, coupled with the MORA reciprocal influences, describe system homeostasis processes and their regulatory functions in a new fashion, unveiling the extent of multi-scale oscillating multi-omics and their network plasticity [19]. The subject organism of this study is E.coli K-12 MG1655 [20]. In the "Definition of MLSs" section, MLSs are described in detail. The global impact of antibiotics on the whole network and the local impact on pathways have been taken into account on these structures.
Therefore, multi-omic feature scaling and normalization are applied twice (please refer to the "Binary discretization of multi-omics taking into account the global and the local effect" section for more detail). Multi-omics are discretized into a binary field in order to be analysed. Through the MLS it is possible to outline the relations (or reciprocal influences) of oscillating multi-omics across sequences and small networks (pathways). For this latter purpose, in the "An algorithm to discover multi-omics relational adjacency (MORA)" section, the MORA algorithm is introduced. Reciprocal influences alone are not informative enough to understand whether the oscillating multi-omics are actual motifs of the bacterial system. Therefore, two models are introduced to represent the extent of oscillating multi-omic motifs/patterns on sequences ("Oscillating multi-omics on patterns" section) and on pathways ("Oscillating multi-omics on networks" section). A detailed description of data sources, procedures and data-integration methods is provided in the "Data sources and multi-omics data integration" section and in the Additional file 1: Section S5. To guide the reader, a block diagram of the overall procedure is shown in Fig. 2. We have concentrated our attention on the E.coli organism, but it is possible to extend the methodologies to other bacterial organisms. One of the most important preconditions is the availability of data: (i) the whole genome, (ii) the protein abundance, (iii) operon and protein complex information and (iv) the whole metabolic network. The most relevant bottlenecks in the preliminary data integration processes come from the availability of protein abundance and operon information. In PaxDB (the protein abundance database) [21] the data coverage for (i) E.coli is 98%, (ii) H.pylori is 98%, (iii) B. henselae is 85% and (iv) S.enterica is 59%. Other proteobacteria have a data coverage lower than 47%.
Moreover, the operon information (DOOR DB [22]) is abundant only for E.coli (152,785 entries), whereas for the other organisms listed in PaxDB it is less than half as large or not numerically relevant at all. Nevertheless, the main lack of information, except for E.coli, concerns the reconstruction of the whole metabolic network. In particular, this information is important in two steps of the procedure that we will present: (1) in estimating a parameter (the average path length) of the MORA algorithm, (2) in the computation of the pathways with extensions and/or operon compressions. In Fig. 2, the whole procedure is described as a block diagram. Gray blocks represent the extraction of multi-omic values and structures. Then, the global and the local effects (blue blocks) are computed. The local effect depends on the type of multi-layered structure (network + sequence) (violet block). Once the multi-omic effects are computed and normalized, these values are discretized into 0/1. After that, the oscillation measures are computed in the respective structures (networks and sequences). The generated multi-omic patterns (from sequences) and motifs (from networks) are given as input to the MORA algorithm for the computation of their reciprocal influences. This procedure is carried out in standard conditions and after perturbations, obtaining combined and competing patterns/motifs. Most of the time, incomplete metabolic networks (i.e. those obtained by merging only the KEGG pathways) do not present the properties of complex networks, such as the power-law degree distribution, the small-world property, the average path length, etc. Moreover, the power-law degree parameter α is important to assess whether a network is a biological one or not (see the duplication model of Chung et al. [23]). For this reason, as described in Additional file 1: Section S5, our integrated network is deeply studied under its biological aspects.
In the domain of bacteria, to the best of our knowledge, there is no other E.coli protein-centric network more complete than this one. For this reason, it is made available in the annexed repository. Definition of MLSs In an integrated multi-omic space, such as that in Fig. 1a, each omic layer is arranged on the basis of its data-structure relations. In the genomic layer, the genes, arranged along the double strand at specific positions, are mapped to a single linear sequence. The gene order is considered as an organism-specific order relation (≤) and is highly conserved in duplication and during the translational processes [24]. As shown in Fig. 1b(2), we describe a gene relationship of the type g1≤g2≤⋯≤gn for each gene gi. The order induced on the multi-omic sequence reproduces the gene order relation on the double strand. In particular, multi-omics are said to be adjacent with respect to the gene pairs when they are in positions gi and gi+1 ∀i∈[1…n]. As shown in Fig. 1c(1)(2), g4,g5,g6 may lie on the same strand or on both. As said previously, the reciprocal gene positions on the double strand are merged and represented on a linear sequence. Thus, when the gene order changes in one of the two strands, the multi-omic pattern (Fig. 1b(3)) changes the arrangement of its values. Indeed, in Fig. 1c(1) and (2) the fragment of the pattern changes because of the swap of g5 and g6. In an integrative approach, some specific data-structure relations can involve more than one omic layer. For example, the proteomic and the metabolomic layers can be represented as a protein-centric network of reactions G(V,E), where the node set V contains proteins and the edge set E represents the enzymatic reactions. In the protein-centric network representation, as illustrated in Fig. 1b(1), the reversible reactions are depicted as double-arrow links (⇔) and the irreversible reactions are represented as single-arrow links (← or →) [25].
In this setting, two proteins pi,pj∈V(G) are said to have a strong relationship if they are linked by an edge e(pi,pj)∈E(G) or if they are the end points of an undirected shortest path that is not longer than the average path length (APL) [26]. The proteins in a strong relationship will be the subject of thorough analysis, as described in the "An algorithm to discover multi-omics relational adjacency (MORA)" section. In the literature, it has been proved that gene adjacency is conserved across prokaryotes with a relevant operon architecture [27]. In particular, it has been shown that the proteins encoded by conserved adjacent genes present interactions on the metabolomic layer [28]. In the presence of protein complexes, these interactions are physical, while, when dealing with anabolic and catabolic processes, they are functional. Hence, the genes are positioned on the DNA depending on their association with metabolic functions. In order to model the relationship between the gene order and the pathways, the concept of MLS is introduced. The MLS represents the pathways (Fig. 1b(1)) in combination with the gene order information (Fig. 1b(2)), studying the patterns on multi-omic sequences (Fig. 1b(3)). The abbreviation mov is used when we refer to a sequence of multi-omic values on the multi-omic sequences. The multi-omics related to each pathway are used to build a multi-omic subspace that represents the values of each MLS. Furthermore, the MLS represents the interactions gi⇔pi ∀i∈G between the elements in different omic layers of the same subspace (Fig. 1b). Additionally, operon compression (Fig. 3a) and path extension (Fig. 3b) are modifications of the MLS, introduced to identify relevant multi-omic pattern variations. The operon compression keeps the MLS gene order unaltered. In this case, the elements that belong to the same operon (Fig. 3a, e2-e3-e4) are compressed to the most frequent multi-omic value.
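The strong-relationship test above reduces to a bounded shortest-path query. A minimal pure-Python sketch follows; the graph, node names and APL value are illustrative toys, not taken from the E.coli network:

```python
from collections import deque

def shortest_path_length(adj, src, dst):
    """BFS shortest-path length in a graph given as an adjacency dict;
    returns None if dst is unreachable from src."""
    if src == dst:
        return 0
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def strong_relationship(adj, p_i, p_j, apl):
    """Two proteins are in a strong relationship if an edge links them or
    if their shortest path is no longer than the average path length."""
    d = shortest_path_length(adj, p_i, p_j)
    return d is not None and d <= apl

# Toy pathway: a p1-p2-p3-p4 chain, with an assumed APL of 2.
adj = {"p1": ["p2"], "p2": ["p1", "p3"], "p3": ["p2", "p4"], "p4": ["p3"]}
print(strong_relationship(adj, "p1", "p3", apl=2))  # path length 2 -> True
print(strong_relationship(adj, "p1", "p4", apl=2))  # path length 3 -> False
```

The undirected reading of the pathway is modelled here simply by listing each edge in both adjacency lists.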
Moreover, path extension is a multi-omic pattern modification accomplished in two steps: 1) adjacent non-oscillating elements on the pattern are labelled as end nodes of the pathway (i.e. in Fig. 3b the multi-omics in positions e2-e3 and e3-e4); 2) we search for the best oscillating shortest path that links the above end nodes. Multi-omic pattern operon compression is shown in Fig. 3a. The elements that belong to the same operon (e2-e3-e4) are merged to the most frequent multi-omic value: in this case the low one (blue-head cylinder). The path extension is shown in Fig. 3b. In this case, the MLS is modified by searching for an alternative path, on the global metabolic network, that links two nodes associated with two non-oscillating adjacent pattern multi-omics (i.e. the multi-omics in the positions e2-e3 and e3-e4). The multi-omic path, chosen from among all the alternative paths on the whole metabolic network, is the shortest path with the most oscillating multi-omics (i.e. in the path extension between e3-e4 the path p3-p5-p4 (violet dotted lines) is chosen, and not the path p3-p8-p4). As a result, the chosen multi-omic path is the shortest one among the most oscillating multi-omics. The alternation is measured by using the score defined in Eq. 7. Then, these nodes extend the pathway and insert new multi-omics into the multi-omic sequence, breaking in some cases the gene order (Fig. 3b). Moreover, detailed aspects of oscillating multi-omics are illustrated in the following paragraphs. Binary discretization of multi-omics taking into account the global and the local effect Protein abundance is a measure of the parts-per-million quantity of the proteins inside a cell, as provided, for example, in Wang et al. [29]. Its definition and the protein abundance variation used in this paper, respectively pa and pv, can be found in Additional file 1: Section S5 (5.0.3).
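Step 2) of the path extension (choosing, among the alternative shortest paths, the one whose binary values oscillate the most) can be sketched as follows. The toy network mirrors the p3-p5-p4 versus p3-p8-p4 choice of Fig. 3b, but the binary values attached to the nodes are invented for illustration:

```python
from collections import deque

def all_shortest_paths(adj, src, dst, max_len):
    """Enumerate all shortest simple paths from src to dst, capped at
    max_len edges, by BFS over partial paths."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) - 1 > best:
            break                         # BFS is level-ordered: done
        node = path[-1]
        if node == dst and len(path) > 1:
            best = len(path) - 1
            paths.append(path)
            continue
        if len(path) - 1 >= max_len:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in path:           # simple paths only
                queue.append(path + [nxt])
    return paths

def oscillation_score(values):
    """Fraction of adjacent pairs that alternate (0/1 flips)."""
    pairs = list(zip(values, values[1:]))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def best_oscillating_path(adj, omic, src, dst, max_len):
    """Among the shortest src-dst paths, pick the one whose binary
    multi-omic values oscillate the most (cf. Fig. 3b)."""
    candidates = all_shortest_paths(adj, src, dst, max_len)
    return max(candidates,
               key=lambda p: oscillation_score([omic[n] for n in p]))

# p3 reaches p4 either through p5 (values 1-0-1, fully oscillating)
# or through p8 (values 1-1-1, not oscillating): p5 wins.
adj = {"p3": ["p5", "p8"], "p5": ["p4"], "p8": ["p4"], "p4": []}
omic = {"p3": 1, "p5": 0, "p8": 1, "p4": 1}
print(best_oscillating_path(adj, omic, "p3", "p4", max_len=3))  # → ['p3', 'p5', 'p4']
```

Ties between equally oscillating shortest paths are broken arbitrarily by `max` here; the text does not specify a tie-breaking rule.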
Codon Adaptation Index (CAI) is an index of non-uniform codon usage defined by Sharp and Li [30] (a deeper discussion is in Additional file 1: Section S5 (5.0.1)). All these quantities are extracted, integrated and normalized with the purpose of identifying multi-omic patterns and their mov. The zero-mean unit-variance normalization is then applied twice: (i) the first time it is computed on the complete data set, i.e. considering the whole organism (see Fig. 4a(1)); (ii) the second time it is computed on the same omics but on a small sample filtered from the multi-omic space by a specific pathway of N elements (see also Fig. 4a(2)). Then, a vector of numbers called the local effect is obtained for each pathway: \({mov}_{1} = (\widetilde {e}_{1}, \widetilde {e}_{2}, \dots, \widetilde {e}_{N})\). The same elements of the multi-omic space are selected from the normalized complete data set, getting a vector of real numbers called the global effect: \({mov}_{2} = (\widehat {e}_{1}, \widehat {e}_{2}, \dots, \widehat {e}_{N})\) (see Fig. 4). Both vectors represent the same elements and have the same length N. The normalization of the omics is described in the following Eq. 1: $$ \overline{pv}_{j}= \frac{{pv}_{j} - \mu_{pv}}{\sqrt{\sigma^{2}_{pv}}},\\ \overline{pa}_{j}= \frac{{pa}_{j} - \mu_{pa}}{\sqrt{\sigma^{2}_{pa}}},\\ \overline{cai}_{j} = \frac{{CAI}_{j} - \mu_{CAI}}{\sqrt{\sigma^{2}_{CAI}}}, $$ Fig. 4a part (1): multi-omics are normalized considering the complete multi-omic space. Fig. 4a part (2): for each recurring multi-omic pattern, the multi-omics are normalized considering a small sample filtered from the multi-omic space by a specific pathway of N elements. Then, the global effect vector mov2 and the local effect vector mov1 are obtained: both vectors have the same length but different multi-omic normalized values. Fig. 4b part (1): the vectors of the global effect (pink) and the local effect (gray) are binary discretized.
Fig. 4b part (2): in order to consider the global response to treatments, the missing mov1 oscillations are substituted with the mov2 oscillations (if they are present). In this example the 4-th oscillation is FALSE (\(a_{1}^{local}\)) on the local vector and present (\(a_{1}^{global} = TRUE\)) on the global vector. Then, the local effect is updated with the information of the multi-omics that comes from the global effect. This procedure is carried out on steady-state multi-omic values and on those perturbed by treatments. In Eq. 1, μ is the mean and σ is the standard deviation. Then, the local effect (mov1) and the global effect (mov2) vectors, in steady state (see Eq. 2) and after the t-th treatment (see Eq. 3), are discretized into two classes: 0 and 1 (Fig. 4b(1), red-head and blue-head cylinders, respectively). Binary discretized multi-omics in steady-state conditions are obtained considering \(\overline {{pa}_{j}}\) and \( \overline {cai}\) ∀j∈1,...,N and i=1,2 as in the following Eq. 2: $$ \begin{aligned} {mov}_{i_{j}} = \left\{ \begin{array}{ll} 1 & if \ \ \frac{\overline{pa}_{j} \ + \ \overline{cai}_{j}}{2} \geq 0\\ \ 0 & if \ \ \frac{\overline{pa}_{j} \ + \ \overline{cai}_{j}}{2} < 0 \end{array}\right. \end{aligned} $$ Instead, the binary discretized multi-omics perturbed by treatments are obtained considering the protein variation for the t-th treatment \(\overline {pv}^{t}\) and \( \overline {cai}\) ∀j∈1..N and i=1,2 as in Eq. 3: $$ \begin{aligned} mov^{t}_{i_{j}} = \left\{ \begin{array}{ll} 1 & if \ \ \frac{\overline{pv}^{t}_{j} \ + \ \overline{cai}_{j}}{2} \geq 0\\ \ 0 & if \ \ \frac{\overline{pv}^{t}_{j} \ + \ \overline{cai}_{j}}{2} < 0. \end{array}\right. \end{aligned} $$ In some cases, the mov1 \(\left (\text {or}~mov^{t}_{1}\right)\) binary discretization is not sensitive enough to discover an alternation. Therefore, in the same positions it is possible to find oscillating multi-omics on mov2 (or \(mov^{t}_{2}\)) and not on mov1.
If this happens, the missing oscillating multi-omics on mov1 are substituted with the oscillating multi-omics of mov2. In this way, it is possible to combine in a binary field the information about the global response of the system and the local response of the pathway to antibiotics. Formally, as shown in Fig. 4b(2), the i-th local \(\left (a_{i}^{local}\right)\) and global \(\left (a_{i}^{global}\right)\) oscillations are taken in OR. The binary mov (or movt) resulting from the multi-omic pattern is projected on the associated pathway. Normalization processes are suited to dealing with the assumption of independent and dependent systematic biases [31]. Moreover, the scale on which data should be included in these processes (global versus local scale) has extensive application in high-throughput omics analysis [32]. In particular, local normalization has the advantage of correcting systematic stress-response bias within small groups of multi-omics. It then becomes possible to account for inconsistencies among the multi-omics once they are discretized in a binary field. The local variabilities in standard and perturbed measurement conditions can be more relevant than under global normalization, even though they can be affected by noise. This is also the reason why we adopted noise-robust data-integration techniques, as described in the "Data sources and multi-omics data integration" section. However, the local normalization process may overfit the data, reducing accuracy, especially if the multi-omics are integrated from genes that are not randomly spotted on the array [33] and are subject to local and global responses determined by the interaction of global and local regulatory mechanisms, such as the E.coli oxygen response [34]. For these reasons, it is more accurate to combine the information that comes from the global normalization with the local one.
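The z-score normalization of Eq. 1, the binary discretization of Eq. 2 and the OR-combination of local and global oscillation flags can be condensed into a short sketch; the abundance and CAI values below are invented for illustration:

```python
import statistics

def zscore(values):
    """Zero-mean, unit-variance normalization (Eq. 1), using the
    population standard deviation."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def discretize(pa_norm, cai_norm):
    """Binary discretization of Eq. 2: 1 if the mean of the two
    normalized omics is >= 0, else 0."""
    return [1 if (p + c) / 2 >= 0 else 0 for p, c in zip(pa_norm, cai_norm)]

def merge_local_global(osc_local, osc_global):
    """OR-combination of local and global oscillation flags (Fig. 4b(2))."""
    return [a or b for a, b in zip(osc_local, osc_global)]

# Hypothetical protein-abundance (ppm) and CAI values for a 4-gene pathway.
pa = [120.0, 15.0, 300.0, 40.0]
cai = [0.62, 0.31, 0.70, 0.45]
mov1 = discretize(zscore(pa), zscore(cai))
print(mov1)  # → [1, 0, 1, 0]
```

For the local effect the same functions are applied to the pathway subsample only; the global effect reuses the values normalized over the whole data set.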
An algorithm to discover multi-omics relational adjacency (MORA) MORA is a search algorithm that weights the multi-omics with respect to their positions on the MLS. Its purpose is to assign a high score to the multi-omics that have two properties at the same time: they are adjacent on the sequence and strictly connected on the pathway. The algorithm assigns a score equal to zero to the multi-omics that are adjacent on the sequence but unconnected on the pathway. In Fig. 5, the algorithm weights two adjacent multi-omics by evaluating their positions (i.e. j and j+1 on the pattern movi=[e1,e2,…,ej,ej+1,…en]). A high score is assigned when the multi-omics ej and ej+1 are directly connected on the pathway (green dotted lines) or through a shortest path (magenta dotted lines); otherwise a score of 0 is assigned. The multi-omic reciprocal adjacency is computed for all the adjacent couples c. The median value of the MORA weights is a summary index of the reciprocal influences between multi-omics and underlines the interplay among them on the MLS. Note that in a multi-omic sequence the propagation of influences of an element becomes gradually weaker as the distance from its neighbouring elements increases: the propagation of influences is limited to the metabolic network average path length (APL) (see Additional file 1: Section S5 (5.0.4)), decreasing gradually and in inverse proportion to the path length. If sequences and pathways do not present reciprocal influences (RI=0), the oscillating multi-omics lose their significance with respect to the pathways and vice versa. In these cases, MLS interactions do not have a real biological meaning. Two steps of the MORA algorithm. In the first step (part a), given the average path length (APL), MORA searches for the shortest paths between the two adjacent multi-omics ej and ej+1 of length ψ∈[1, ⌈APL⌉].
The green dotted line indicates paths of length ψ=1 and the magenta dotted lines represent paths of length ψ=2; MORA does not search for a path of length 3 (which would imply ψ=3) because we supposed that APL = 2. The algorithm then updates the weight vector infl and moves on to the next pattern positions, where it searches for the next adjacent multi-omics (ej+1 and eh=j+2). In the second step (part b) MORA evaluates the shortest paths for ψ=1 and 2. The array of weights infl is updated, as shown in steps i to i+3, according to the algorithm. MORA takes as input the following parameters: movi, which is the pattern of the i-th MLS, Gi, which is its pathway, and the APL. Let us define δ=2 as the distance on the sequence from ej to ej+1, ∀j∈[0,n−(δ−1)]. δ is fixed to 2 and identifies the adjacency of positions j and j+1 on the pattern (as shown in Fig. 5). Iteratively, each couple (ej,ej+1) is associated with a couple of nodes in the pathway with, for simplicity, the same indices: (pj,pj+1). Then, the algorithm searches in Gi for the shortest paths with end nodes nodefrom=pj and nodeto=pj+1. Also, we define ψ∈[1, ⌈APL⌉] as the path distance between pj and pj+1. For example, in Fig. 5, MORA finds a direct link between the pathway nodes (green dotted lines) and a path of length 2 (magenta dotted lines). We define the array of weights infl as an array of zeros with the same length as movi. When the algorithm finds the shortest paths on Gi from nodefrom to nodeto, it takes the positions z of the path nodes on movi and, in correspondence with these positions, it assigns a weight to the vector element inflz (e.g. the algorithm in Fig. 5a takes, for ψ=2, the positions z={j,j+1,h}). The weights are computed with the following formula: \( w = \frac {1}{\psi _{y} -1} \) for the y-th shortest path found. These weights are summed up in the vector infl as follows: infl[z]=infl[z]+w.
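A minimal re-implementation of this weighting loop is sketched below. The released code is in R (see the GitHub repository); here we use Python, and since w = 1/(ψ−1) is undefined for a direct edge (ψ=1) we read the direct-edge weight as w = 1, i.e. w = 1/max(ψ−1, 1) — this reading, like the toy graph, is an assumption of ours:

```python
from collections import deque
import math

def a_shortest_path(adj, src, dst, max_len):
    """One shortest path (node list) from src to dst within max_len
    edges, found with a plain BFS; returns None if there is none."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if len(path) - 1 > max_len:
            return None                   # BFS level order: nothing shorter left
        if path[-1] == dst and len(path) > 1:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def mora_influences(mov_nodes, adj, apl):
    """For each adjacent couple (e_j, e_j+1) on the pattern, find a
    shortest path of length psi <= ceil(APL) on the pathway and add
    w = 1/max(psi - 1, 1) to every pattern position the path touches."""
    infl = [0.0] * len(mov_nodes)
    pos = {node: k for k, node in enumerate(mov_nodes)}
    for j in range(len(mov_nodes) - 1):
        path = a_shortest_path(adj, mov_nodes[j], mov_nodes[j + 1],
                               math.ceil(apl))
        if path is None:
            continue                      # unconnected couple: weight stays 0
        psi = len(path) - 1               # path length in edges
        w = 1.0 / max(psi - 1, 1)
        for node in path:
            if node in pos:               # detour nodes off the pattern are skipped
                infl[pos[node]] += w
    return infl

# Toy MLS: pattern p1-p2-p3; p1->p2 is a direct edge (psi = 1), while
# p2 reaches p3 only through p5 (psi = 2), assuming APL = 2.
adj = {"p1": ["p2"], "p2": ["p5"], "p5": ["p3"], "p3": []}
print(mora_influences(["p1", "p2", "p3"], adj, apl=2))  # → [1.0, 2.0, 1.0]
```

The median of the resulting infl vector is the summary reciprocal-influence index described above.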
If there are c couples with δ=2 and ψ=2, then a perfect adjacency on movi and a direct edge on Gi are present, and w=1. The weight w becomes progressively smaller as the distance ψ between the end nodes increases. The maximum distance between the end nodes is limited by the upper bound, which is exactly ⌈APL⌉. The supplementary Section S6, Figure 4 (Additional file 1) describes the whole procedure of the algorithm with a toy example and a related figure. MORA has been tested against several extreme structures, proving its correctness. In particular, MORA is shown to be stable in the case of cliques (Additional file 1: Figure S4 part (b)), in snaked networks (Additional file 1: Figure S4 part (c)) and on complete networks (not shown). MORA serves as a yardstick for validating and deciding whether the measures taken in the next sections for oscillating multi-omic patterns and pathways are comparable, and at what level of reliability. Algorithm 1 presents a schematic description of the MORA methodology. The code is available at https://github.com/lodeguns/MORA. Oscillating multi-omics on patterns In this work, the multi-omics are discretized in a binary field as described in the previous "Binary discretization of multi-omics taking into account the global and the local effect" section, so that these values are classified into two classes (N=2). A multi-omic pattern presents an alternation if the two adjacent values ej and ej+1 in the pattern sequence are subtracted and ∣ej−ej+1∣=1. If the result of the subtraction is 0, then ej and ej+1 are equal and there is no alternation (i.e. Fig. 1d(2) a1). Furthermore, l is defined as the length of mov and div=l−1 is the number of divisors between the ej. For each pattern, the relative multi-omic pattern score a.s (Eq.
4) is the number of adjacent alternated couples of values divided by N: $$ a.s = \sum_{j=1}^{div} \mid{e_{j} - e_{j+1}}\mid\cdot\frac{1}{N} $$ We are interested in patterns of oscillating multi-omics with values close to the maximal score. We can obtain the alternation score of multi-omic patterns by leveraging Eq. 4. A maximal score corresponds to a sequence of fully oscillating multi-omic values, for example movs=0−1−0−1−...0−1 or alternatively movs=1−0−1−0−...−1−0. Equation 4 returns the maximal score if and only if the pattern sequence presents fully oscillating multi-omics. The correctness of the last statement is proved in Theorem 1 (Additional file 1: Section S4, Theorem 1). Fully oscillating sequences of different lengths, l1 and l2, present different scores depending on their length. Thus, it is better to normalize the score by dividing it by the number of divisors. Then, the absolute score of alternation is obtained, as in Eq. 5: $$ w.s = \frac{\sum_{j=1}^{div} \mid{e_{j} - e_{j+1}}\mid\cdot\frac{1}{N}}{div} $$ The maximal absolute score is proved to exist and to be unique for each pattern sequence of a specific length (Additional file 1: Section S4, Theorem 2). Theorems 1 and 2 are important because they prove the correctness of the computation of the distance d of an observed oscillating multi-omic pattern movobs (i.e., movobs: 1−0−1−0−0−1−0−1−1−0) from the maximal absolute score (which can be considered the ideal score). Thus, we have a powerful instrument to investigate how far the oscillations observed in movobs are from the ideal (maximal) oscillations movid. The distance d in Eq. 6 is the classical Manhattan distance: $$ d = N \cdot (w.s({mov}_{id}) - w.s({mov}_{obs})) \geq 0 $$ d is a geometric distance only if movobs and movid have the same length. The similarity index of movobs with respect to movid is computed using Eq.
7: $$ \sigma_{obs} = 1 - d $$ Oscillating multi-omics on networks In this section, we illustrate the effect of oscillating multi-omics when projected on pathways. We consider the Park and Barabási [35] dyadic model on the pathways. In particular, a couple of proteins linked in the pathway presents a dyadic property if they both have the same multi-omic value (i.e., the couples of red-red nodes or blue-blue nodes in Fig. 1b(1)). Instead, they show an anti-dyadic property if they have oscillating multi-omics (couples of red-blue nodes). Following the model of Park and Barabási, it is possible to compute the expected value of an anti-dyadic effect on the couples of nodes that present an alternation. Let n1 be the number of pathway nodes with multi-omic value equal to 1, and n0 be the number of pathway nodes with multi-omic value equal to 0. In this way, the total number of nodes in the pathway is N=n1+n0. The numbers of links between nodes that show the anti-dyadic property are described by the variables m10 and m01. The numbers of links between nodes that do not show an alternation (dyadic property) are represented by the variables m11 and m00. Therefore, the total number of edges of a pathway is M=m10+m01+m11+m00. Note that the maximal possible number of edges in a directed network is equal to N(N−1). The network density δ [36] of a directed graph is described in the following Eq. 8: $$ \delta_{p} = \frac{M}{N(N-1)} $$ The random variables \(X_{m_{10}}, X_{m_{01}}\), \(X_{m_{11}}\), \(X_{m_{00}}\) (where \(X_{m_{10}} = X_{m_{01}}\)) describe the events of oscillations or non-oscillations in a network. The expected value of observing an alternation (anti-dyadic property) on two nodes, given by Eq. 9, is the following: $$ E\left[X_{m_{10}}\right] = E\left[X_{m_{01}}\right] = \binom{n_{0}}{1}\binom{n_{1}}{1} \frac{M}{N(N-1)} = n_{1} \cdot n_{0} \cdot \delta_{p} $$ On the other hand, the expected value of not observing an alternation (dyadic property) is defined by Eqs.
10 and 11: $$ E\left[X_{m_{00}}\right] = \binom{n_{0}}{2}\delta_{p} = \frac{n_{0}(n_{0} -1)}{2} \delta_{p}, \qquad E\left[X_{m_{11}}\right] = \binom{n_{1}}{2}\delta_{p} = \frac{n_{1}(n_{1} -1)}{2} \delta_{p} $$ The last three equations describe the link of every pair of nodes with a given probability [37, 38], as described by Gilbert's model for random networks. Therefore, if the counts of m10, m01, m11 and m00 deviate from the expected values described above, it is possible that the multi-omics on the pathways are arranged in a structured manner, differently from Gilbert's model. The ratio between the observed alternation and its expected value is a direct measure of deviation (a.k.a. magnitude) of the observed pathway from a random configuration. In particular, the pathways present oscillating multi-omics with a magnitude \(\widehat {m_{10}}\) given by Eq. 12: $$ \widehat{m_{10}} = \frac{m_{10}}{E\left[X_{m_{10}}\right]} $$ Following the properties of the network structure, if the nodes present a dyadic effect, their magnitudes \(\widehat {m_{11}}\) and \(\widehat {m_{00}}\) are given by Eqs. 13 and 14: $$ \widehat{m_{00}} = \frac{m_{00}}{E\left[X_{m_{00}}\right]}, \qquad \widehat{m_{11}} = \frac{m_{11}}{E\left[X_{m_{11}}\right]} $$ If \(\widehat {m_{01}}\) is greater than 1, it indicates that multi-omic nodes are oscillating more than expected in a random configuration. Multi-omics are not oscillating in the same way when \(\widehat {m_{00}} > 1\) or \(\widehat {m_{11}} > 1\). Due to the configuration of the pathways, it is possible that the anti-dyadic magnitude (see Eq. 12) and both the dyadic magnitudes (see Eqs. 13, 14) present values greater than 1; therefore, a network is mainly oscillating if \(\widehat {m_{10}} > \widehat {m_{11}}\) and \(\widehat {m_{10}} > \widehat {m_{00}}\). The average dyadic effect \(\widehat {m_{0011}}\) is defined as the average of the magnitudes \(\widehat {m_{11}}\) and \(\widehat {m_{00}}\).
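Equations 4-7 and 8-12 are simple enough to be checked numerically. The sketch below uses the movobs example quoted in the text; the dyadic edge counts in the last test case are invented toy numbers:

```python
def alternation_scores(mov, n_classes=2):
    """Eq. 4 (a.s) and Eq. 5 (w.s): adjacent-pair alternation scores of
    a discretized multi-omic pattern."""
    div = len(mov) - 1
    a_s = sum(abs(mov[j] - mov[j + 1]) for j in range(div)) / n_classes
    return a_s, a_s / div

def similarity(mov_obs, n_classes=2):
    """Eqs. 6-7: Manhattan distance d of the observed pattern from the
    ideal fully oscillating pattern of the same length, then 1 - d."""
    _, ws_obs = alternation_scores(mov_obs, n_classes)
    ideal = [j % 2 for j in range(len(mov_obs))]   # 0-1-0-1-... pattern
    _, ws_id = alternation_scores(ideal, n_classes)
    d = n_classes * (ws_id - ws_obs)
    return 1 - d

def anti_dyadic_magnitude(n0, n1, m10, m_edges):
    """Eqs. 8, 9, 12: observed oscillating links m10 over their expected
    value n1 * n0 * delta_p under Gilbert's random-network model."""
    n = n0 + n1
    delta_p = m_edges / (n * (n - 1))              # directed density, Eq. 8
    return m10 / (n1 * n0 * delta_p)

mov_obs = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]           # movobs example from the text
print(round(similarity(mov_obs), 3))               # → 0.778
print(anti_dyadic_magnitude(2, 2, 3, 6))           # toy pathway → 1.5
```

A magnitude above 1, as in the toy pathway, indicates more oscillating links than Gilbert's model would predict.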
The whole procedure performance Given an unweighted graph G(N,E), with N the set of pathway nodes and E the set of edges/reactions, MORA searches for the reciprocal influences with a complexity polynomial in N. Additionally, MORA is coupled with the measurement of oscillations, making the whole procedure exponential. The function get.shortest.paths is a modified function that implements a breadth-first search (BFS) [26] and keeps track of all the shortest paths between two end nodes. The worst-case time complexity of this search algorithm is O(|N|+|E|), but 0≤|E|≤|N|(|N|−1); thus, the complexity can be quadratic in N: O(|N|^2). The latter is called for each adjacent couple (N−1) of multi-omics along the sequence, plus the one taken considering the couple formed by the first and the last elements: O(|N|(|N|+|E|)) [39]. Moreover, the computation of the dyadic/anti-dyadic effect requires an exhaustive enumeration of all the possible configurations of the n1 nodes on the whole node set N: \(\binom {N}{n_{1}} = O\left (N^{n_{1}}\right)\). The latter turns out to be exponential in time, but it is possible to apply an approximation by analysing the boundary configurations of a phase diagram [35]. Unfortunately, in this case, due to the high specificity of enzyme-substrate reactions, a large part of the biological information is lost with this approximation. The computation of the oscillating multi-omics on sequences requires linear time with respect to the number of nodes (O(|N|)). As a consequence, the whole procedure is of complexity \(f\left (\mathrm {N}, n_{1}\right) = O\left (N^{n_{1}}\right) + O\left (|N|^{3}\right) + O(|N|)\). Data sources and multi-omics data integration In Additional file 1: Section S3 the data sources and the differences between static and dynamic omic values are described. Static omic values, such as CAI, are those not responsive to antibiotics.
Dynamic omic values are those sensitive to treatments, such as the mRNA amount, the protein abundance and its variation. In particular, as far as the transcriptomic layer is concerned, the microarray compendia are extracted with relevant between-study reliabilities [40], as described in Additional file 1: Section S5 (5.0.2). Furthermore, in the proteomic layer, a random-effect model is designed with suitable predictors with the aim of obtaining a noise-robust protein variation pv as a fundamental dynamic omic (see [41] and Additional file 1: Section S5 (5.0.3) for more details on the model). As far as the metabolomic layer is concerned, a novel protein-centric network of reactions is obtained by integrating two sources [42, 43] (Additional file 1: Section S5 (5.0.4)). Finally, in the genomic layer, relevant information comes from the codon usage extraction, as described in Additional file 1: Section S5 (5.0.1). The experiments were performed on 66 pathways in standard conditions and taking into account the average effect of 69 antibiotic perturbations. The sequence patterns and network motifs are studied on 66 multi-layered structures four times. In fact, four different experimental set-ups were compared on the same dataset: (i) MLS without modifications; (ii) MLS with operon compression; (iii) MLS with path extension; (iv) MLS with operon compression followed by path extension (see the violet block in Fig. 2). The MORA algorithm evaluated the reciprocal influences RI on the 66×4 MLSs (see the white block in Fig. 2). Once the associations between the recurring patterns and network motifs were evaluated, the oscillating multi-omics were computed with the following scores: (i) the similarity between the observed oscillating multi-omic patterns and the ideal patterns (Eq. 7); (ii) the pathway dyadic/anti-dyadic effect magnitudes, as illustrated in Eqs. 12, 13 and 14 (see also the green blocks in Fig. 2).
For a detailed description of the experimental parameters, see Additional file 2: Tables S1 and S2. Row names are labelled with their unique E.coli KEGG identifier codes (eco:pathNNNNN).

A small case study

Multi-omic oscillations of E.coli glycolysis are shown in Additional file 1: Figure S5, Section S7. In this case, in standard conditions, the similarity computed as in Eq. 7 is 0.6 (red line) and \(\widehat{m_{01}}\) is 1.9, while the magnitude of the dyadic effect \(\widehat{m_{0011}}\) is 1.5. These values can change due to the perturbations, as shown in Additional file 1: Figure S5, part (b). In this small case study, a single-pathway alternation analysis with the effect of path extensions is shown in Additional file 1: Figure S6, highlighting that, with or without path extensions, the oscillations on motifs are preserved.

Multi-omic oscillation features are present in standard conditions and show a multi-scale behavior

For the 66 pathways and supra-pathways extracted from KEGG, the multi-omic oscillations in standard conditions are studied through their associated 66 MLSs and their modifications. Network motifs: it is clearly shown (Fig. 6a) that most of the pathways without path extensions (black dots) present a relevant anti-dyadic effect (median \(\widehat{m_{01}} = 1.87\), sd = ±0.57), even though a relevant but more variable dyadic effect is also present (median 2.03, sd = ±0.81). In the pathways with path extensions (blue dots), the anti-dyadic effect is decreased (median 1.35, sd = ±0.48); contextually, the average dyadic effect is also decreased (median 1.45, sd = ±0.57). In some cases, path extensions, more than other MLS modifications, can increase the oscillations. In some other cases, MLS modifications introduce new proteins as nodes and new edges as reactions that can decrease the oscillations; it depends on the network topologies.
In this case, in most of the pathways, the anti-dyadic effect magnitude is decreased even if it remains relevant. In standard conditions, pathways without path extensions present only 18/66 combined MLSs with σobs > 0.7 and \(\widehat{m_{01}} > 1\). The anti-dyadic effect magnitudes (y-axis) and the dyadic effect magnitudes (x-axis) of the 66 pathways are shown in Figure a. The pink rectangle underlines the area where the pathways present an anti-dyadic effect \(\widehat{m_{01}} \ge 1\), while the blue rectangle indicates the area where the average dyadic effect is \(\widehat{m_{1100}} \ge 1\). The pathways with path extensions are shown with blue dots, while black dots depict the same pathways without extensions. The number on the dots is the KEGG pathway identifier without its prefix path:eco. In Figure b, the anti-dyadic effect \(\widehat{m_{01}}\) is shown on the y-axis and the pattern similarity σobs to an ideal oscillating multi-omic pattern on the x-axis. Black dots describe pathways without extensions, and triangles depict those with extensions. The black and blue curves correspond to the two-dimensional kernel density estimations for the dots and for the triangles, respectively. The plot is clearly separable by a binary classifier, identifying principally two bands (the black and the blue one). Both plots show that pathways without extensions have a median reciprocal influence of 1 ± 0.27 per node, while pathways with extensions present a median reciprocal influence of 2 ± 0.62 per node. Pathways with extensions present better MORA reciprocal influences than pathways without extensions. A pathway is presented with a big shape on the plot if its RI is > 1.5 (more adjacent); conversely, pathways with RI ≤ 1.5 are classified as less adjacent. Sequence patterns: Fig. 6b shows how combined and competitor structures interact.
Furthermore, pathways with path extensions (blue triangles) are more oscillating on patterns (median σobs = 0.73, sd = 0.10) with respect to those without extensions (black dots) (median σobs = 0.53, sd = 0.14). On one hand, pathways with path extensions present a high anti-dyadic effect; on the other hand, they shift the σobs density center to a value nearer to 0.5. This means that pathways with path extensions seem to be more in combination than pathways without extensions. Operon compression returns multi-omic pattern similarity scores which are slightly higher (median σobs = 0.54, sd = 0.15) than in unmodified MLSs. Modifying MLSs by coupling operon compression and path extension leads to lower oscillations in patterns (median σobs = 0.70, sd = 0.12). The presence of operons on the patterns does not always cause the same effect: due to the compression, in some pathways, for example glycolysis (KEGG identifier path:eco00010), σobs changes from 0.62 to 0.66, while in the citrate cycle (TCA cycle, path:eco00020), σobs changes from 0.38 to 0.62. In other cases, σobs decreases, from 0.64 to 0.38, as, for example, in lysine degradation (KEGG identifier path:eco00310). After all, oscillation is still present for most of the pathways even without the MLS modifications. In this way, multi-omic oscillations allow uncovering similarities between the network structures, which can reveal the existence of generic organization principles. A comparison of MLSs with and without modifications reveals a multi-scale presence of oscillations in sequences and networks of different dimensions, but with a widespread tendency to homeostasis.

Multi-omic oscillations are related to MLS reciprocal influences, showing a particular interplay between the sequence gene order and the pathway structure

It is possible to assess that MLSs are related to multi-omic oscillations, underlining the interplay between the gene order and the particular schema of reactions.
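The reciprocal-influence idea just described, namely how close consecutive genes of the sequence lie within the pathway graph, can be sketched as follows. This is a hypothetical Python illustration; MORA's actual implementation is in R, built on igraph's get.shortest.paths, and is considerably more elaborate. Note also that the sketch returns a mean distance, so lower values mean more adjacent, which is the inverse sense of the RI score:

```python
from collections import deque

def bfs_distance(adj, src, dst):
    """Shortest-path length between src and dst via breadth-first search."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def mean_adjacency(adj, sequence):
    """Average pathway distance between consecutive genes of the sequence,
    closing the loop between the last and the first element."""
    pairs = list(zip(sequence, sequence[1:] + sequence[:1]))
    return sum(bfs_distance(adj, a, b) for a, b in pairs) / len(pairs)

# Toy pathway: the gene order matches a chain of reactions.
adj = {"g1": ["g2"], "g2": ["g1", "g3"], "g3": ["g2", "g4"], "g4": ["g3"]}
score = mean_adjacency(adj, ["g1", "g2", "g3", "g4"])
```

In this toy case the three consecutive couples are directly linked (distance 1 each) while the closing couple g4-g1 is three steps apart, so the mean distance is 1.5.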
In both the plots of Fig. 6, pathways without extensions maintain a relevant reciprocal influence (median 1, sd = ±0.27), and the value increases for all the pathways with extensions (median 2, sd = ±0.63). In both cases, the reciprocal influences are not near 0, thus associating the anti-dyadic motifs with the neighboring multi-omics along the patterns. In this way, it is also shown that the network plasticity [19] does not depend only on a particular circuit of reactions but can be investigated through the extent of the MORA reciprocal influences between the sequence gene order and the network structures. From an evolutionary point of view, the sequence gene order and the pathway structures are conserved across organisms [44, 45]. In future works, it could be interesting to investigate the interplay of gene order and network structures across several species, taking the comparisons between MORA reciprocal influences as a measure of the evolutionary pressure.

Multi-omic oscillation features change in configuration due to perturbations and reveal different regulatory behavior

We measured the multi-omic oscillations on the 66 MLSs perturbed by considering the average effect of 69 antibiotic perturbations. A strong response from the MLSs, as shown in Fig. 7, is obtained when pathways with extensions in standard conditions (black dots) are compared against the average effect of the 69 treatments (blue triangles). The treatment effects strongly lower the pattern oscillation score, moving the median from σobs = 0.72, sd = 0.10 to σobs = 0.57, sd = 0.11. On the other hand, on the networks, they raise the median value of \(\widehat{m_{01}}\) from 1.36 to 1.41, with an increase in variability from sd = 0.50 to sd = 0.87. These values show that the organism, in response to the treatments, activates other oscillating circuits on the same pathway while silencing some others. For example, in Fig.
7, base excision repair (KEGG pathway id eco03410, with extension) reasonably increases its anti-dyadic effect by more than 0.5. Mismatch repair (KEGG pathway eco03430) also increases its anti-dyadic effect magnitude, and in this case its oscillating similarity on the pattern is lowered. The same holds for the flagellar assembly pathway (KEGG pathway eco02040) which, due to label overlapping not shown on the plot, in standard conditions is near the black node 360, while under the treatments it is behind the blue triangle 280, again in Fig. 7. The structures (sequence and pathway) that form the MLS are defined as combined if oscillating multi-omics are present at the same time on the patterns and on the network motifs. Instead, when oscillating multi-omics are found on patterns and not on pathways, or vice versa, these structures are defined as competitors. In our case, it is observed that the combined structures in standard conditions are 47/66 considering pathways with extension, 14/66 with operon compression, and 43/66 with operon compression and extension. The average effect of the 69 treatments leaves only 25/66 MLSs in combination. Similar results are obtained considering MLSs with operon compression and with operon compression and path extension. Some structures that are combined in steady-state conditions become competitors due to the perturbations, and vice versa. Configuration changes imply the activation or the deactivation of oscillating multi-omic circuits on the pathways (as shown in Fig. 8), highlighting the presence of different and unbalanced regulatory functions. The anti-dyadic effect magnitudes \(\widehat{m_{01}}\) (y-axis) and the similarity score magnitudes σobs (x-axis) of the 66 pathways are shown in the figure. The 66 pathways with extensions subject to the average effect of the 69 treatments are shown with blue triangles; the same pathways in steady-state conditions are represented with black dots.
A pathway is presented with a big shape on the plot if its RI is > 1.5 (more adjacent); conversely, pathways with RI ≤ 1.5 are classified as less adjacent. An MLS with high reciprocal influences (RI) but with a low number of oscillating multi-omics is shown in (1). On this MLS, due to the effect of the treatments t, an oscillating multi-omic circuit is activated (orange links in the yellow circle) and another one is deactivated. The MLS becomes combined due to the effect of t because both the pattern and the pathway show oscillating multi-omics at the same time. In (2), by contrast, the effect of the treatments activates and deactivates the same oscillating multi-omic circuits, but, due to the changed order of the pattern elements, only the pathway shows oscillating multi-omics, while the pattern shows a low number of oscillations. In this case, the structures are defined as competitors. The change of only two multi-omic values (p3 and p5) on the overall pathway and on the pattern (e3 and e5) affects the whole recurring multi-omic pattern and its MLS. In terms of performance, the proposed methodology is comparable with well-known mining methodologies for sequence patterns [46] and dyad motifs [35, 47]. From the biological point of view, it is based on an extensive perturbation multi-omic analysis related to network and sequence structures. In this field, recent studies are focused on frequency patterns that, conditioned by biochemical oscillators, are activated or deactivated on regulatory network circuits. In particular, it has been proven that different types of regulatory functions appear to be related to particular network structures, to such an extent that different biochemical oscillators are associated with specific structures [48]. Furthermore, some other studies are focused on bacterial network motifs and sequences, deepening the topological features of complex structures [49].
As a consequence, our methodology focuses on multi-omic patterns and motifs by putting them in a biological and structural relation. In this way, it is possible to leverage the topological interplay between networks and sequences to understand the effect of perturbations and the role of regulatory functions. Consequently, discovered patterns and motifs are considered part of the same multi-layered structure (MLS), and they come from relatively simple data integration processes, useful to compare dynamic omic sources and perturbations (protein variation, mRNA amounts, etc.) with static omic sources (codon usage, gene order, pathways). Therefore, the extent of oscillating multi-omics and their reciprocal influences is analyzed, investigating whether sequence patterns and network motifs are combined or competitors. In this setting, we have shown how perturbations alter the MLS dynamically. These changes were proved to trigger the activation and deactivation of oscillating multi-omic circuits, with major implications in the metabolic response to antibiotics. Moreover, unknown structural features are revealed: e.g., it is shown that the gene order and the bacterial reaction circuits reveal a strong interplay in combined structures. MLSs were studied when subject to modifications (operon compression and path extensions), considering multi-scale multi-omic metabolic networks. When the motifs present a lack of oscillating multi-omics, we have seen, through the path extensions, that the metabolic network affects the recurring patterns more than the operon compression does. In this setting, some patterns show high scores while others do not. The reason is that, in some cases, the network helps the recurring pattern to maintain a structural oscillation, while in other cases the network negatively influences the oscillations. In this way, the variable multi-omic quantities could be analyzed in order to discover unknown regulation mechanisms between the different omic layers.
Impressively, competitor MLSs, after a perturbation, can become combined, showing a sort of synchronization used to realize a catabolic or an anabolic process in an optimized manner. Furthermore, the latter observations, coupled with the discretization of multi-omics, depict the complexity of metabolic processes and their response to antibiotics, unveiling the rules behind metabolic network robustness and plasticity [19]. After all, MORA reciprocal influences between motifs and patterns are obtained in a reasonable time (see 2) on an almost complete dataset. In fact, the whole protein-centric network has 1,644 vertices and 369,863 edges and, to our knowledge, no larger network (in terms of vertices) exists on which to project more multi-omics. The selected experiments come from two compendia specifically suited for antibiotic treatments (see Paragraph Data sources and multi-omics data integration). In order to make results comparable, it is possible to extend the whole procedure to other experiments considering perturbations of the same typology. Unfortunately, the complete proposed methodology is exponential (see Paragraph The whole procedure performance) because of the measurement of oscillating multi-omics on networks (see Paragraph Oscillating multi-omics on networks). It is possible to achieve ad hoc algorithmic improvements by reducing the pathway redundancy [50] or by saving and reusing the computations done on overlapping pathway sub-circuits/sub-graphs in specialized data structures. After all, functional plasticity, homeostasis, redundancy and promiscuity are conserved biological aspects of metabolic networks and biological processes which make it possible to survive external perturbations [51]. Therefore, it is a better choice not to remove redundancy, preserving the biological meaning of the research.
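The reuse strategy mentioned above can be sketched as simple memoisation of shortest-path queries over sub-graphs shared by overlapping pathways. This is a hypothetical Python sketch; the text leaves the actual specialized data structures open, and the graph and names below are illustrative:

```python
from collections import deque
from functools import lru_cache

# Toy pathway graph shared by several overlapping pathways.
ADJ = {"a": ("b",), "b": ("a", "c"), "c": ("b", "d"), "d": ("c",)}

@lru_cache(maxsize=None)
def distance(src, dst):
    """BFS shortest-path length; cached so overlapping pathways that
    query the same node pair reuse the earlier computation."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in ADJ.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

d1 = distance("a", "d")   # computed once
d2 = distance("a", "d")   # served from the cache
hits = distance.cache_info().hits
```

With thousands of overlapping pathway sub-graphs, such caching turns repeated shortest-path queries into constant-time lookups at the cost of memory.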
For the same reason, it is recommended to pay attention to network edge pruning, due to the promiscuity of small metabolites (i.e., H2O, O2, AMP, etc.) [25, 52]. Oscillating multi-omic patterns associated with dynamic rates of substrate/compound transformations might lead to predicting some specific biochemical processes also in the presence of missing data, by using inferential methods based on the profiling of reaction velocities and their dynamics. Future improvements might lead us to study oscillations with N > 2 classes. In the literature there are several methodologies to define N as a discrete space of features [53]. We suggest applying a consensus criterion in order to project multi-omics onto a discrete space with N > 2, because there is no a priori best methodology. For example, we adopted Sturges' formula in one of our previous works [12]. Once the number of discretisation levels N is decided, the procedure to obtain σobs is very similar to the one described in the paragraph Oscillating multi-omics on patterns. Note that if we have a set of discretisation levels Σ = {0, 1, 2, ..., N}, then each discretisation level contributes to the pattern oscillation by its fraction 1/|Σ|. The ideal σobs is equal to \(\frac{(max(\Sigma)-0) \cdot div}{max(\Sigma) \cdot div} = 1\). It follows that the ideal oscillation with N > 2 assumes the form of a series of min-max discretisation levels: max(Σ)−min(Σ)−max(Σ)…min(Σ)−max(Σ). The closer the adjacent values are to one another, rather than alternating between max(Σ) and min(Σ), the fewer oscillating multi-omics we have. The multi-class dyadic effect follows the same rules presented above, where the maximal oscillation is equal to N and the others are in [0, N].

In this paper, a multi-omic integration based methodology is introduced to analyse bacterial oscillating multi-omics on sequence patterns and network motifs. The subject of our analysis is E.coli.
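The multi-class oscillation score outlined in the previous paragraph can be illustrated with a small sketch. This is hypothetical Python, not the paper's Eq. 7: the score below simply averages the normalised jumps between adjacent discretised values, so a perfect min-max alternation scores 1 and a flat profile scores 0:

```python
def oscillation_score(values, n_levels):
    """Average normalised jump between adjacent discretised values.

    values: sequence of integers in {0, ..., n_levels}; the score is 1
    for an ideal max-min alternation and 0 for a flat profile.
    """
    jumps = [abs(b - a) for a, b in zip(values, values[1:])]
    return sum(jumps) / (len(jumps) * n_levels)

# Ideal alternation between the extreme levels of sigma = {0, 1, 2, 3}.
ideal = oscillation_score([3, 0, 3, 0, 3], 3)
# A flat profile shows no oscillation at all.
flat = oscillation_score([2, 2, 2, 2, 2], 3)
```

Intermediate profiles, whose adjacent values differ by less than the full max(Σ)-min(Σ) span, fall between these two extremes.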
The lack of methodologies for multi-omics integration decreases the chance to detect emerging motifs and patterns across sequences, networks or pathways. The current need for analytics to compare and test metabolic models, the accurate design of a new pathway, the redesign of an existing metabolic pathway, or the experimental validation of an in silico metabolic model using a nearby species all require information on how multi-omics data build up multi-scale phenotypes. The high-level comparison is based on novel algorithms which take the low-level approach at the pathway-level framework and generate multi-scale multi-omic metabolic networks. The goal is to find relations between oscillating multi-omic patterns on sequences and oscillating motifs on metabolic networks. Recurring oscillating multi-omic patterns are discovered to be related to multi-omic metabolic network motifs. This last novel feature can be related to the highly conserved gene order and to the high specificity of enzymatic reactions and their network topology. The discovered motifs can be useful not only in the study of bacterial phenotypic responses but also in applications of metabolic engineering and optimization. Hence, this work can contribute to the study and to the creation of new interesting metabolic circuits based on binary multi-omic network motifs and their recurring patterns. Our framework is implemented in software written in R which provides effective and friendly means to design intervention scenarios from the perspective of synthetic biology.

Alphabetical list of abbreviations: APL: average path length; CAI: codon adaptation index; MLS: multi-layered structures; MORA: multi-omic reciprocal adjacencies; PA: protein abundance; PV: protein variation.

Kohl M, Megger DA, Trippler M, Meckel H, Ahrens M, Bracht T, Weber F, Hoffmann A-C, Baba HA, Sitek B, et al. A practical data processing workflow for multi-omics projects.
Biochim Biophys Acta (BBA) - Protein Proteomics. 2014; 1844(1):52–62. Bersanelli M, Mosca E, Remondini D, Giampieri E, Sala C, Castellani G, Milanesi L. Methods for the integration of multi-omics data: mathematical aspects. BMC Bioinformatics. 2016; 17(Suppl 2):15. Angione C, Conway M, Lió P. Multiplex methods provide effective integration of multi-omic data in genome-scale models. BMC Bioinformatics. 2016; 17(4):83. Menichetti G, Remondini D, Panzarasa P, Mondragón RJ, Bianconi G. Weighted multiplex networks. PLoS ONE. 2014; 9(6):97857. Suravajhala P, Kogelman LJ, Kadarmideen HN. Multi-omic data integration and analysis using systems genomics approaches: methods and applications in animal production, health and welfare. Genet Sel Evol. 2016; 48(1):38. Mesiti M, Jiménez-Ruiz E, Sanz I, Berlanga-Llavori R, Perlasca P, Valentini G, Manset D. Xml-based approaches for the integration of heterogeneous bio-molecular data. BMC Bioinformatics. 2009; 10(Suppl 12):7. Serra A, Fratello M, Fortino V, Raiconi G, Tagliaferri R, Greco D. Mvda: a multi-view genomic data integration methodology. BMC Bioinformatics. 2015; 16(1):261. Chiappino-Pepe A, Pandey V, Ataman M, Hatzimanikatis V. Integration of metabolic, regulatory and signaling networks towards analysis of perturbation and dynamic responses. Curr Opin Syst Biol. 2017. Zhang W, Li F, Nie L. Integrating multiple 'omics' analysis for microbial biology: application and methodologies. Microbiology. 2010; 156(2):287–301. Aksenov SV, Church B, Dhiman A, Georgieva A, Sarangapani R, Helmlinger G, Khalil IG. An integrated approach for inference and mechanistic modeling for advancing drug development. FEBS Lett. 2005; 579(8):1878–83. Ebrahim A, Brunk E, Tan J, O'brien EJ, Kim D, Szubin R, Lerman JA, Lechner A, Sastry A, Bordbar A, et al. Multi-omic data integration enables discovery of hidden biological regularities. Nat Commun. 2016; 7:1–9. Bardozzo F, Lió P, Tagliaferri R.
Multi omic oscillations in bacterial pathways. In: Neural Networks (IJCNN), 2015 International Joint Conference On. Killarney: IEEE: 2015. p. 1–8. Chakraborty S, Nag D, Mazumder TH, Uddin A. Codon usage pattern and prediction of gene expression level in bungarus species. Gene. 2017; 604:48–60. Ishii N, Tomita M. Multi-omics data-driven systems biology of e. coli. In: Systems biology and biotechnology of Escherichia coli, volume 1: 2009. p. 41–57. Toyoda T, Wada A. Omic space: coordinate-based integration and analysis of genomic phenomic interactions. Bioinformatics. 2004; 20(11):1759–65. Shen-Orr SS, Milo R, Mangan S, Alon U. Network motifs in the transcriptional regulation network of escherichia coli. Nat Genet. 2002; 31(1):64–8. Kim M, Rai N, Zorraquino V, Tagkopoulos I. Multi-omics integration accurately predicts cellular state in unexplored conditions for escherichia coli. Nat Commun. 2016; 7:1–12. Fondi M, Liò P. Multi-omics and metabolic modelling pipelines: challenges and tools for systems microbiology. Microbiol Res. 2015; 171:52–64. Almaas E, Oltvai Z, Barabási A-L. The activity reaction core and plasticity of metabolic networks. PLoS Comput Biol. 2005; 1(7):68. Blattner FR, Plunkett G, Bloch CA, Perna NT, Burland V, Riley M, Collado-Vides J, Glasner JD, Rode CK, Mayhew GF, et al. The complete genome sequence of escherichia coli k-12. Science. 1997; 277(5331):1453–62. Wang M, Weiss M, Simonovic M, Haertinger G, Schrimpf SP, Hengartner MO, von Mering C. Paxdb, a database of protein abundance averages across all three domains of life. Mol Cell Proteomics. 2012; 11(8):492–500. Mao F, Dam P, Chou J, Olman V, Xu Y. Door: a database for prokaryotic operons. Nucleic Acids Res. 2008; 37(suppl_1):459–63. Chung F, Lu L, Dewey TG, Galas DJ. Duplication models for biological networks. J Comput Biol. 2003; 10(5):677–87. Edwards MT, Rison SC, Stoker NG, Wernisch L.
A universally applicable method of operon map prediction on minimally annotated genomes using conserved genomic context. Nucleic Acids Res. 2005; 33(10):3253–62. Light S, Kraulis P. Network analysis of metabolic enzyme evolution in escherichia coli. BMC Bioinformatics. 2004; 5(1):1. Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal, Complex Syst. 2006; 1695(5):1–9. Cho B-K, Zengler K, Qiu Y, Park YS, Knight EM, Barrett CL, Gao Y, Palsson BØ. The transcription unit architecture of the escherichia coli genome. Nat Biotechnol. 2009; 27(11):1043–9. Dandekar T, Snel B, Huynen M, Bork P. Conservation of gene order: a fingerprint of proteins that physically interact. Trends Biochem Sci. 1998; 23(9):324–8. Wang M, Herrmann CJ, Simonovic M, Szklarczyk D, Mering C. Version 4.0 of paxdb: Protein abundance data, integrated across model organisms, tissues, and cell-lines. Proteomics. 2015; 15(18):3163–8. Sharp PM, Li W-H. The codon adaptation index: a measure of directional synonymous codon usage bias, and its potential applications. Nucleic Acids Res. 1987; 15(3):1281–95. Ludbrook J. Special article comparing methods of measurement. Clin Exp Pharmacol Physiol. 1997; 24(2):193–203. Callister SJ, Barry RC, Adkins JN, Johnson ET, Qian W-J, Webb-Robertson B-JM, Smith RD, Lipton MS. Normalization approaches for removing systematic biases associated with mass spectrometry and label-free proteomics. J Proteome Res. 2006; 5(2):277–86. Quackenbush J. Microarray data normalization and transformation. Nat Genet. 2002; 32(Supp):496. Bettenbrock K, Bai H, Ederer M, Green J, Hellingwerf KJ, Holcombe M, Kunz S, Rolfe MD, Sanguinetti G, Sawodny O, et al. Towards a systems level understanding of the oxygen response of escherichia coli. Adv Microb Physiol. 2014; 64:65–114. Park J, Barabási A-L. Distribution of node characteristics in complex networks. Proc Natl Acad Sci. 2007; 104(46):17916–20. Schaeffer SE. Graph clustering. Comput Sci Rev.
2007; 1(1):27–64. Albert R, Barabási A-L. Statistical mechanics of complex networks. Rev Mod Phys. 2002; 74(1):47. Goldenberg A, Zheng AX, Fienberg SE, Airoldi EM, et al. A survey of statistical network models. Found Trends Mach Learn. 2010; 2(2):129–233. Erciyes K. Complex networks: an algorithmic perspective, volume 1: CRC Press; 2014, pp. 93–4. Csárdi G, Franks A, Choi DS, Airoldi EM, Drummond DA. Accounting for experimental noise reveals that mrna levels, amplified by post-transcriptional processes, largely determine steady-state protein levels in yeast. PLoS Genet. 2015; 11(5):1005206. Winter B. Linear models and linear mixed effects models in R with linguistic applications. CoRR. 2013; abs/1308.5499:1–42. http://arxiv.org/abs/1308.5499. Kanehisa M, Goto S. Kegg: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000; 28(1):27–30. Keseler IM, Bonavides-Martínez C, Collado-Vides J, Gama-Castro S, Gunsalus RP, Johnson DA, Krummenacker M, Nolan LM, Paley S, Paulsen IT, et al. Ecocyc: a comprehensive view of escherichia coli biology. Nucleic Acids Res. 2009; 37(suppl 1):464–70. Kelley BP, Sharan R, Karp RM, Sittler T, Root DE, Stockwell BR, Ideker T. Conserved pathways within bacteria and yeast as revealed by global protein network alignment. Proc Natl Acad Sci. 2003; 100(20):11394–99. Tamames J. Evolution of gene order conservation in prokaryotes. Genome Biol. 2001; 2(6):0020–1. Yu C-C, Chen Y-L. Mining sequential patterns from multidimensional sequence data. IEEE Trans Knowl Data Eng. 2005; 17(1):136–40. Vazquez A, Dobrin R, Sergi D, Eckmann J-P, Oltvai Z, Barabási A-L. The topological relationship between the large-scale attributes and local interaction patterns of complex networks. Proc Natl Acad Sci. 2004; 101(52):17940–5. Kang JH, Cho K-H. A novel interaction perturbation analysis reveals a comprehensive regulatory principle underlying various biochemical oscillators. BMC Syst Biol. 2017; 11(1):95.
Gao S, Chen A, Rahmani A, Zeng J, Tan M, Alhajj R, Rokne J, Demetrick D, Wei X. Multi-scale modularity and motif distributional effect in metabolic networks. Curr Protein Pept Sci. 2016; 17(1):82–92. Vivar JC, Pemu P, McPherson R, Ghosh S. Redundancy control in pathway databases (recipa): an application for improving gene-set enrichment analysis in omics studies and "big data" biology. Omics J Integr Biol. 2013; 17(8):414–22. Güell O, Sagués F, Serrano MÁ. Essential plasticity and redundancy of metabolism unveiled by synthetic lethality analysis. PLoS Comput Biol. 2014; 10(5):1003637. Carbonell P, Lecointre G, Faulon J-L. Origins of specificity and promiscuity in metabolic networks. J Biol Chem. 2011; 286(51):43994–4004. Dougherty J, Kohavi R, Sahami M. Supervised and unsupervised discretization of continuous features. In: Machine Learning Proceedings 1995. Tahoe City: Elsevier: 1995. p. 194–202. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2016. https://www.R-project.org/. The authors would like to thank Dr. Angela Serra and Dr. Paola Galdi for their careful reading of the manuscript. Publication of this article is partly funded by the Universities of Cambridge and Salerno. Algorithm and procedures: Multi-omic oscillation analyses were carried out using R [54] with custom scripts which may be downloaded from the GitHub repository (github.com/lodeguns/MORA). Furthermore, an implementation of MORA and the complete procedure of computation for identifying oscillating multi-omics on sequences and pathways is given, with some ad hoc plots and toy examples. Data: The repository files contain all the data, divided into pathways under the effect of antibiotic perturbations and in standard conditions. Furthermore, the CAI, PA and PV data have been uploaded.
E.coli protein-centric metabolic network: Finally, the whole E.coli integrated metabolic network, obtained by merging the data from EcoCyc [43] and KEGG [42], and an associated matrix for identifying operons and protein complexes are provided in the repository. About this supplement: This article has been published as part of BMC Bioinformatics Volume 19 Supplement 7, 2018: 12th and 13th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB 2015/16). The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-7. NeuRoNe Lab, DISA-MIS, University of Salerno, Via Giovanni Paolo II 132, Salerno, 84084 Fisciano, Italy Francesco Bardozzo & Roberto Tagliaferri Computer Laboratory, Department of Computer Science, University of Cambridge, 15 JJ Thomson Ave, Cambridge, CB3 0FD, UK FB, PL and RT conceived the study and devised the methodology. FB collected the data, built the models and wrote the code. FB, PL and RT wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Roberto Tagliaferri. No human, animal or plant experiments were performed in this study, and ethics committee approval was therefore not required. Additional file 1: This file describes all the technical details and procedures used for the multi-omic data integration. In addition, some sections show math proofs concerning the oscillating multi-omics, a toy example for the multi-omic relational adjacencies and some concrete examples of pathways and patterns. (PDF 767 kb) Additional file 2: A PDF document which contains the 66 measures of dyadic/anti-dyadic effects, oscillation similarity scores, and reciprocal influences computed with MORA. Data are reported in standard conditions and considering the average effect of 69 treatments.
(PDF 108 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Bardozzo, F., Lió, P. & Tagliaferri, R. A study on multi-omic oscillations in Escherichia coli metabolic networks. BMC Bioinformatics 19, 194 (2018). https://doi.org/10.1186/s12859-018-2175-5 Keywords: Multi-omics, omic regularities, antibiotic response, multi-omic metabolic networks, multi-omic motifs
Multi-encoder attention-based architectures for sound recognition with partial visual assistance Wim Boes ORCID: orcid.org/0000-0001-7344-61161 & Hugo Van hamme1 Large-scale sound recognition data sets typically consist of acoustic recordings obtained from multimedia libraries. As a consequence, modalities other than audio can often be exploited to improve the outputs of models designed for associated tasks. Frequently, however, not all contents are available for all samples of such a collection: For example, the original material may have been removed from the source platform at some point, and therefore, non-auditory features can no longer be acquired. We demonstrate that a multi-encoder framework can be employed to deal with this issue by applying this method to attention-based deep learning systems, which are currently part of the state of the art in the domain of sound recognition. More specifically, we show that the proposed model extension can successfully be utilized to incorporate partially available visual information into the operational procedures of such networks, which normally only use auditory features during training and inference. Experimentally, we verify that the considered approach leads to improved predictions in a number of evaluation scenarios pertaining to audio tagging and sound event detection. Additionally, we scrutinize some properties and limitations of the presented technique. Numerous sounds carry meaning relevant to everyday life: Speech is a particularly important subclass, but more general acoustic events, such as the screaming of a baby or the ringing of an alarm bell, can obviously also be of great importance to humans. In this light, it is only logical that sound recognition tasks are quickly becoming significant machine learning subjects. The most prominent related topics are aggregated in a yearly contest, the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge. 
Its most recent version [1] featured multiple subtasks revolving around classification of environmental events: If coupled with estimation of temporal boundaries, this problem is usually called sound event detection; otherwise, it is typically referred to as audio tagging. Other subjects include but are not limited to spatial localization and automated captioning of auditory inputs. The research presented in this project deals with the integration of visual information into sound recognition systems. Prior efforts have shown that this process can lead to enhanced predictions: [2,3,4] demonstrated that employing audiovisual variants of deep learning architectures, such as feedforward and attention-based neural networks, can be beneficial for audio tagging. In [5], it is shown that applying early (feature) fusion to convolutional recurrent models can provide improvements for sound event detection as well. Many commonly used data sets for sound recognition tasks in some way stem from Audio Set [6], which is a very large-scale cluster of auditory segments originating from YouTube. These clips are weakly labeled in the sense that the semantic categories of occurring audio events are given, but other details, such as their on- and offsets, are excluded. Derivations usually retain only a manually controlled subset of the full collection and occasionally append extra information: As an example, AudioCaps [7] attaches multiple descriptive text captions to each comprised sample. Since much sound recognition data originates from YouTube, a multimedia library, it is straightforward to retrieve videos and eventually incorporate these into systems designed for related tasks. This is the approach taken by the works mentioned earlier in this section. However, this procedure runs into one specific issue: Content may have been removed from this platform at some point in time, e.g., because of account deletions or copyright claims.
Consequently, the source material cannot necessarily be utilized to perform computations for all samples of the considered collection. On the acoustic side, this is usually not a problem: Curators of data sets derived from Audio Set [6] often ensure availability of all audio snippets by maintaining a separate database of copies. Having said that, for the visual information, this kind of careful conservation is unfortunately not customary. This situation can be regarded as an extreme instance of the problem of missing data. As outlined in [8], one possible approach to this matter is to simply discard all data entries with absent values. This is also the route that has been taken in the previously referenced projects related to sound recognition with visual assistance. Another, less destructive option involves imputation or replacement of missing entries. This technique can be utilized in conjunction with deep learning models, such as generative adversarial networks [9], and currently dominates the related literature. Applications can be found in, among others, the domains of healthcare [10], biomedical data mining [11], recommendation systems [12] and speech recognition [13]. The downside to imputation techniques is that, in order for them to work properly, they require quite strong assumptions about the statistics of the complete data [14]. For example, many algorithms are based on the so-called missing at random condition: In this hypothesis, the absent values are connected to the known entries through variables which may or may not be hidden. This conjecture can undoubtedly be justified in some cases, such as well-structured time series, but is definitely not universally applicable. In the context of this work, these assumptions are hard to defend: We deal with noisy audiovisual clips for which the two modalities are relatively decoupled. There are two frequently occurring issues in this regard.
Firstly, while the sound is guaranteed to be accurately labeled, the curators of the used data set make no such promises for the visual component. For instance, the videos of some samples consist of nothing more than a meaningless still image or even a black screen. Secondly, even for the examples with real visual information, there is often a severe lack of semantic or temporal synchronization between the two streams. Nevertheless, in this project, we still seek to tackle this specific manifestation of the problem of missing values related to sound recognition based on audiovisual clips originating from YouTube. To this end, we stray away from the unsatisfactory deletion procedure, which has been employed in previous works on the considered subject, and imputation, which is unrealistic in this instance as explained before. Instead, we propose an approach based on dynamically weighted fusion of intermediary auditory and visual features. This is done by adapting the multi-encoder framework presented in [15], which was originally utilized to achieve more robust predictions for speech recognition, without necessarily increasing time and/or memory complexity. This work is organized as follows: In Section 2, we elaborate upon the proposed method. In Section 3, we go into the performed experiments: We provide a detailed description of the used setup and analyze obtained results. Finally, in Section 4, we summarize the most important conclusions of the conducted research. In this section, the proposed method is discussed. We adapt and extend the middle fusion approach presented in [15] to produce a process that allows us to combine auditory inputs with partially available visual features, during training as well as inference. Section 2.1 goes into the attention-based neural networks which are employed as base components of the considered systems. Next, in Section 2.2, the multimodal multi-encoder learning framework is fully explained.
Attention-based architectures In this section, we elaborate upon the attention-based neural networks which are used throughout this work. First, we provide a detailed description of the transformer [16], which was originally developed for machine translation but has since been applied to many other tasks. Afterwards, we examine the conformer model [17], which augments the previously mentioned architecture with convolutional blocks and was originally designed for automatic speech recognition. In the outline below, these systems are presented in concrete forms suitable for the tasks relevant to this work, i.e., audio tagging and sound event detection. Transformer model The transformer for joint audio tagging and sound event detection is schematically illustrated in Fig. 1. It strongly resembles the system described in [18], which itself is a variant of the original model [16]. Diagram of transformer model for sound recognition The transformer is a neural network that converts sequential inputs into a series of probabilities. In the context of this project, these values indicate which sounds are present during each time frame of an audio(visual) recording. Similar to the approach taken in the BERT model [19], built for natural language processing tasks, we append a learnable classification token to the set of features at the decoder side. Because of this change, the output of the system will contain an extra vector, which can be used to obtain clip-level predictions. The encoder and decoder blocks of the transformer model (in this project, we use 3 of each) closely resemble each other. Their architectures consist of a combination of layer normalization operations [20], residual connections [21], feedforward components — which, in this case, are made up of two consecutive layers with 512 ReLU and 128 linear neurons respectively — and perhaps most importantly, multi-head attention modules.
Also, dropout [22] with a rate of 0.1 is used after each of the aforementioned feedforward submaps; this is not explicitly shown in Fig. 1. The multi-head attention mechanism [16] performs a content-based comparison between two sets of features, referred to as queries and keys respectively. This is done by computing scaled dot products. Afterwards, the resulting so-called attention weights are multiplied with another series of inputs, namely, the values. This operation is mathematically expressed in Eq. 1. $$\begin{aligned} A(Q, K, V) = \text{softmax}\left( \dfrac{QK^T}{\sqrt{d_k}}\right) V \end{aligned}$$ In Eq. 1, Q, K and V are the matrices containing queries, keys and values respectively. The scaling parameter in the denominator of the softmax argument, \(d_k\), refers to the dimensionality of the keys. In this project, we ensure this number is equal to 128 at all times. The multi-head module extends the simple attention calculation given in Eq. 1 by applying linear mappings to the queries, keys and values before this computation and repeating the entire process several times. Eventually, the outcomes of the multiple so-called attention heads (in this work, we stick to 3) are concatenated and fed through a last projection layer with 128 units to obtain the final result. A practical detail to be added to this description is that dropout [22] with a rate of 0.1 is also applied to all attention weights and outputs of the considered neural network blocks. All components of the transformer model that have been discussed thus far perform content-based computations and ignore sequential information. This shortcoming is mitigated by adding learnable positional encodings [23] to the inputs: These are trainable vectors representing the absolute (temporal) locations of all frames in the used feature sequences. Previously, transformer encoders have successfully been used to tackle sound event detection [24].
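As an illustration, Eq. 1 and the multi-head extension described above can be sketched in a few lines of NumPy. The projection matrices below are hypothetical stand-ins for the learned parameters of the actual model, which is of course trained end-to-end; this is a minimal sketch, not the implementation used in the experiments.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Eq. 1: scaled dot-product attention between queries and keys,
    # followed by a weighted combination of the values
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V

def multi_head_attention(Q, K, V, proj_q, proj_k, proj_v, proj_out):
    # proj_q/k/v: per-head linear mappings (3 heads in this work);
    # proj_out: final projection back to the 128-dimensional model size
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(proj_q, proj_k, proj_v)]
    return np.concatenate(heads, axis=-1) @ proj_out
```

Note that the attention weights form a proper distribution over the key positions for every query, since the softmax is applied row-wise.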
When only one set of inputs is employed, as is the case for the cited work, it indeed does not make sense to utilize the full structure: The addition of a decoder would not add functionality but only increase model complexity. However, in this project, we attempt to exploit multiple sequences of features, and in that case, using the complete transformer as a basis is more appropriate. This is further elaborated upon in Section 2.2. Conformer model The conformer encoder [17] is an extension of the corresponding component in the transformer. Compared to the latter architecture, which focuses on finding global dependencies between data frames, modules are added to substantially improve its ability to perform local processing. Figure 2 depicts an appropriately designed model [24] based on this deep learning entity for joint audio tagging and sound event detection. Diagram of conformer encoder model for sound recognition A full explanation of all components in this model can be found in the original work [17]. In what follows, we mainly focus on the surface-level differences between transformer and conformer encoders, while ignoring some less important and finer details. Firstly, the feedforward submodule in the transformer encoder is replaced by two such structures in the conformer variant. The associated residual connections are equipped with a halving operation, akin to the Macaron neural network proposed in [25]. Also, instead of a rectified linear unit (ReLU), the more complex Swish activation function [26] is employed. Secondly, the multi-head attention block explained in the previous section is exchanged for a version that also incorporates relative positional encodings [27]: By injecting these (learnable) embeddings, representing relative distances between feature vectors, the ability of this module to perform location-based (as opposed to content-based) processing is notably augmented.
Thirdly, and perhaps most importantly, a convolutional subnetwork is inserted into the architecture. This component is what allows the conformer encoder to perform local computations, in contrast to the transformer variant, which is only capable of dealing with global (and mostly content-based) dependencies. This extra module consists of the following sequence of operations: pointwise convolution coupled with a gated linear unit [28], depthwise convolution (with a kernel size of 7, as in [24]), batch normalization [29] with a momentum of 0.9, application of the Swish activation function [26], another pointwise convolution and finally, dropout [22] with a rate of 0.1. Conformer encoders were originally designed for automatic speech recognition but, as demonstrated, have also been employed for sound recognition [24]. This neural structure cannot deal with more than one series of inputs, while handling multiple input sequences is necessary for our purposes. Luckily, it can easily be expanded into a more appropriate encoder-decoder architecture by following the blueprint of the transformer model shown in Fig. 1. Simplified diagram of encoder-decoder conformer model for sound recognition In the rest of this work, when we mention the conformer system, we refer to the model of which a simplified diagram is drawn in Fig. 3. It is structurally similar to the architecture in Fig. 1, but instead of transformer-based components, it utilizes a conformer encoder and decoder. The latter building block is derived from the former by simply adding a multi-head relative cross-attention module (with corresponding layer normalization and residual connection) after the self-attention variant, analogously to the transformer. Multimodal multi-encoder learning framework In this section, we discuss the proposed multimodal multi-encoder learning framework. To this end, we first review the original middle fusion algorithm described in [15].
Afterwards, we explain the adjustments that have to be made to ensure the resulting approach is suited to tackle the problem at hand, i.e., sound recognition with partial visual assistance. Figure 4 shows a simplified diagram of the middle fusion multi-encoder learning framework. Compared to the base architectures discussed in Section 2.1, the cross-attention modules of all decoder blocks are duplicated a number of times, depending on the number of input sequences supplied to corresponding encoder structures. The vectors produced by these copies are interpolated in a linear fashion to obtain intermediate features which are used downstream in the model. Simplified diagram of multi-encoder framework In [15], this principle is applied in the context of automatic speech recognition: Features representing the spectral magnitude and phase components of the used auditory data are combined using this multi-encoder approach to enhance the performance of the decoder, which is charged with the task of transforming input characters into output token probabilities. Fixed interpolation weights are used for calculating the relevant convex sums, biased towards the (generally) more salient magnitude representations. The cited work also investigates other configurations, such as a late fusion version and a variant with tied encoder parameters, which can be used to limit memory complexity. Preliminary experiments have shown that these options do not provide additional benefits in the context of this project, and thus, they are not discussed further. As explained in Section 1, we want to build a system that can perform sound recognition using multimodal data, of which the auditory component is always at hand, but the visual information is only partially available. To this end, we adopt the framework depicted in Fig.
4, but in contrast to the original approach, we set the multi-encoder interpolation parameters dynamically, both during training and testing: When series of features, i.e., those related to vision, are missing, the weights associated with these sequences are set to 0. Naturally, the others are scaled in order for the total to remain equal to 1. In Section 3, we also explore a novel weighting scheme for the learning phase. We detail the performed experiments involving the proposed multi-encoder framework, elaborated upon in Section 2, meant to deal with the considered problem of missing values. In Section 3.1, the setup is discussed, while Section 3.2 analyzes the obtained results. In this section, we fully lay out the experimental setup. Firstly, the used data set and the features that are extracted from this collection are elaborated upon. Next, we report the postprocessing steps applied to convert the output probabilities of the models described in Section 2 into binary predictions, and we go into the metrics employed to gauge the performance of the examined systems. Finally, for completeness, we list all relevant training and testing hyperparameters. In this work, we use the data associated with task 4 [30] of the DCASE 2020 challenge [31]. This large-scale set contains recordings with a maximum length of 10 seconds. Each instance features a number of potentially overlapping auditory events belonging to the 10 possible environmental classes itemized in Table 1. The collection is split into multiple partitions. The number of samples in each subset is listed in Table 2. Table 1 Labeled sound categories in data set Details about the occurring auditory events are provided in different ways for the distinct collections mentioned in Table 2. The strongly supervised training, validation and evaluation partitions encompass full annotations of all occurring sounds, including on- and offsets.
The samples of the other subsets are either weakly supervised, i.e., only clip-level labels are available, or simply do not contain any relevant information at all. Table 2 Number of samples per partition in data set The recordings in the strongly supervised training set originate from Freesound, a collaborative database of sounds, and only contain auditory information. All other clips come from YouTube, a very well-known large-scale multimedia library. For most of these examples, visual information can also be extracted. However, as thoroughly explained in Section 1, this is not possible for all data points: On average, for 15.5% of those samples, the video can no longer be downloaded. Table 3 Hyperparameters of convolutional feature extractor In this project, we investigate multiple ways of preprocessing the auditory and visual streams of the considered data. In Section 3.2, the discussion on the results of the experiments includes an analysis of which features lead to improvements under the proposed framework for a number of evaluation scenarios. Spectral auditory features We first resample the audio streams to 16 kHz and apply peak amplitude normalization. Next, log mel magnitude spectrograms with 64 frequency bins are extracted using a window size of 1024 samples (corresponding to 64 ms) and a hop length of 313 samples (corresponding to 20 ms). For a clip of 10 seconds, this results in 512 frames. Lastly, per frequency bin standardization is performed, based on the statistics of the training data. Compared to the other (pretrained) embeddings described below, these features are fairly rudimentary. To create higher-level auditory vectors which can be supplied as inputs to the models described in Section 2, we use a feature extractor with an architecture similar to that of the baseline model [32] for task 4 [30] of the DCASE 2020 challenge [31].
It is a convolutional network which is made up of seven consecutive stacks, which each perform the following five operations: convolution, batch normalization [29] with a momentum of 0.9, application of the ReLU function, dropout [22] with a rate of 0.1 and average pooling. The hyperparameters of the convolutional layers are listed per block in Table 3. In Table 3, the first and second numbers of each tuple in the columns on kernel size and stride relate to time and frequency axes respectively. As can be inferred from this list, this convolutional feature extractor reduces the frequency dimension of the spectral map to one and has a total temporal pooling factor of 8: For a clip of 10 seconds, this results in the original series being transformed into a sequence of 64 vectors. This convolutional feature extractor is inserted into the models described in Section 2 and the complete architecture is trained in an end-to-end manner. OpenL3 visual features OpenL3 [33] is an embedding model designed to predict correspondence between auditory and visual streams, trained in a self-supervised way. It is pretrained on Audio Set [6]. To obtain pretrained visual features, still images are first sampled from the available visual streams at a rate of about 6.5 fps. The frames are fed into the video subnetwork of OpenL3 [33]. For a recording of 10 seconds, these steps lead to 64 512-dimensional vectors. Temporally coherent visual features Still images are first sampled from the visual streams at a rate of about 6.5 fps. Next, these frames are passed through the video embedding model described in [34], trained in a self-supervised manner using a loss optimizing temporal coherency. For a clip of 10 seconds, this procedure results in a series of 64 2048-dimensional vectors. VGG16 visual features Still images are sampled from the visual streams at a rate of about 6.5 fps. 
The resulting frames are fed into VGG16 [35], a convolutional model for image classification, pretrained on the ImageNet data set [36]. The 4096-dimensional outputs of the last feedforward layer of this neural network are used as feature sequences in this project. For a recording of 10 seconds, these steps lead to 64 vectors. Postprocessing To calculate clip-level metrics, the relevant probabilities are converted into binary decisions by employing class-wise thresholds, optimized on the validation partition of the employed data set. The search space for these hyperparameters is restricted to the linear span ranging from 0.1 to 0.9 in steps of 0.1. To obtain sound event detection scores, the frame-level probabilities are transformed via the following process: Firstly, they are binarized, and secondly, the decisions are passed through a median smoothing operation. The hyperparameters associated with these steps are (separately) optimized on the validation partition of the employed data set, per sound category as well as per used evaluation metric. The search space for the thresholds is limited to a linear span ranging from 0.1 to 0.9 in steps of 0.1. For the filter sizes, values from 1 to 31 are tested in increments of 2. Other postprocessing operations can be applied to further enhance the predictions of the considered systems. For example, in [24], model ensembling is employed in the form of score-level fusion. In this work, we choose not to take any further steps as the number of possibilities is seemingly unlimited. Also, the main goal of this project is not necessarily to achieve the perfectly optimized model, rather, we want to demonstrate the effectiveness of the proposed method. To evaluate the considered models for both audio tagging and sound event detection, we use a variety of metrics representing distinct scenarios, involving different requirements in terms of temporal localization. 
These scores are specifically chosen because of their prevalence in the field of sound recognition. Clip-based F1 score (CBF1) For audio tagging, we use the micro-averaged clip-based F1 score [37]. This metric was also used in the DCASE 2017 challenge [38]. Segment-based F1 score (SBF1) We use the micro-averaged segment-based F1 score based on slices of 1 s [37] to quantify the effectiveness of the predictions of the proposed models. Because of the relatively long slice length, this metric is suitable for investigating coarse-grained sound event detection performance. It was also utilized in the DCASE 2017 challenge [38]. Event-based F1 score (EBF1) We use the macro-averaged event-based F1 score with tolerances of 200 ms for onsets and 20% of the lengths of the audio events (up to a maximum of 200 ms) for offsets [37]. Because of the strict localization requirements, this metric is suitable for investigating fine-grained sound event detection performance. It was also utilized in the DCASE 2018 [39], 2019 [40] and 2020 [31] challenges. Polyphonic sound event detection scores (PSDS) We use two polyphonic sound event detection scores [41], representing distinct evaluation scenarios. In the rest of this work, they are referred to as PSDS1 and PSDS2. The former imposes strict requirements on the temporal localization accuracy, the latter is more lenient in this regard. The hyperparameters for these measures are summarized in Table 4. One of the default postprocessing steps described before is left out in this case: These scores are computed using 50 fixed operating points, in which thresholds linearly distributed from 0.01 to 0.99 (with a step size of 0.02) are used to convert probabilities into binary decisions. These metrics were also utilized in the DCASE 2021 [1] challenge. Hyperparameters In this section, we provide details on the used training and testing procedures for the sake of completeness. PyTorch [42] is utilized to implement all of the work. 
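The binarize-then-smooth postprocessing described in the previous section can be sketched as follows. The threshold and filter size below are hypothetical defaults; in the actual experiments, these hyperparameters are tuned per class (and per metric) on the validation partition.

```python
import numpy as np

def postprocess(frame_probs, threshold=0.5, filter_size=7):
    """Binarize frame-level probabilities, then apply median smoothing.

    frame_probs: (T,) array of per-frame probabilities for one sound class.
    threshold and filter_size (odd) are illustrative values only.
    """
    binary = (frame_probs > threshold).astype(int)
    pad = filter_size // 2
    padded = np.pad(binary, pad, mode='edge')  # edge padding at boundaries
    # median over a sliding window removes isolated spurious (in)activations
    return np.array([int(np.median(padded[t:t + filter_size]))
                     for t in range(len(binary))])
```

For instance, with a window of 3, a single-frame gap inside an otherwise active region is filled in, while a lone active frame surrounded by inactivity is suppressed.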
Table 4 PSDS hyperparameters Elaborate explanations of the selection processes for the encoder interpolation weights of the considered models are omitted from this section, as these procedures vary per experimental setting. Instead, these descriptions are left for the relevant parts in Section 3.2. As already outlined in Section 3.1, the data employed in this research project is heterogeneously annotated, and thus, combining all available samples into the learning procedure is challenging, especially when it comes to the unlabeled instances. To deal with this difficulty, mean teacher training [43] is performed. In this framework, two models called the student and the teacher are utilized. They share the same architecture, but their parameters are updated differently. The student system is trained regularly, i.e., a differentiable objective is minimized. However, the weights of the teacher are computed as the exponential moving average of the student parameters with a multiplicative decay factor of 0.999 per training iteration. The loss employed to train the student consists of four terms: The first two are clip-level and frame-level binary cross entropy functions, which are only computed for the weakly and strongly labeled clips respectively. The other components are mean-squared error consistency costs between the clip-level and frame-level output probabilities of the student and teacher models, which can be computed for all data, including the unannotated samples. The classification and consistency terms are summed with weights 1 and 2 respectively to obtain the final objective. Table 5 Results of uni- and bi-encoder attention-based models During training, data augmentation is also employed in the form of mixup [44], which comes down to creating extra learning examples (and associated labels) by linearly interpolating the original samples. We use this method with an application rate of 33%. 
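The two training ingredients just described, the mean teacher parameter update and mixup, can be sketched as below. This is a simplified illustration assuming parameters are plain NumPy arrays; the actual implementation operates on PyTorch tensors inside the training loop.

```python
import numpy as np

def ema_update(teacher, student, decay=0.999):
    # Mean teacher: the teacher weights track an exponential moving
    # average of the student weights, updated once per training iteration.
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Mixup: create an extra training example (and label) by linearly
    # interpolating two original samples with a random mixing ratio.
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

With shape parameters of 0.2, the beta distribution concentrates mass near 0 and 1, so most mixed examples stay close to one of the two originals.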
The mixing ratios are randomly sampled from a beta distribution with shape parameters equal to 0.2. Models are trained for 100 epochs. Per epoch, 250 batches of 128 samples are given to the networks. Each batch contains 32 strongly labeled, 32 weakly labeled and 64 unlabeled examples. Rectified Adam [45, 46] is employed to train the weights of the student systems. Learning rates start at 0.001 and decay multiplicatively with a factor of 0.1 per 10000 iterations. Metrics are calculated on probabilities produced by student models after the last training epoch. In this section, all experimental results are listed and analyzed. We report evaluation scores which have been averaged over 20 training runs (with independent initializations of all model parameters) to ensure reliability, as well as the associated standard deviations. Preliminary experiments have indicated that architectures only employing visual features underperform badly with regard to sound event detection. These outcomes are in line with findings divulged in prior research [5]. Clearly, on their own, these pretrained vectors are not able to properly perform temporal segmentation on the considered multimodal data. As a consequence of this observation, the following design choice has been made: All of the explored systems take spectral auditory maps as inputs to their decoders and to one of their encoders. In other words, in what follows, models without acoustic features are not considered. Uni-encoder attention-based models Table 5 contains the results obtained by baseline models which do not utilize any visual information at all and thus do not run into the examined missing data problem: Each of these systems uses auditory features as inputs to its decoder as well as its single encoder. In [24], transformer and conformer encoders have been used in a very similar way to tackle sound event detection on the same data set. 
The performance values reported in the cited work for such models that do not use any type of ensemble method are comparable to those in Table 5. The remaining small disparities can partially be attributed to architectural differences, since the referenced systems do not include decoder components, unlike the systems in this project. Interestingly, models using the transformer architecture as a base outperform those using conformer components in terms of most considered performance metrics, in contrast to what is reported in [24]. This trend will reappear in the results of systems exploiting visual information (or equivalently, using multiple encoders), which are discussed in the following sections. However, the differences are too small and/or inconsistent to infer conclusions on the superiority of one or the other. Bi-encoder attention-based models Table 5 also lists the scores obtained by models that use two encoders: one takes in spectral auditory maps, the other accepts a type of pretrained visual features. The interpolation weights associated with these two encoders are determined by maximizing the performance of the models on the validation partition of the data set at hand. Specifically, we choose the hyperparameters which lead to the highest event-based F1 scores, and empirically, we find that this also leads to near-optimal values in terms of the other considered evaluation metrics. For the encoder weight associated with acoustic information, we test the following options during training as well as inference: 0.25, 0.5, 0.75 and 0.875. When visuals are unavailable, this number is set to 1, as explained in Section 2.2.
Discussion on pretrained visual features A comparison between the results of the bi-encoder systems incorporating visual information and the scores obtained by the baseline models only utilizing acoustic inputs, both listed in Table 5, demonstrates that the proposed method can be useful, but the choice of pretrained vision-related vectors is crucial. Particularly, we observe that adding VGG16 embeddings does not globally lead to improvements, but only for clip- and segment-based F1 scores. However, the inclusion of OpenL3 and temporally coherent features does provide consistent and substantial performance boosts. These findings are in agreement with outcomes published in prior research. In [5], it is shown that adding VGG16 embeddings can lead to improvements for audio tagging and (partly) coarse-grained sound event detection, but when it comes to stricter segmentation, these vectors do not add any value. This can be explained by the fact that the system these embeddings are extracted from is designed for a problem without time-related aspects, i.e., image classification. This is not the case for OpenL3 and temporally coherent visual features, as the associated models are pretrained for tasks encompassing temporal facets, which explains the disparity discussed in the previous paragraph. Table 6 Results of bi-encoder attention-based models with random encoder interpolation weights during training Discussion on metrics Independent model initialization through random seeding of learnable parameters does not cause a lot of inconsistency with regard to the clip- and segment-based F1 scores, which measure audio tagging and coarse-grained sound event detection performance respectively. This allows us to conclude with certainty that systems employing the proposed method outperform the baseline when it comes to these metrics. 
For the event-based F1 measure, targeting comparatively strict temporal segmentation, the standard deviations across training runs listed in Table 5 are slightly larger, but still small enough to be able to make useful inferences. However, the variability is significantly greater (in relative terms) for PSDS1 and especially PSDS2, which means more caution should be exercised when interpreting those results.

Discussion on interpolation weights

We find that changing the encoder weights employed while training causes relatively limited fluctuations in terms of the considered metrics. However, modifying the inference hyperparameters can cause significant performance drops. This happens in particular when the interpolation weight for the acoustic stream is too low. Based on this observation, we present a novel way of setting these hyperparameters. While learning, we randomize this decision process: The interpolation weight associated with the acoustic input features is sampled from a uniform distribution between 0.25 and 1 per batch. It would not be logical to let this value go all the way to 0, as in that specific case, the model would have to rely on visual information only. For reasons explained in detail in Section 1, this is not a good idea, as the auditory stream is generally much more salient. For inference, the default procedure for determining these weights, involving optimization based on validation data, is retained. The results obtained when using this adapted method are presented in Table 6. The reported scores are very comparable to those in Table 5, for models that do not use randomized encoder interpolation weights during training. They are not much (or in some cases, any) better, but this altered learning procedure still has a significant benefit: The hyperparameter optimization process becomes less tedious and time-consuming, which is valuable from a practical point of view.
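The randomized training scheme described above amounts to drawing a fresh acoustic weight per batch. A minimal sketch, with the bounds taken from the text:

```python
import random

def sample_training_weight(low=0.25, high=1.0):
    """Sample the acoustic encoder weight once per batch, uniformly on
    [0.25, 1], as in the randomized training scheme. The lower bound of
    0.25 prevents the model from ever relying on visual input alone."""
    return random.uniform(low, high)
```

At inference time, this randomization is dropped and the weight is still chosen by validation-set optimization, as stated above.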
Table 7 Split results of uni- and bi-encoder attention-based models

Discussion on merit of multi-encoder learning

In this section, we examine the merit of the multi-encoder learning framework by further scrutinizing the performance boosts reported in Table 5. To this end, we first split the test data into two sets, using the availability of visual information as the partitioning criterion. We report the results obtained by the uni-encoder baseline as well as the proposed (non-randomized) bimodal models on these two disjoint collections. Additionally, we disclose performance scores produced by the latter systems on the evaluation data with visual assistance when the interpolation weight for the acoustic stream is forced to 1, and vision-related inputs are ignored. These outcomes are all given in Table 7. Standard deviations are not included to maintain clarity and conserve space. Even though the bi-encoder attention-based architectures are designed to utilize inputs from two modalities, when vision-related inputs are forcibly ignored by setting the associated linear interpolation weights to 0, these models still perform on par with the uni-encoder baseline. This trend appears when inspecting the scores obtained on both of the aforementioned evaluation data splits. The only exceptions to this rule are the systems with pretrained VGG16 embeddings, but this is not entirely unexpected behavior, since these vectors are inappropriate for the tasks at hand, as discussed before. This finding seems to indicate that, on the condition that the employed sequences of features are selected appropriately, the multi-encoder framework can be applied relatively safely, without fear of decreasing performance, regardless of the actual degree of accessibility of visual recordings.
Table 8 Results of tri-encoder attention-based models

When looking at the performance differences between the unimodal baseline and the bi-encoder models on the evaluation data that includes visual inputs, we find that there are consistently substantial improvements, once again with the sole exception being systems utilizing pretrained VGG16 features. This is obviously desirable, as the whole purpose of this research project is to be able to exploit multimodal inputs even when not all sources are always available. Unfortunately, these findings also uncover the limitations of the proposed multi-encoder framework: In this instance, representing the important situation of data originating from YouTube, vision-related content is inaccessible for about 15.5% of all samples. For this case, the gains our proposed method is able to achieve are certainly worthwhile. If the proportion of examples with visual assistance were to diminish further, the performance boosts would inevitably lessen, up to a point where it may no longer be worth the extra effort.

Tri-encoder attention-based models

Table 8 lists the scores obtained by models that use three encoders: One takes in spectral auditory maps, the two others accept OpenL3 and temporally coherent visual features respectively. Pretrained VGG16 embeddings are not investigated any further: As discussed at length in the previous sections, in the bi-encoder experimental configuration, these vectors have already been found to be unsuitable for the tasks at hand, particularly for fine-grained sound event detection. The interpolation hyperparameters are decided using procedures similar to those employed in the previously described experiments. As a first option, their values are chosen by optimizing event-based F1 scores on the validation partition of the considered data set. We test the following possibilities during training as well as inference for the auditory information weight: 0.25, 0.5, 0.75 and 0.875.
Alternatively, the learning weights linked to the acoustic inputs are sampled randomly from a uniform distribution ranging from 0.25 to 1. To limit the search space in both of the foregoing cases, we choose to split the remaining share equally between the encoders connected to the visual streams. When comparing results obtained by audio-only models (in Table 5) to those of bi-encoder systems also incorporating OpenL3 or temporally coherent visual embeddings (in Table 5 and Table 6), we mostly find consistent and substantial improvements. This has been discussed at length in the previous sections. When inspecting the scores produced by tri-encoder architectures utilizing all three sets of relevant features (Table 8), we observe additional performance increases. However, these boosts are far less pronounced. This makes sense: in contrast to the initial switch from unimodal to bi-encoder models, we are not integrating a new modality into the systems; we are simply using more sets of (visual) features. It is very likely that much of the information captured in one series of vision-related embeddings is also present in the other, leading to diminishing returns in terms of performance when combining them. However, this experiment does demonstrate the flexibility of the proposed framework and how it could be employed when more than two groups of complementary features are available.

We proposed a dynamic multi-encoder approach to deal with the problem of missing values in the context of multimodal sound recognition. This particular situation frequently occurs since many pertinent data sets stem from YouTube, which provides a noteworthy opportunity but also poses a serious challenge: Visual inputs can be taken advantage of to enhance audio tagging and sound event detection models, which traditionally only employ acoustic information. However, vision-related features may not be accessible for all data points due to a variety of availability issues.
We applied the aforementioned method to state-of-the-art attention-based neural network architectures. We performed experiments using the data set associated with task 4 of the DCASE 2020 challenge and verified that the presented framework can lead to noteworthy performance boosts in a selection of different evaluation settings. We thoroughly investigated the outcomes of said trials and analyzed some properties and limitations of the introduced technique. Among other things, we showed that improvements were contingent on a good choice of (pretrained) visual features to be used in conjunction with spectral auditory maps, and we demonstrated that the proposed method even holds up when all vision-related inputs are ignored. The proposed framework is naturally flexible, and consequently, there are some compelling possibilities for future research involving this principle. Firstly, further attention could be directed to the way encoder interpolation hyperparameters are set during training and inference: In this work, we started with fixed weights, optimized on held-out validation data. Afterwards, we showed that randomizing these values during training can lead to similar results and significantly less tuning. It could be interesting to investigate more complex schemes, such as data-dependent weighting. Secondly, other sets of features could be explored. Here, we stuck to utilizing pretrained visual features on top of rudimentary spectral auditory maps, but pretrained auditory features might be worthwhile as well. Lastly, this technique is in no way tied to sound recognition, and it could easily be applied to other research tasks which also encounter missing value problems.

The data set analyzed during the current study is available in the DCASE 2020 task 4 repository, http://dcase.community/challenge2020/task-sound-event-detection-and-separation-in-domestic-environments.
DCASE: Detection and Classification of Acoustic Scenes and Events
ReLU: Rectified linear unit
CBF1: Clip-based F1 score
SBF1: Segment-based F1 score
EBF1: Event-based F1 score
PSDS: Polyphonic sound event detection score
F. Font, A. Mesaros, D.P.W. Ellis, E. Fonseca, M. Fuentes, B. Elizalde, Proceedings of the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2021) (Universitat Pompeu Fabra, Spain, 2021) S. Parekh, S. Essid, A. Ozerov, N.Q.K. Duong, P. Pérez, G. Richard, Weakly supervised representation learning for audio-visual scene analysis. IEEE/ACM Trans. Audio Speech Lang. Process. 28, 416–428 (2019) W. Boes, H. Van hamme, in Proceedings of the 27th ACM International Conference on Multimedia. Audiovisual transformer architectures for large-scale classification and synchronization of weakly labeled audio events (ACM, Nice, France, 2019), pp. 1961–1969 Y. Yin, H. Shrivastava, Y. Zhang, Z. Liu, R.R. Shah, R. Zimmermann, in Proceedings of the AAAI Conference on Artificial Intelligence. Enhanced audio tagging via multi-to single-modal teacher-student mutual learning (AAAI, Palo Alto, CA, USA, 2021), pp. 10709–10717 W. Boes, H. Van hamme, in Proceedings of Interspeech 2021. Audiovisual transfer learning for audio tagging and sound event detection (ISCA, Brno, Czechia, 2021), pp. 2401–2405 J.F. Gemmeke, D.P.W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R.C. Moore, M. Plakal, M. Ritter, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Audio Set: An ontology and human-labeled dataset for audio events (IEEE, New Orleans, LA, USA, 2017), pp. 776–780 C.D. Kim, B. Kim, H. Lee, G. Kim, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). AudioCaps: Generating captions for audios in the wild (ACL, Minneapolis, MN, USA, 2019), pp. 119–132 E. Tlamelo, M. Thabiso, M. Dimane, S.
Thabo, M. Banyatsang, T. Oteng, A survey on missing data in machine learning. J. Big Data 8, 1–37 (2021) D. Ramachandram, G.W. Taylor, Deep multimodal learning: A survey on recent advances and trends. IEEE Signal Proc. Mag. 34, 96–108 (2017) T.D. Le, R. Beuran, Y. Tan, in 2018 10th International Conference on Knowledge and Systems Engineering. Comparison of the most influential missing data imputation algorithms for healthcare (IEEE, Ho Chi Minh City, Vietnam, 2018), pp. 247–251 B.O. Petrazzini, H. Naya, F. Lopez-Bello, G. Vazquez, L. Spangenberg, Evaluation of different approaches for missing data imputation on features associated to genomic data. BioData Min. 14, 1–13 (2021) Y. Lee, S.-W. Kim, S. Park, X. Xie, in Proceedings of the 2018 World Wide Web Conference. How to impute missing ratings? Claims, solution, and its application to collaborative filtering (International World Wide Web Conferences Steering Committee, Lyon, France, 2018), pp. 783–792 K.E. Kafoori, S.M. Ahadi, Robust recognition of noisy speech through partial imputation of missing data. Circ. Syst. Sig. Process. 37, 1625–1648 (2018) R.J. Little, Missing data assumptions. Ann. Rev. Stat. Appl. 8, 89–107 (2021) T. Lohrenz, Z. Li, T. Fingscheidt, in Proceedings of Interspeech 2021. Multi-encoder learning and stream fusion for transformer-based end-to-end automatic speech recognition (ISCA, Brno, Czechia, 2021), pp. 2846–2850 A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser, I. Polosukhin, in Advances in Neural Information Processing Systems. Attention is all you need (NeurIPS, Long Beach, CA, USA, 2017), pp. 5998–6008 A. Gulati, J. Qin, C.-C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu, R. Pang, in Proceedings of Interspeech 2020. Conformer: Convolution-augmented transformer for speech recognition (ISCA, Shanghai, China, 2020), pp. 5036–5040 R. Xiong, Y. Yang, D. He, K. Zheng, S. Zheng, C. Xing, H. Zhang, Y. Lan, L. Wang, T.
Liu, in International Conference on Machine Learning. On layer normalization in the transformer architecture (PMLR, Vienna, Austria, 2020), pp. 10524–10533 J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, in Proceedings of NAACL-HLT, BERT: Pre-training of deep bidirectional transformers for language understanding (ACL, Minneapolis, MN, USA, 2019) pp. 4171–4186 J.L. Ba, J.R. Kiros, G.E. Hinton, Layer normalization. arXiv preprint arXiv:1607.06450 (2016) K. He, X. Zhang, S. Ren, J. Sun, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Deep residual learning for image recognition (IEEE, Los Alamitos, CA, USA, 2016), pp. 770–778 N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014) J. Gehring, M. Auli, D. Grangier, D. Yarats, Y.N. Dauphin, in International Conference on Machine Learning. Convolutional sequence to sequence learning (PMLR, Sydney, Australia, 2017), pp. 1243–1252 K. Miyazaki, T. Komatsu, T. Hayashi, S. Watanabe, T. Toda, K. Takeda, in Proceedings of the 5th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020). Conformer-based sound event detection with semi-supervised learning and data augmentation (Zenodo, Tokyo, Japan, 2020), pp. 100–104 Y. Lu, Z. Li, D. He, Z. Sun, B. Dong, T. Qin, L. Wang, T.-Y. Liu, in ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations. Understanding and improving transformer from a multi-particle dynamic system point of view (OpenReview, Addis Ababa, Ethiopia, 2020) P. Ramachandran, B. Zoph, Q.V. Le, Searching for activation functions. arXiv preprint arXiv:1710.05941 (2017) Z. Dai, Z. Yang, Y. Yang, J.G. Carbonell, Q. Le, R. Salakhutdinov, in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Transformer-XL: Attentive language models beyond a fixed-length context (ACL, Florence, Italy, 2019), pp. 2978–2988 Y.N. Dauphin, A. Fan, M. Auli, D. Grangier, in International Conference on Machine Learning. Language modeling with gated convolutional networks (PMLR, Sydney, Australia, 2017), pp. 933–941 S. Ioffe, C. Szegedy, in International Conference on Machine Learning. Batch normalization: Accelerating deep network training by reducing internal covariate shift (PMLR, Lille, France, 2015), pp. 448–456 N. Turpault, R. Serizel, A. Parag Shah, J. Salamon, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019). Sound event detection in domestic environments with weakly labeled data and soundscape synthesis (New York University, New York, NY, USA, 2019), pp. 253–257 N. Ono, N. Harada, Y. Kawaguchi, A. Mesaros, K. Imoto, Y. Koizumi, T. Komatsu, Proceedings of the 5th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020) (Zenodo, Tokyo, Japan, 2020) N. Turpault, R. Serizel, in Proceedings of the 5th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020). Training sound event detection on a heterogeneous dataset (Zenodo, Tokyo, Japan, 2020), pp. 200–204 J. Cramer, H.-H. Wu, J. Salamon, J.P. Bello, in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Look, listen, and learn more: Design choices for deep audio embeddings (IEEE, Brighton, UK, 2019), pp. 3852–3856 J. Knights, B. Harwood, D. Ward, A. Vanderkop, O. Mackenzie-Ross, P. Moghadam, in 2020 25th International Conference on Pattern Recognition (ICPR). Temporally coherent embeddings for self-supervised video representation learning (IEEE, Milan, Italy, 2021), pp. 8914–8921 K. Simonyan, A. Zisserman, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Very deep convolutional networks for large-scale image recognition (OpenReview, San Diego, CA, USA, 2015) O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A.C. Berg, L. Fei-Fei, ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. (IJCV) 115, 211–252 (2015) A. Mesaros, T. Heittola, T. Virtanen, Metrics for polyphonic sound event detection. Appl. Sci. 6 (2016), p. 162 T. Virtanen, A. Mesaros, T. Heittola, A. Diment, E. Vincent, E. Benetos, B.M. Elizalde, Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017) (Tampere University of Technology, Germany, 2017) M.D. Plumbley, C. Kroos, J.P. Bello, G. Richard, D.P.W. Ellis, A. Mesaros, Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE2018) (Tampere University of Technology, United Kingdom, 2018) M. Mandel, J. Salamon, D.P.W. Ellis, Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019) (New York, United States of America, 2019) Ç. Bilen, G. Ferroni, F. Tuveri, J. Azcarreta, S. Krstulović, in 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). A framework for the robust evaluation of sound event detection (IEEE, Barcelona, Spain, 2020), pp. 61–65 A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, in Advances in Neural Information Processing Systems. PyTorch: An imperative style, high-performance deep learning library (NeurIPS, Vancouver, Canada, 2019), pp. 8026–8037 A. Tarvainen, H. Valpola, in Advances in Neural Information Processing Systems. 
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results (NeurIPS, Long Beach, CA, 2017), pp. 1195–1204 H. Zhang, M. Cisse, Y.N. Dauphin, D. Lopez-Paz, in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30-May 3, 2018, Conference Track Proceedings. mixup: Beyond empirical risk minimization (PMLR, Vancouver, Canada, 2018) D.P. Kingma, J. Ba, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Adam: A method for stochastic optimization (PMLR, San Diego, CA, USA, 2015) L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, J. Han, in 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, Conference Track Proceedings. On the variance of the adaptive learning rate and beyond (PMLR, New Orleans, LA, USA, 2019)
This work was supported by a PhD Fellowship of Research Foundation Flanders (FWO-Vlaanderen) and the Flemish Government under "Onderzoeksprogramma AI Vlaanderen".
ESAT, KU Leuven, Leuven, Belgium: Wim Boes & Hugo Van hamme
Wim Boes conducted the research and performed the experiments. Hugo Van hamme guided and supervised the project. All authors read and approved the final manuscript.
Correspondence to Wim Boes.
Boes, W., Van hamme, H. Multi-encoder attention-based architectures for sound recognition with partial visual assistance. J AUDIO SPEECH MUSIC PROC. 2022, 25 (2022). https://doi.org/10.1186/s13636-022-00252-9
Keywords: Multi-encoder, Conformer, Audio tagging, Sound event detection, Multimodal data, Audiovisual data
Recent advances in computational sound scene analysis
Real-time embedded object detection and tracking system in Zynq SoC
Qingbo Ji, Chong Dai, Changbo Hou (ORCID: orcid.org/0000-0002-6421-3481) & Xun Li
With the increasing application of computer vision technology in autonomous driving, robots, and other mobile devices, more and more attention has been paid to the implementation of target detection and tracking algorithms on embedded platforms. The real-time performance and robustness of algorithms are two hot research topics and challenges in this field. In order to solve the problems of poor real-time tracking performance of embedded systems using convolutional neural networks and the low robustness of tracking algorithms in complex scenes, this paper proposes a fast and accurate real-time video detection and tracking algorithm suitable for embedded systems. The algorithm combines the single-shot multibox detection object detection model from deep convolutional networks with the kernel correlation filters tracking algorithm; moreover, it accelerates the single-shot multibox detection model using field-programmable gate arrays, which satisfies the real-time requirement of the algorithm on the embedded platform. To solve the problem of model contamination after the kernel correlation filters algorithm fails to track in complex scenes, an improved validity detection mechanism for tracking results is proposed, which solves the problem that the traditional kernel correlation filters algorithm cannot track robustly over a long time. In order to solve the problem that the missed rate of the single-shot multibox detection model is high under conditions of motion blur or illumination variation, a strategy to reduce the missed rate is proposed that effectively reduces missed detections.
The experimental results on the embedded platform show that the algorithm can achieve real-time tracking of the object in the video and can automatically reposition the object to continue tracking after the object tracking fails. Object tracking has always been a focus of research in the field of computer vision. With the in-depth study of visual tracking algorithms by researchers, their scientific and theoretical basis has continued to improve, which has greatly promoted the development of surveillance systems, perceptual user interfaces, intelligent robotics, vehicle navigation, and intelligent transportation systems [1–3]. Two important factors to consider when designing an embedded tracking system are the real-time performance of the algorithms and their robustness [4]. Due to the complexity of the scenes, object tracking algorithms still face various challenges [5]. Improving the speed and robustness of tracking algorithms has been the focus of many scholars. Henriques et al. [6] proposed the circulant structure of tracking-by-detection with kernels (CSK) algorithm, which applies cyclic matrix theory and the kernel method to correlation filter tracking. The CSK algorithm greatly improves the operation speed of the tracking algorithm, but its robustness is poor because it uses only grayscale features. Henriques et al. [7] proposed a kernel correlation filter (KCF) tracking algorithm based on multi-channel histograms of oriented gradients. The algorithm is an improvement on CSK, and its robustness is greatly improved. In order to reduce cumulative error in the tracking process, Hare et al. [8] proposed an adaptive tracking framework based on structured output prediction to improve the robustness of the algorithm. However, the algorithm's object representation does not combine multiple features, which leads to tracking failure in more complex scenes. The Staple algorithm proposed by Bertinetto et al.
[9] combines the discriminative scale space tracker with color histogram tracking, which improves the adaptability of the algorithm to object deformation, but it does not perform well in scenes with illumination variations. Although the real-time performance of the methods described above is good, their artificially designed features often fail to fully characterize the essential attributes of the object, which means the algorithms perform well only in specific scenes, such as occlusion, illumination variation, or motion blur, and exhibit poor tracking performance in complex environments [10]. Recent years have seen improvement in the ability of deep learning methods to provide accurate recognition and prediction [11–14]. Object tracking technology has made a breakthrough with the latest advances in deep learning [15]. Li et al. [16] applied online convolutional neural networks to the task of object tracking. A small-scale convolutional neural network was designed for training and updating object samples stored online. However, this method trains the network online, which means the network cannot be fully trained, resulting in low detection and tracking accuracy. Wang et al. [17] studied the characteristics of the convolutional neural network model. By analyzing the image features output from each layer of the network model, a feature selection network is constructed for matching and tracking. However, due to the large scale of the network model, it is impossible to achieve real-time tracking on an embedded platform. Gao et al. [18] proposed an update-pacing framework based on an ensemble post-processing strategy to mitigate the model drift of discriminative trackers. The framework initializes a set of trackers that update the model at different intervals and selects the tracker with the smallest deviation score as the robust tracker for later tracking.
However, the MTS framework does not perform so well in real-time tracking when the number of trackers increases. With the increasing application of computer vision technology in driverless vehicles, robots, and other mobile devices, it is urgent to design a hardware acceleration method with excellent performance on embedded devices. Because the target detection algorithm needs to complete a large number of complex mathematical calculations, it puts forward higher requirements for the performance of the embedded hardware, the algorithm selection, and algorithm-based improvements. At present, convolutional neural networks deployed in embedded systems usually quantize the weights or activation values, that is, convert the data from 32-bit floating-point to low-bit integer types, as in binary neural networks (BNN), ternary weight networks (TWN), and XNOR-Net. However, current quantization methods still have shortcomings in the trade-off between accuracy and computational efficiency. Many quantization methods compress the network to different degrees and save storage resources, but they cannot effectively improve computational efficiency on the hardware platform. In 2017, Xilinx proposed quantizing the weights of convolutional neural networks from 32-bit to fixed-point 8-bit, adopted a software-hardware co-design method, and realized hardware acceleration of the model on an FPGA, which met the real-time requirements of convolutional neural networks in embedded systems. The main research content of this paper is the implementation of target detection and tracking in an embedded hardware and software system. The task of target detection and tracking requires the algorithm to be both real-time and robust.
In order to solve the problems of poor real-time tracking performance of convolutional neural networks and the low robustness of tracking algorithms in complex scenes, a fast and accurate real-time video detection and tracking algorithm for embedded systems is proposed. It is a tracking algorithm with object detection, based on the deep convolutional neural network single-shot multibox detector (SSD) [19] model and the kernelized correlation filters (KCF) object tracking algorithm. In order to improve the real-time performance of the algorithm in the embedded system, the SSD model is quantized and compressed in the Xilinx DNNDK (Deep Neural Network Development Kit) development environment, and the compressed model is deployed to the embedded system using a hardware-software co-design method. The main work of this paper is as follows: (1) To achieve higher robustness in complex scenes, a deep learning method is applied to object detection. Due to the high detection accuracy of the SSD model, that model is used to locate the object. It is important to mention that we are not proposing a new or improved version of SSD, but rather a method for the hardware-software co-design of embedded systems based on System on Chip (SoC). (2) To achieve higher speed, KCF is applied to object tracking. In complex scenes, such as fast movement, camera shake, and occlusion, the KCF algorithm tends to fail. After a failure, the KCF model is updated with wrong object samples, which contaminates the model, causing it to be unable to continue tracking the object [20]. This paper proposes a validity detection mechanism for tracking results to judge whether tracking has failed or not, so as to decide whether to update the model. (3) Since the missed rate of the SSD model in scenes with motion blur and illumination variation is high, this paper introduces a strategy to reduce the missed rate.
When missed detection occurs in the process of object detection, the object position in the current frame is predicted according to the positions of the object in the previous two frames, so as to reduce the missed rate. (4) In order to improve the real-time performance of the algorithm in the embedded hardware and software system, the SSD model is quantized as an 8-bit fixed-point model, and the algorithm is partitioned via hardware-software co-design, with parts of the tasks completed by the ARM processor and the FPGA fabric, respectively. In this way, the advantages of the ARM and the FPGA are fully exploited, and real-time performance is achieved without loss of accuracy. The rest of this paper is organized as follows. Section 2 introduces the real-time detection-tracking algorithm for embedded systems, including a validity detection mechanism for tracking results and the strategy to reduce the missed rate. In Section 3, the method for hardware and software co-design of embedded systems based on SoC is described. Section 4 compares the results with other representative methods. Finally, the conclusion is given in Section 5.

Proposed method

In this section, we will introduce our algorithm for real-time object detection and tracking in embedded systems. In order to achieve adequate real-time performance, the algorithm obtains the object box information from the SSD model only on key frames. The object box information includes the location and size of the object. The KCF target tracking algorithm separates target and background through a discriminative framework, so as to achieve the goal of target tracking. The KCF model is trained on samples obtained by cyclically shifting the region inside the object box.
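The cyclic-shift sample generation mentioned above can be illustrated explicitly. Note that in the actual KCF formulation all shifts are handled implicitly through circulant-matrix and FFT machinery rather than materialized; the function below is only a didactic sketch, and its name and the `max_shifts` parameter are our own.

```python
import numpy as np

def cyclic_shift_samples(patch, max_shifts=4):
    """Generate training samples by cyclically shifting the grayscale
    image patch inside the object box. Each shifted copy serves as one
    (virtual) training sample for the KCF classifier.

    patch: 2-D array (the object region).
    Returns a list of cyclically shifted copies of the patch.
    """
    h, w = patch.shape
    samples = []
    for dy in range(0, h, max(1, h // max_shifts)):
        for dx in range(0, w, max(1, w // max_shifts)):
            samples.append(np.roll(patch, shift=(dy, dx), axis=(0, 1)))
    return samples
```

The circulant structure of this sample set is exactly what lets KCF train and evaluate the filter in the Fourier domain at low cost.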
In order to avoid contamination of the KCF model caused by tracking failure, this paper introduces a validity detection mechanism for tracking results to evaluate whether tracking has failed, and then either updates the model or retrains it from the SSD object detection results. A strategy is also introduced to reduce the missed rate of the SSD model in motion-blurred and illumination-variation scenes. The overall flow of the algorithm is shown in Fig. 1. The first step is to run either the SSD object detection algorithm or the KCF object tracking algorithm on frame i (image Ii): $$ S(I_{i})= \left\{\begin{array}{ll} SSD(I_{i}),&\ \text{\(i \bmod N = 0\) or \(fr = 1\)}\\ KCF(I_{i}),&\ \text{otherwise} \end{array}\right. $$

Flowchart of the proposed method

where S(Ii) denotes the detection or tracking method applied to Ii, SSD(Ii) is the SSD object detection method, and KCF(Ii) is the KCF tracking method. N is a constant, set to 20 in this paper. fr is a flag set to 1 when the validity detection mechanism reports a tracking failure. The detection or tracking result can be expressed as: $$ \left\{\begin{array}{ll} L_{s}{\left(l_{i},c_{i},r_{i},n_{i}\right)}=S(I_{i}),&\ \text{\(S(I_{i}) = SSD(I_{i})\) or \(F\left(r_{i},r_{i-1}\right) = 0\)}\\ L_{K}(r_{i})=S(I_{i}),&\ \text{otherwise} \end{array}\right. $$ where Ls(li,ci,ri,ni) is the result of SSD object detection, li is the object category, ci is the confidence of the category, ri is the object box of the detection result, and ni is the number of detected objects. F(ri,ri−1) is the result of the validity detection mechanism of tracking results; its calculation is given in Section 2.1. LK(ri) is the result of KCF tracking. If ni is 0, that is, no object is detected, the strategy of reducing missed detections is applied to reduce the missed rate; its calculation is given in Section 2.2.
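The frame-scheduling rule above (SSD on every Nth frame or after a validity failure, KCF otherwise) can be sketched as a small dispatch function. This is a minimal sketch of the rule, not the authors' code; the string return values stand in for the actual detector/tracker calls.

```python
def schedule(frame_idx, fr, N=20):
    """Choose the method for frame i per the paper's rule:
    run SSD when i mod N == 0 or the tracking-validity flag fr == 1,
    otherwise run KCF. N = 20 as in the paper."""
    return "SSD" if (frame_idx % N == 0 or fr == 1) else "KCF"
```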
Otherwise, based on the image block contained in ri, the samples needed for KCF training are obtained by cyclic shifting, so as to train the initial object position model for subsequent tracking.

Validity detection mechanism of tracking results

The KCF tracking algorithm in this paper is updated by linear interpolation, as shown in Equation 3 [7]: $$ \left\{\begin{array}{l} \alpha_{t} = (1 - \eta) \times {\alpha_{t - 1}} + \eta \times {{\alpha_{t}}'}\\ {x_{t}} = (1 - \eta) \times {x_{t - 1}} + \eta \times {{x_{t}}'} \end{array}\right. $$ where η is an interpolation coefficient that characterizes the learning ability of the model on new image frames, αt is the classifier model, and xt is the object appearance template. The KCF algorithm does not consider whether the prediction result of the current frame is suitable for updating the model. When the tracking result deviates from the real object because of occlusion, motion blur, illumination variation, or other problems, Equation 3 incorporates wrong object information into the model, gradually contaminating it and eventually causing subsequent tracking tasks to fail. To avoid inaccurate tracking caused by model contamination, tracking failure must be detected promptly. During tracking, the difference between the object information of adjacent frames can be expressed by the correlation of the object areas. When tracking succeeds, the difference between the object regions of adjacent frames is very small and the correlation is very high. When tracking fails, the object regions of adjacent frames change greatly, and the correlation changes significantly as well. Therefore, this paper uses the correlation between frame objects to judge whether tracking fails.
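The linear-interpolation update of Equation 3 can be sketched directly. This is a minimal sketch; the learning rate η = 0.02 is a typical KCF default and is an assumption, as the text does not give its value.

```python
def update_model(alpha_prev, x_prev, alpha_new, x_new, eta=0.02):
    """Linear-interpolation update of the KCF classifier model alpha and
    appearance template x (Equation 3). Works on scalars or NumPy arrays.
    eta = 0.02 is an assumed typical value, not stated in the paper."""
    alpha = (1 - eta) * alpha_prev + eta * alpha_new
    x = (1 - eta) * x_prev + eta * x_new
    return alpha, x
```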
Considering that this algorithm targets embedded systems, to improve real-time performance we use only low-frequency information to calculate the correlation. The information in an image comprises high-frequency and low-frequency components: high-frequency components describe specific details, while low-frequency components describe large-scale information. Figure 2a is a frame randomly selected from the BlurBody video sequence in the OTB-100 (Object Tracking Benchmark) [5] dataset, and Fig. 2b shows the matrix of discrete cosine transform coefficients of the image. As Fig. 2b shows, the image energy in natural scenes is concentrated in the low-frequency region. In addition, conditions such as camera shake and fast object motion may cause motion blur, which can leave the high-frequency information insufficient. Therefore, high-frequency information is not reliable for judging the correlation of the object area.

Video frame and its matrix of discrete cosine transform coefficients. a Frame 12 of the BlurBody video sequence. b Matrix of discrete cosine transform coefficients

In this paper, a perceptual hash algorithm [21] is adopted to quickly calculate the hash distance between the object areas of the current frame and the previous frame. This process uses only low-frequency information. The hash distance is the basis for judging whether tracking fails, as shown in Equation 4. $$ F\left(r_{i},r_{i-1}\right)= \left\{\begin{array}{ll} 1&\ {pd_{i,i-1} \leq H_{{th}}}\\ 0&\ {pd_{i,i-1} > H_{{th}}} \end{array}\right. $$ where F(ri,ri−1) indicates whether frame i fails to track, as determined from the object area of frame i−1 in the video sequence, with values of 1 and 0 representing tracking success and tracking failure, respectively; pdi,i−1 is the hash distance between the object areas of frames i and i−1; and Hth is the hash distance threshold.
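The perceptual-hash distance used in Equation 4 can be sketched as follows: resize the object patch, take a 2-D DCT, keep only the top-left low-frequency block, binarize against its median, and compare two hashes by Hamming distance. This is a sketch of the perceptual-hash idea in [21]; the resize method, hash size, and DCT size here are assumptions, not the paper's exact parameters.

```python
import numpy as np

def dct2(a):
    """Orthonormal 2-D DCT-II of a square array, built from an explicit
    basis matrix to avoid a SciPy dependency."""
    n = a.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ a @ C.T

def phash(patch, hash_size=8, dct_size=32):
    """Perceptual hash of a grayscale object patch: crude nearest-neighbour
    resize to dct_size x dct_size, 2-D DCT, keep the top-left
    hash_size x hash_size low-frequency block, threshold at its median."""
    patch = np.asarray(patch, dtype=float)
    h, w = patch.shape
    ys = np.arange(dct_size) * h // dct_size
    xs = np.arange(dct_size) * w // dct_size
    small = patch[np.ix_(ys, xs)]
    low = dct2(small)[:hash_size, :hash_size]
    return (low > np.median(low)).flatten()

def hash_distance(h1, h2):
    """Hamming distance pd between two hashes; tracking is judged valid
    when pd <= H_th (H_th = 15 in the paper)."""
    return int(np.count_nonzero(h1 != h2))
```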
Taking the BlurBody video sequence in the OTB-100 dataset as the test object, the hash distance pdi,i−1 between the real object area of each frame and that of the previous frame is calculated, as shown in Fig. 3.

Hash distance between target regions of adjacent frames in the BlurBody video sequence

As Fig. 3 shows, the hash distance of the object area is usually less than 15; video frames with pdi,i−1 greater than 15 often exhibit obvious blurring and camera shake, and in those frames the tracking results of the KCF algorithm deviate significantly. Figure 4 shows the BlurBody video sequence tested with the KCF algorithm. The tracking results of frames 43, 107, and 160 are compared with the real object positions; the hash distances pd43,42, pd107,106, and pd160,159 are 9, 22, and 15, respectively. The hash distance of frame 43 is low, and its tracking result is accurate; the hash distance of frame 107 is high, and its tracking result has clearly deviated from the true object position. Thus, the hash distance pdi,i−1 reflects the validity of the tracking result well.

Tracking results compared with real positions. a Comparison result of frame 43. b Comparison result of frame 107. c Comparison result of frame 160

Strategy to reduce missed rate

Images in motion-blurred and dark scenes contain less appearance information [22]. In addition, the SSD model detects each frame independently, without considering the correlation between adjacent frames, so the missed rate is high in such scenes. In this paper, image enhancement is used to recover more image detail, and an improved KCF algorithm then tracks the object in order to reduce the missed rate. We face the situation in which the SSD model cannot detect the object when the image is blurred or dark, as shown in Fig. 5.
The essence of image blurring or darkening is that the image has undergone an averaging or integral operation, so the image can be inversely processed to highlight its details. In this paper, the Laplacian differential operator is used to sharpen the image and recover more detail.

Examples of complex scenes. a Motion blur. b Light tends to be dark in the case of illumination variation

The enhanced image is tracked by an improved KCF algorithm with color features. In the KCF tracking algorithm, the object feature information is described by histograms of oriented gradients [23]. However, in images with blur or illumination variation, the edge information of the object is often not obvious. This paper additionally extracts object information using the Lab color feature; the strong expressive ability of the Lab color space allows a better description of the object's appearance. In the KCF tracking algorithm, when multi-channel features of an image are extracted as input, assume the feature vector over channels is x=[x1,x2,⋯xC]; the Gaussian kernel output formula from reference [7]: $$ {\mathbf{k}^{xx^{\prime}}} = \exp \left\{ { - \frac{1}{{{\sigma^{2}}}}\left\{ {{{\left\| \mathbf{x} \right\|}^{2}} + {{\left\| {{\mathbf{x^{\prime}}}} \right\|}^{2}} - 2{{\mathcal{F}}^{- 1}}\left[ {F\left(\mathbf{x} \right) \odot {F^{\ast}}\left({{\mathbf{x^{\prime}}}} \right)} \right]} \right\}} \right\} $$ could be rewritten as: $$ {\mathbf{k}^{xx^{\prime}}} = \exp \left\{ { - \frac{1}{{{\sigma^{2}}}}\left[ {{{\left\| \mathbf{x} \right\|}^{2}} + {{\left\| {{\mathbf{x^{\prime}}}} \right\|}^{2}} - 2{{\mathcal{F}}^{- 1}}\left({\sum\limits_{c = 1}^{C} {{F_{C}}}} \right)} \right]} \right\} $$ where FC=F(xC)⊙F∗(x′C). Based on Equation 6, the object is described by the 31-channel histogram of oriented gradients feature together with the Lab color feature.
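The two computational steps just described, Laplacian sharpening and the multi-channel Gaussian kernel of Equation 6, can be sketched as follows. This is a minimal sketch: the 4-neighbour Laplacian kernel and clipping range are assumptions (the paper only names the Laplacian operator), and the normalization by the number of elements follows the reference KCF formulation in [7].

```python
import numpy as np

def laplacian_sharpen(img):
    """Sharpen a grayscale image with the 4-neighbour Laplacian:
    out = img - laplacian(img), clipped to the 8-bit range."""
    img = np.asarray(img, dtype=float)
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return np.clip(img - lap, 0, 255)

def gaussian_kernel_multichannel(x, xp, sigma=0.5):
    """Multi-channel Gaussian kernel correlation (Equation 6): the
    per-channel frequency-domain products F(x_c) * conj(F(x'_c)) are
    summed over the channel axis before a single inverse FFT.
    x, xp have shape (H, W, C)."""
    Fx = np.fft.fft2(x, axes=(0, 1))
    Fxp = np.fft.fft2(xp, axes=(0, 1))
    corr = np.fft.ifft2(np.sum(Fx * np.conj(Fxp), axis=2)).real
    d = np.sum(x ** 2) + np.sum(xp ** 2) - 2 * corr
    return np.exp(-np.maximum(d, 0) / (sigma ** 2 * x.size))
```

Summing the per-channel products before the inverse FFT is what lets extra channels (here, the Lab color channels alongside the 31 HOG channels) be added at almost no cost: only one inverse transform is needed regardless of C.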
In the strategy to reduce the missed rate, Laplacian sharpening is first applied to the previous two frames. Then, a KCF tracking model with the Lab color feature is trained on the object in the sharpened images. Next, the object position in the current frame is predicted by the trained model. Finally, the tracking result is checked by the method described in Section 2.1. If tracking succeeds, the predicted object is returned as the result; otherwise, the object in the next frame is again detected by the SSD model. The algorithm flow is shown in Fig. 6.

Flow chart of the strategy to reduce the missed rate

To verify the feasibility of the algorithm in this section, two motion-blurred video sequences, BlurBody and BlurOwl, and two illumination-varying video sequences, Human9 and Singer2, were selected from the OTB-100 dataset for the following comparison experiments: Experiment 1 The objects in the four video sequences are tracked by the unimproved KCF algorithm. Experiment 2 Clear frame sequences, or frame sequences with no significant illumination variation, are tracked by the unimproved KCF algorithm; only the frame sequences with motion blur or illumination variation are tracked by the algorithm described in this section. The tracking results of Experiments 1 and 2 were evaluated by two indices: precision rate (PR) and success rate (SR), shown in Fig. 7. As the figure shows, for video sequences with motion blur and illumination variation, the improved KCF tracking algorithm achieves significantly higher PR and SR than the unimproved algorithm.

Test results of comparative experiments. a Precision plot. b Success plot

Hardware and software co-design

The algorithm in this paper is implemented on a Zynq UltraScale+ MPSoC (multiprocessor system-on-chip) [24, 25].
The real-time object detection and tracking algorithm based on the SSD model and KCF tracking is implemented through embedded hardware-software cooperation: the algorithm modules with simple operations, many conditional statements, and pointer operations are handled by the processing system (PS); the parts that dominate the algorithm's speed and have a high degree of parallelism are implemented in programmable logic (PL), which consists of the FPGA fabric. The hardware-software partition of the system is shown in Fig. 8. The convolution and pooling layers of the SSD model are implemented in hardware in the PL, while the softmax layer is implemented in the PS because it involves floating-point operations. Other functions are also implemented in the PS, such as non-maximum suppression, mapping the operation results onto the image, the KCF tracking algorithm, the validity detection mechanism of tracking results, and the strategy to reduce the missed rate.

Software and hardware partition of the system

The computing capability and memory bandwidth of embedded systems are limited, and the weight parameters of deep neural network models often contain much redundancy. When implemented on a system on chip, this leads to bottlenecks in computing and storage resources as well as high power consumption. In this paper, we use DNNDK (Deep Neural Network Development Kit) [26] to compress the 32-bit floating-point weights into 8-bit fixed-point weights, and then compile them into an executable instruction stream. In the embedded system, the PS fetches the instructions from off-chip memory and transmits them to the on-chip memory of the PL through the bus. The instruction scheduler of the PL fetches the instructions to control the operation of the computing engine [27, 28], as shown in Fig. 9. On-chip memory is used to cache input and output data as well as temporary data during operation, so as to achieve high throughput.
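The effect of the 8-bit weight compression described above can be illustrated with a simplified symmetric per-tensor quantizer. This is only a sketch of the idea: the actual DNNDK quantizer is calibration-based and operates per layer, so the scheme below is an assumption, not the tool's algorithm.

```python
import numpy as np

def quantize_int8(w):
    """Simplified symmetric 8-bit quantization of floating-point weights:
    map the largest magnitude to 127 and round. Returns the int8 tensor
    and the scale needed to dequantize."""
    max_abs = np.max(np.abs(w))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(float) * scale
```

The maximum reconstruction error of this scheme is half a quantization step (scale/2), which is why 8-bit fixed-point inference can preserve accuracy while cutting weight storage to a quarter of the 32-bit footprint.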
The computing engine adopts a deep pipeline design, which makes full use of the parallel processing capability of the FPGA and thus improves the computing speed. The processing elements of the convolutional computing engine make full use of the fine-grained resources in the PL, such as multipliers and adders, which makes it possible to efficiently complete the computations of the convolutional neural network.

Hardware architecture of software and hardware co-design

In this paper, the tasks of reading video frames, running the detection and tracking algorithms, and displaying video frames are executed by the ARM cores of the PS. In a single-threaded mode, the image would be read first, then the object detected and tracked, and finally the image displayed; this leaves the utilization of the CPU and FPGA relatively low and hurts real-time performance. Reading a video frame is mainly file I/O; the detection and tracking algorithm is mainly CPU and FPGA computation; and displaying the video is mainly Ethernet transmission (the image is displayed after being transmitted to a computer through the X11 protocol). These three steps occupy different system resources, so executing them concurrently with multiple threads makes full use of CPU resources and significantly improves efficiency, as shown in Fig. 10.

Multithread concurrent execution of PS

Overall performance comparison of algorithms

In this section, we compare the object tracking performance of the proposed algorithm with four other algorithms known for good real-time performance and robustness: KCF, Struck, CSK, and Staple. To evaluate the performance of the proposed algorithm, the tracking algorithms, the SSD object detection model, and the proposed algorithm itself are tested on the OTB-100 dataset.
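Before turning to the results, the PS-side three-stage pipeline of Fig. 10 (read, detect/track, display) can be sketched with queues decoupling the stages. This is a minimal sketch under simplified assumptions: one worker thread per stage, bounded queues for back-pressure, and a `None` sentinel for end-of-stream; the real system runs on the ARM cores with the FPGA behind the processing stage.

```python
import queue
import threading

def run_pipeline(frames, detect_track):
    """Three-stage pipeline: reader -> detect/track worker -> display.
    detect_track is the per-frame processing function; 'display' here
    just collects the results in order."""
    q_in, q_out, shown = queue.Queue(4), queue.Queue(4), []

    def reader():
        for f in frames:
            q_in.put(f)
        q_in.put(None)                      # end-of-stream sentinel

    def worker():
        while (f := q_in.get()) is not None:
            q_out.put(detect_track(f))
        q_out.put(None)

    def display():
        while (r := q_out.get()) is not None:
            shown.append(r)

    threads = [threading.Thread(target=t) for t in (reader, worker, display)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shown
```

With a single processing worker and FIFO queues, frame order is preserved while the three stages overlap: the reader fills the input queue during computation, and display overlaps with the next frame's processing, which is exactly the utilization gain the paper describes.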
To ensure the objectivity and fairness of the experimental results, the SSD model in this algorithm is trained on the open datasets VOC2007 [29], VOC2012 [30], and Wider Face [31]. Since these three training datasets differ greatly from the object categories of the OTB-100 test dataset, 14 video sequences are tested: BlurBody, CarScale, Couple, Girl, Gym, Human2, Human6, Human7, Human8, Human9, Liquor, Man, Trellis, and Woman. We chose the one-pass evaluation method [5] to test the KCF, Struck, CSK, and Staple tracking algorithms, in which the first-frame object position is set manually. The SSD object detection model and the algorithm in this paper are not given the initial object position; the object position is detected automatically by the model. Table 1 shows the comparison of these algorithms, measured by PR, SR, and tracking speed (in frames per second).

Table 1 The contrast of tracking accuracy and speed result

The precision plot and success plot are drawn to demonstrate the feasibility and effectiveness of the proposed algorithm, as shown in Fig. 11.

One pass evaluation. a Precision plot. b Success plot

As Table 1 and Fig. 11 show, the SR of the proposed algorithm is slightly higher than that of the SSD object detection model, and its PR is nearly 10% higher. This is because the algorithm remedies the shortcomings of the SSD model with the strategy of reducing missed detections in scenes involving motion blur and illumination variation, which makes it possible to track frames that the SSD model cannot detect. Compared with Struck, CSK, and KCF, the PR and SR of the proposed algorithm are much higher. This is due to the validity detection mechanism of tracking results, which can detect tracking failure in time and immediately restart the SSD model to re-detect the object. For the processing-speed tests, KCF, Struck, CSK, and Staple are run directly on a PC.
The experimental environment is the Windows operating system with an Intel® Core i7-7700K 4.2-GHz processor and 16 GB of memory; the experimental software is MATLAB R2018b. The SSD detection model and the proposed algorithm are tested on a ZCU104 board [32]. Taking advantage of the fast running speed of KCF, the proposed algorithm achieves 36.2 frames per second on the ZCU104 board, which meets the real-time requirements.

Experiment after adding training dataset

As can be seen from Table 1 and Fig. 11, the PR and SR of the proposed algorithm are not higher than those of Staple. This is because the SSD model is needed to detect the object position when the current frame is the initial frame or when tracking fails. However, the training datasets VOC2007 and VOC2012 of the SSD model differ considerably from the objects in the test video sequences: the "person" category in the test dataset consists mostly of images from a road-monitoring perspective, while the "person" images in the training dataset are all from a horizontal perspective. In addition, the video sequence of the "bottle" category in the test dataset is very long, whereas such images in the training dataset are few. The significant difference between the training and test datasets leads to the low accuracy of the SSD model (PR and SR of 74.51% and 75.12%, respectively), which reduces the accuracy of the proposed algorithm. To verify this point, another experiment was conducted: about 200 bottle images and 200 pedestrian images from the road-monitoring perspective were added to the training dataset. It is important to note that none of the images added to the training dataset were included in the test dataset. After retraining the SSD model, the results of the contrast experiment are shown in Table 2.
It can be seen that with the increase in training images, the accuracy of the SSD model improves slightly, while the accuracy of the proposed algorithm improves greatly. The PR of the algorithm is now approximately the same as that of Staple, while its SR is about 10% higher.

Table 2 Test results after adding training data

Experiments with specific attributes

Object tracking tests were carried out on video sequences with the attributes of occlusion, motion blur, and illumination variation. The accuracy and robustness of the algorithms under these challenging conditions are compared using the SR; the results are shown in Table 3.

Table 3 Tracking result on videos with sequence attributes

As Table 3 shows, for occlusion, the SR of the proposed algorithm and of the SSD model are higher than those of the other tracking algorithms. The other tracking algorithms not only lose the occluded object easily, but also often fail to relocate it and continue tracking. The algorithm in this paper, however, can relocate the object through the SSD model after a tracking failure and thus continue tracking it. For motion blur, the SR of the proposed algorithm and of the Staple algorithm are higher than those of the other algorithms; this is attributed to the introduction of color features into the strategy of reducing missed detections. For illumination variation, the SSD model exhibits high accuracy, because the brightness and contrast of the training data were randomly varied during training to improve generalization. In dark illumination-variation scenes, the SR is greatly improved over the SSD model, again owing to the color features introduced in the strategy of reducing missed detections.

This paper has presented a real-time object detection-tracking algorithm for embedded platforms.
This algorithm combines the SSD object detection model from deep convolutional networks with the KCF tracking algorithm. The SSD model is accelerated using a field-programmable gate array, which enables the algorithm to run in real time on embedded platforms. To address the traditional KCF algorithm's inability to track robustly over long periods, and specifically the model contamination that follows tracking failures in complex scenes, the validity detection mechanism of tracking results was proposed. To address the SSD model's high missed rate under motion blur and illumination variation, a strategy to reduce missed detections was proposed by introducing color features. The experimental results show that the overall PR of the proposed algorithm reaches 91.19%, the SR reaches 84.79%, and the frame rate reaches 36.2 frames per second. For the occlusion, motion blur, and illumination-variation attributes, the proposed algorithm has higher accuracy and robustness than the other tracking algorithms. Future research should focus on improving tracking performance by tracking with deep features and on improving the hardware implementation. We used publicly available data to illustrate and test our methods. The datasets used were acquired from [5] and from [29–31], available at https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar, https://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar, and http://shuoyang1213.me/WIDERFACE/, respectively. L. Hongmei, H. Lin, Z. Ruiqiang, L. Lei, W. Diangang, L. Jiazhou, in 2020 International Conference on Computer Engineering and Intelligent Control (ICCEIC). Object tracking in video sequence based on Kalman filter (2020), pp. 106–110. https://doi.org/10.1109/ICCEIC51584.2020.00029. Y. Wang, W. Shi, S. Wu, Robust UAV-based tracking using hybrid classifiers. Mach. Vis. Appl. 30(1), 125–137 (2019).
https://doi.org/10.1007/s00138-018-0981-4. R. Iguernaissi, D. Merad, K. Aziz, P. Drap, People tracking in multi-camera systems: a review. Multimed. Tools Appl. 78(8), 10773–10793 (2019). https://doi.org/10.1007/s11042-018-6638-5. H. Zhang, Z. Zhang, L. Zhang, Y. Yang, Q. Kang, D. Sun, Object tracking for a smart city using IoT and edge computing. Sensors 19(9), 1987 (2019). https://doi.org/10.3390/s19091987. Y. Wu, J. Lim, M.-H. Yang, Object tracking benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1834–1848 (2015). https://doi.org/10.1109/TPAMI.2014.2388226. J. F. Henriques, R. Caseiro, P. Martins, J. Batista, in Computer Vision – ECCV 2012. Lecture Notes in Computer Science, vol. 7575, ed. by A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid. Exploiting the circulant structure of tracking-by-detection with kernels (Springer, 2012), pp. 702–715. J. F. Henriques, R. Caseiro, P. Martins, J. Batista, High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 583–596 (2015). https://doi.org/10.1109/TPAMI.2014.2345390. S. Hare, S. Golodetz, A. Saffari, V. Vineet, M.-M. Cheng, S. L. Hicks, P. H. S. Torr, Struck: structured output tracking with kernels. IEEE Trans. Pattern Anal. Mach. Intell. 38(10), 2096–2109 (2016). https://doi.org/10.1109/TPAMI.2015.2509974. L. Bertinetto, J. Valmadre, S. Golodetz, O. Miksik, P. H. S. Torr, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Staple: complementary learners for real-time tracking (IEEE, 2016). https://doi.org/10.1109/CVPR.2016.156. Y. Chen, X. Yang, B. Zhong, S. Pan, D. Chen, H. Zhang, CNNTracker: online discriminative object tracking via deep convolutional neural network. Appl. Soft Comput. 77, 1088–1098 (2016). https://doi.org/10.1016/j.asoc.2015.06.048. K. Zhang, Y. Guo, X. Wang, J. Yuan, Q. Ding, Multiple feature reweight DenseNet for image classification. IEEE Access 6, 9872–9880 (2019).
https://doi.org/10.1109/ACCESS.2018.2890127. T. Kong, A. Yao, Y. Chen, F. Sun, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). HyperNet: towards accurate region proposal generation and joint object detection (IEEE, 2016), pp. 845–853. https://doi.org/10.1109/CVPR.2016.98. T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, S. Belongie, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Feature pyramid networks for object detection (IEEE, Honolulu, 2017), pp. 1–8. https://doi.org/10.1109/CVPR.2017.106. N. Bodla, B. Singh, R. Chellappa, L. S. Davis, in Proceedings of the IEEE International Conference on Computer Vision (ICCV). Soft-NMS – improving object detection with one line of code (IEEE, Venice, 2017), pp. 5562–5570. https://doi.org/10.1109/ICCV.2017.593. S. M. Marvasti-Zadeh, L. Cheng, H. Ghanei-Yakhdan, S. Kasaei, Deep learning for visual tracking: a comprehensive survey. IEEE Trans. Intell. Transp. Syst. https://doi.org/10.1109/TITS.2020.3046478. H. Li, Y. Li, F. Porikli, DeepTrack: learning discriminative feature representations online for robust visual tracking. IEEE Trans. Image Process. 25(4), 1834–1848 (2016). https://doi.org/10.1109/TIP.2015.2510583. L. Wang, W. Ouyang, X. Wang, H. Lu, in Proceedings of the IEEE International Conference on Computer Vision (ICCV). Visual tracking with fully convolutional networks (IEEE, Santiago, 2015), pp. 3119–3127. https://doi.org/10.1109/ICCV.2015.357. Y. Gao, Z. Hu, H. W. F. Yeung, Y. Y. Chung, X. Tian, L. Lin, Unifying temporal context and multi-feature with update-pacing framework for visual tracking. IEEE Trans. Circuits Syst. Video Technol. 30, 1078–1091 (2020). https://doi.org/10.1109/TCSVT.2019.2902883. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A. C. Berg, in Computer Vision – ECCV 2016. Lecture Notes in Computer Science, vol. 9905, ed.
by B. Leibe, J. Matas, N. Sebe, and M. Welling. SSD: single shot multibox detector (Springer, Cham, 2016), pp. 21–37. https://doi.org/10.1007/978-3-319-46448-0_2. M. Jiang, J. Shen, J. Kong, H. Huo, Regularisation learning of correlation filters for robust visual tracking. IET Image Process. 12(9), 1586–1594 (2018). https://doi.org/10.1049/iet-ipr.2017.1043. Y. Zhen, D.-Y. Yeung, Active hashing and its application to image and text retrieval. Data Min. Knowl. Disc. 26(2), 255–274 (2013). https://doi.org/10.1007/s10618-012-0249-y. Z. Zhan, X. Yang, Y. Li, C. Pang, Video deblurring via motion compensation and adaptive information fusion. Neurocomputing 341, 88–98 (2019). N. Dalal, B. Triggs, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1. Histograms of oriented gradients for human detection (IEEE, San Diego, 2005), pp. 886–893. Xilinx, Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit Quick Start Guide. Available: https://www.xilinx.com/support/documentation/boards_and_kits/zcu104/xtp482-zcu104-quickstart.pdf. Accessed May 2018. Xilinx, SDSoC environment user guide. Available: https://www.xilinx.com/support/documentation/sw_manuals/xilinx2019_1/ug1027-sdsoc-user-guide.pdf. Accessed May 2019. Xilinx, DNNDK user guide. Available: https://www.xilinx.com/support/documentation/user_guides/ug1327-dnndk-user-guide.pdf. Accessed Apr 2019. Xilinx, Xilinx AI SDK user guide. Available: https://www.xilinx.com/support/documentation/user_guides/ug1354-xilinx-ai-sdk.pdf. Accessed Apr 2019. Xilinx, Xilinx AI SDK programming guide. Available: https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_1/ug1355-xilinx-ai-sdk-programming-guide.pdf. Accessed Apr 2019. M. Everingham, G. L. Van, C. K. I. Williams, J. Winn, A. Zisserman, The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 88(2), 303–338 (2010). https://doi.org/10.1007/s11263-009-0275-4. M. Everingham, S. M. A. Eslami, G. L. Van, C. K. I. Williams, J. Winn, A.
Zisserman, The PASCAL Visual Object Classes Challenge: a retrospective. Int. J. Comput. Vis. 111(1), 98–136 (2015). https://doi.org/10.1007/s11263-014-0733-5. S. Yang, P. Luo, C. C. Loy, X. Tang, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). WIDER FACE: a face detection benchmark (IEEE, 2016). https://doi.org/10.1109/CVPR.2016.596. Xilinx, ZCU104 board user guide. Available: https://www.xilinx.com/support/documentation/boards_and_kits/zcu104/ug1267-zcu104-eval-bd.pdf. Accessed Oct 2018. The authors would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. The work is funded by the National Key Research and Development Program of China (No. 2018AAA0102700). College of Information and Communication Engineering, Harbin Engineering University, Harbin, China Qingbo Ji, Chong Dai, Changbo Hou & Xun Li Qingbo Ji Chong Dai Changbo Hou Xun Li All authors took part in the discussion of the work described in this paper. QJ proposed the framework of this work. CD carried out all the experiments and drafted the manuscript. CH offered useful suggestions and helped to revise the manuscript. XL analyzed the data. All authors read and approved the final manuscript. 1. Qingbo Ji received her B.S., M.S., and Ph.D. degrees from the College of Information and Communication Engineering, Harbin Engineering University, Heilongjiang, China, in 1998, 2004, and 2008, respectively. She is currently an associate professor with the College of Information and Communication Engineering, Harbin Engineering University. Her major research interests include image processing, image recognition, deep learning, and embedded systems. 2. Chong Dai received his B.S. degree from the College of Electronic and Information Engineering, Heilongjiang University of Science and Technology, Heilongjiang, China, in 2017.
He is now a postgraduate student in the College of Information and Communication Engineering at Harbin Engineering University. 3. Changbo Hou received his B.S. and M.S. degrees from the College of Information and Communication Engineering, Harbin Engineering University, Heilongjiang, China, in 2008 and 2011, respectively. He is currently a lecturer with the College of Information and Communication Engineering, Harbin Engineering University, where he is also a doctoral candidate in the Key Laboratory of In-fiber Integrated Optics, Ministry of Education of China. His research interests include wideband signal processing, optical sensors, image processing, and deep learning. 4. Xun Li received his B.S. degree in communication engineering from Northeastern University, Qinhuangdao, China, in 2017. He is now a postgraduate student in the College of Information and Communication Engineering at Harbin Engineering University. Correspondence to Changbo Hou. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Ji, Q., Dai, C., Hou, C. et al. Real-time embedded object detection and tracking system in Zynq SoC. J Image Video Proc. 2021, 21 (2021).
https://doi.org/10.1186/s13640-021-00561-7
Keywords: Detection and tracking; Validity detection mechanism; Real-time
Part of the collection: Image and Video Processing in Embedded Systems for Smart Surveillance Applications
Combinatorial parameters on bargraphs of permutations
Transactions on Combinatorics, Volume 7, Issue 2, Summer 2018, Pages 1-16
Article type: Research Paper. DOI: 10.22108/toc.2017.102359.1483
Toufik Mansour 1; Mark Shattuck 2
1 Department of Mathematics, University of Tennessee, Knoxville, TN, USA
2 Mathematics Department, University of Tennessee, Knoxville, TN, USA
In this paper, we consider statistics on permutations of length $n$ represented geometrically as bargraphs having the same number of horizontal steps. More precisely, we find the joint distribution of the descent and up step statistics on the bargraph representations, thereby obtaining a new refined count of permutations of a given length. To do so, we consider the distribution of the parameters on permutations of a more general multiset of which $\mathcal{S}_n$ is a subset. In addition to finding an explicit formula for the joint distribution on this multiset, we provide counts for the total number of descents and up steps of all its members, supplying both algebraic and combinatorial proofs.
Finally, we derive explicit expressions for the sign balance of these statistics, from which the comparable results on permutations follow as special cases.
Keywords: combinatorial statistic; $q$-generalization; bargraph; permutations
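To make the two statistics in the abstract concrete, here is a small illustrative sketch (ours, not the authors' code), assuming the usual bargraph encoding in which column $i$ has height equal to the $i$-th letter of the permutation, so the lattice path's up steps are the first column's height plus every positive rise between consecutive columns:

```python
from itertools import permutations
from collections import Counter

def descents(perm):
    """Number of descents: positions i with perm[i] > perm[i+1]."""
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def up_steps(perm):
    """Up steps of the bargraph whose column heights are the letters of perm:
    the first column contributes its full height, and each later column
    contributes its rise over the previous column (when positive)."""
    total = perm[0]
    for prev, cur in zip(perm, perm[1:]):
        if cur > prev:
            total += cur - prev
    return total

def joint_distribution(n):
    """Joint distribution of (descents, up steps) over all permutations of [n]."""
    return Counter((descents(p), up_steps(p)) for p in permutations(range(1, n + 1)))
```

For example, the permutation (2, 1, 3) has one descent and a bargraph with 4 up steps (2 for the first column, then a rise of 2 from height 1 to height 3).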
BMC Infectious Diseases Survival outcomes for first-line antiretroviral therapy in India's ART program Rakhi Dandona1, Bharat B. Rewari2,3, G. Anil Kumar1, Sukarma Tanwar1,2,3, S. G. Prem Kumar1, Venkata S. Vishnumolakala1, Herbert C. Duber4, Emmanuela Gakidou4 & Lalit Dandona1,4 BMC Infectious Diseases volume 16, Article number: 555 (2016) Little is known about survival outcomes of HIV patients on first-line antiretroviral therapy (ART) on a large scale in India, or facility-level factors that influence patient survival to guide further improvements in the ART program in India. We examined factors at the facility level in addition to patient factors that influence survival of adult HIV patients on ART in the publicly-funded ART program in a high- and a low-HIV prevalence state. Retrospective chart review in public sector ART facilities in the combined states of Andhra Pradesh and Telangana (APT) before these were split in 2014 and in Rajasthan (RAJ), the high- and low-HIV prevalence states, respectively. Records of adults initiating ART between 2007-12 and 2008-13 in APT and RAJ, respectively, were reviewed and facility-level information collected at all ART centres and a sample of link ART centres. Survival probability was estimated using the Kaplan-Meier method, and determinants of mortality explored with facility and patient-level factors using a Cox proportional hazard model. Based on data from 6581 patients, the survival probability of ART at 60 months was 76.3 % (95 % CI 73.0–79.2) in APT and 78.3 % (74.4–81.7) in RAJ. The facilities with cumulative ART patient load above the state average had lower mortality in APT (Hazard ratio [HR] 0.74, 0.57–0.95) but higher in RAJ (HR 1.37, 1.01–1.87). Facilities with a higher proportion of lost to follow-up patients in APT had higher mortality (HR 1.47, 1.06–2.05), as did those with a higher ART to pre-ART patient ratio in RAJ (HR 1.62, 1.14–2.29).
In both states, there was a higher hazard for mortality in patients with a CD4 count of 100 cells/mm3 or less at ART initiation, in males, and in patients with TB co-infection. These data from the majority of facilities in a high- and a low-HIV burden state of India over 5 years reveal reasonable and similar survival outcomes in the two states. The facilities with higher ART load in the longer established ART program in APT had better survival, but facilities with a higher ART load and a higher ratio of ART to pre-ART patients in the less experienced ART program in RAJ had poorer survival. These findings have important implications for India's ART program planning as it expands further. Over the last decade, the HIV program in India has been scaled up substantially to reduce mortality and morbidity from HIV/AIDS and to improve the quality of life of those infected by HIV. The rapid scale-up of antiretroviral treatment (ART) services in recent years has improved access to ART with provision of free ART under the National AIDS Control Program (NACP-IV) [1]. The HIV program in India aims for evidence-based planning for further ART roll-out and performance monitoring [2]. However, there is a paucity of literature on survival outcomes of HIV patients on ART on a large scale in India. The available reports are based on small samples of HIV patients, data limited to a single treatment centre, survival outcomes with TB as comorbidity, or have explored only the individual-level predictors for survival on ART [3–7]. At the time of designing the study in 2012-13, our aim was to document survival outcomes and analyse the individual-level and facility-level predictors of survival for HIV patients on first-line ART in NACP-IV in two Indian states - Andhra Pradesh and Rajasthan. Andhra Pradesh in south India had a population of about 85 million at that time, and the highest number of persons with HIV of any Indian state, with a long-standing ART program.
On the other hand, Rajasthan in north India had a population of about 70 million, and a relatively lower HIV burden with a more recent ART program [8]. Heterosexual mode of transmission is the major HIV infection route in both states [8]. After data collection was completed in 2013, the state of Telangana was carved out of Andhra Pradesh state in June 2014. As data were collected prior to this split, we report findings for the undivided Andhra Pradesh that includes the current Andhra Pradesh and Telangana (APT) and for Rajasthan (RAJ). Ethics approval for this study was obtained from Ethics Committees of the Public Health Foundation of India, New Delhi and the University of Washington, Seattle, USA. The study was approved by the Indian Council for Medical Research, Health Ministry Steering Committee, the Government of India and by the National AIDS Control Organization of India. Sample of ART facilities In India, ART services are provided by ART centres and Link ART centres (LAC). An ART centre provides pre-ART and ART services to HIV-infected people, and a LAC is an extension of an ART centre, established to minimize the travel and related costs for ART patients to receive basic ART services [8]. Patients on ART are registered with an ART centre to start treatment and once stable are then transferred to a LAC to receive medications on a routine basis. In this study, for APT all 45 ART centres and one LAC for each ART centre functional in 2012 were randomly sampled. A total of 41 LACs were subsequently sampled as four ART centres in the state capital Hyderabad had no LAC. In RAJ all 16 ART centres and all 27 link ART centres functional in 2012 were sampled, of which 10 LACs were newly established, and hence data were not available from these for this study. Patient clinical record data The ART patient's clinical record (known as white card) is maintained at the facilities that provide ART services.
It is used to document demographic information of the patient; risk factors; and various treatment and clinical details, and follow-up details for each visit. We aimed to sample 75 and 35 adult patient records at each ART centre and LAC, respectively, in APT, for the last five financial years at the time of starting data collection (April 2007 to March 2012). For RAJ, the aimed sample was 180 and 30 adult patient records at each ART centre and LAC, respectively, during the last 5 financial years at the time of starting data collection (April 2008 to March 2013). With this approach, at least 3000 adult patients on ART were finally expected to be part of the study in each state. Only records of patients who were initiated on first-line ART between 6 and 60 months prior to the date of data collection were considered eligible. We used the ART enrolment register, which includes documentation of the ART initiation date for each patient, to arrive at the total number of eligible adult patient records at each facility. We then used a pre-defined sampling strategy to sample the eligible patient records at each ART facility - the total number of adult patients who had initiated ART within the inclusion time period was divided by the required number of adult patient records to be sampled to arrive at the sampling interval. A random number was picked within this sampling interval to select the first record, and then the records were sampled systematically using the sampling interval until the desired sample was achieved. Data collection procedure Data were collected electronically using DatStat Illume Survey Manager 5.1 (DatStat Inc., Seattle, WA). The program used for capturing data was a replica of the white card. Data extractors trained in study procedures recorded data "as is" from the white cards to reflect the actual data in each white card.
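The systematic sampling procedure described above (eligible count divided by the required sample size to give the interval, a random start within the first interval, then every interval-th record) can be sketched as follows; this is an illustrative reconstruction, with function and variable names of our own choosing rather than from the study protocol:

```python
import random

def systematic_sample(n_eligible, n_required, seed=None):
    """Pick record positions (1-based) from an enrolment register by
    systematic sampling: a fixed interval with a random starting point."""
    rng = random.Random(seed)
    interval = max(1, n_eligible // n_required)  # sampling interval
    start = rng.randint(1, interval)             # random first record
    return list(range(start, n_eligible + 1, interval))[:n_required]

# e.g. an ART centre with 600 eligible records and a target sample of 75:
records = systematic_sample(600, 75, seed=1)
```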
The current status of the patient (alive, dead, lost to follow-up, or transferred to another facility) was recorded from the ART enrolment register. Ten percent or more of data collected were checked daily by the field team supervisor to ensure the quality of data collected. Formal consent of the senior-most person responsible at each ART facility was obtained to collect these data. Data were collected in APT from February to May 2013 and in RAJ during June-July 2013. Survival probability The probability of survival of HIV patients on ART at 12 and 60 months was estimated using the Kaplan-Meier product limit estimation method. The duration of survival was calculated from the month of ART initiation to the month of death or to the last visit for patients who were alive. Censoring based on the last date of visit to the ART facility was done for patients who were lost to follow-up (LFU) or transferred out of the facility. We report overall survival probability of HIV patients on ART at 12 and 60 months for the two sexes and by baseline CD4 cell count. As mortality at the facility level is mostly not captured among HIV patients who were LFU, we adjusted the survival with mortality among LFU patients as shown in the equation below that has been previously used in the literature [9]: $$ S(t) = 1 - \left[ ML(t) \times L(t) + MNL(t) \right] $$ where S(t) = adjusted ART survival in last 5 years; L(t) = Proportion of LFU patients in 5 years; ML(t) = Mortality estimated in LFU patients; and MNL(t) = Mortality observed in patients in care (1 − predicted Kaplan-Meier survival estimate).
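The Kaplan-Meier product-limit estimate used above can be sketched in plain Python (the study's own analysis was done in statistical software, not this code); this generic implementation treats LFU and transferred-out patients as censored:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of survival.

    times  : follow-up in months, from ART initiation to death or last visit
    events : 1 = died, 0 = censored (LFU or transferred out)
    Returns a list of (time, S(t)) pairs at each distinct death time.
    """
    pairs = sorted(zip(times, events))
    n = len(pairs)
    s, curve = 1.0, []
    at_risk = n
    i = 0
    while i < n:
        t = pairs[i][0]
        # count deaths and all subjects leaving the risk set at time t
        j, deaths = i, 0
        while j < n and pairs[j][0] == t:
            deaths += pairs[j][1]
            j += 1
        if deaths:
            s *= 1 - deaths / at_risk   # product-limit update
            curve.append((t, s))
        at_risk -= j - i
        i = j
    return curve
```

For a toy cohort with deaths at months 1, 2 and 3 and censoring at months 2 and 4, the curve steps through 0.8, 0.6 and 0.3.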
As an inverse relation between mortality among LFU patients and the rate of LFU in the ART program has been previously reported from an analysis of several HIV programs in Africa, we used the following equation from this analysis to estimate mortality among LFU patients in each facility in APT and RAJ [9]: $$ ML_i = \frac{\exp(a + b r_i)}{1 + \exp(a + b r_i)} $$ where ML_i = estimated mortality among LFU patients in facility i, r_i = proportion of LFU patients in the facility, a = 0.57287 and b = −4.04409. Finally, we calculated the weighted average of ML for each state by using the total patients on ART in each facility in each state. The weighted average of estimated mortality among LFU patients was 0.43 for APT and 0.61 for RAJ at 5 years. We considered 20 % lower and higher levels than the point estimates for ML(t) for sensitivity analysis for the probabilities of ART survival, which was performed using Monte Carlo simulations by doing 100,000 iterations with the @Risk software (Palisade UK Europe Ltd). We used random values between these plausible ranges to obtain the 95 % uncertainty interval (UI) for the probabilities of survival estimates. We report survival probability at 60 months that is adjusted for LFU mortality. Predictors of mortality A Cox proportional hazard model was used to determine the predictors of mortality with select patient demography and clinical indicators (ART regimen, CD4 count at start of ART, co-existing tuberculosis, history of alcohol use). We also included facility-related variables - cumulative ART patient load at ART centre, ratio of cumulative ART patients to pre-ART patients, and percent of cumulative LFU patients. The cumulative data over the 5-year study period was used to calculate the average value for each of these variables per state.
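A minimal sketch (ours, not the authors' @Risk code) combining the facility-level logistic estimate of LFU mortality with the survival adjustment; the adjustment line is written so that it reproduces the adjusted 60-month figures reported in the Results (70.6 % in APT, 76.1 % in RAJ):

```python
import math
import random

A, B = 0.57287, -4.04409  # coefficients from the Egger et al. nomogram [9]

def lfu_mortality(r):
    """ML_i = exp(a + b*r_i) / (1 + exp(a + b*r_i)): estimated mortality
    among LFU patients in a facility with LFU proportion r."""
    z = A + B * r
    return math.exp(z) / (1 + math.exp(z))

def weighted_ml(facilities):
    """State-level ML: facility estimates weighted by patients on ART.
    `facilities` is a list of (patients_on_art, lfu_proportion) pairs."""
    total = sum(n for n, _ in facilities)
    return sum(n * lfu_mortality(r) for n, r in facilities) / total

def adjusted_survival(ml, l, mnl):
    """Adjusted survival, with MNL = 1 - unadjusted Kaplan-Meier survival."""
    return 1 - (ml * l + mnl)

def uncertainty_interval(ml, l, mnl, iters=10000, seed=7):
    """95% UI from Monte Carlo draws of ML uniform within +/-20% of its
    point estimate (the plausible range used for the sensitivity analysis)."""
    rng = random.Random(seed)
    draws = sorted(adjusted_survival(ml * rng.uniform(0.8, 1.2), l, mnl)
                   for _ in range(iters))
    return draws[int(0.025 * iters)], draws[int(0.975 * iters)]
```

With the state-level inputs from the Results (ML = 0.43, LFU proportion = 0.132, unadjusted survival = 0.763 for APT; ML = 0.61, 0.036 and 0.783 for RAJ), `adjusted_survival` returns 0.706 and 0.761, respectively.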
Facilities were categorised as having below/equal or above the average value for cumulative ART patient load and ratio of cumulative ART patients to pre-ART patients; and were categorised into three equal groups based on the percent of cumulative LFU patients for the analysis. For this analysis, the ART patients undergoing treatment at LAC were considered together with the parent ART centre which was their primary registration for ART. The 95 % confidence intervals (CI) are reported where relevant. The median CD4 count at ART initiation is presented for the alive, dead and LFU patients separately, and the range is also reported. A log rank test was used to examine the test of significance for survival probability. Data from 82 and 41 facilities were available for analysis in APT and RAJ, respectively. Data were analysed using the statistical software STATA (version 13.1, StataCorp, USA). A total of 3340 adult patient records were extracted for analysis in APT state of which 3280 (98.2 %) records had information available on the current status of the patient at the time of data collection. Among these, 2130 (64.9 %) were alive and on ART, 437 (13.3 %) had died, 432 (13.2 %) were LFU, and 281 (8.6 %) were transferred out to another ART facility during the study period. In RAJ state, 3241 adult patient records were extracted for analysis of which 3198 (98.7 %) had information available on the current status of the patient at the time of data collection. Among these, 2554 (79.9 %) were alive and on ART, 393 (12.3 %) were dead, 115 (3.6 %) were LFU, and 136 (4.3 %) were transferred out to another ART facility. The demographic and clinical characteristics of adult patients on ART are summarized in Table 1. The median age of patients on ART was 35 years in both states (Interquartile range, IQR 29–40 years). The median CD4 cell count at ART initiation was 172 cells/mm3 (IQR 104–236) in APT and 159 cells/mm3 (IQR 86–240) in RAJ.
Significantly more patients had a CD4 count of ≤100 cells/mm3 at ART initiation in RAJ (29.4 %) than in APT (23.4 %; p < 0.001). The patients who were alive and on treatment showed an increasing trend in median CD4 cell counts at ART initiation over the years in both the states (Fig. 1). The overall baseline median CD4 cell count of deceased patients [126 cells/mm3 (IQR 66–194) in APT and 93 cells/mm3 (IQR 48–159) in RAJ] was comparatively lower than that of the patients who were alive and on treatment in both the states [184 cells/mm3 (IQR 115–245) in APT and 172 cells/mm3 (IQR 98–247) in RAJ]. Table 1 Demographic and clinical characteristics of patients on ART, and facility-related indicators in Andhra Pradesh and Telangana (2007-12) and in Rajasthan (2008-13) Yearly trends in baseline median CD4 cell counts for HIV patients on ART in Andhra Pradesh and Telangana (2007-12) and in Rajasthan (2008-13) based on current status of the patient Among the 437 and 393 patients who had died in APT and RAJ, respectively, 188 (43 %) in APT and 191 (48.6 %) patients in RAJ had died within 6 months of starting ART. The unadjusted mortality rate among patients on ART in APT and RAJ was 6.83 and 7.18 per 100 patient-years at 60 months, respectively. The median survival time was 22 months in APT and 18 months in RAJ. The estimated unadjusted survival probability on ART at 12 and 60 months was 91.2 % (95 % CI 90.1–92.1) and 76.3 % (95 % CI 73.0–79.2) in APT, respectively; and 90.6 % (95 % CI 89.4–91.6) and 78.3 % (95 % CI 74.4–81.7) in RAJ (Fig. 2). The probability of survival was higher among females than males in both states (log rank test, p < 0.001; Fig. 2), and was significantly lower for patients with CD4 count <101 cells/mm3 at ART initiation than those with CD4 count >250 cells/mm3 in both the states (log rank test, p < 0.001; Fig. 3).
After adjusting for the assumed higher mortality among LFU, the adjusted survival probability on ART at 60 months was 70.6 % (95 % UI 67.0–73.8) in APT and 76.1 % (95 % UI 72.2–79.6) in RAJ. Kaplan-Meier unadjusted survival curve for HIV patients on ART for males and females in Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13) Kaplan-Meier unadjusted survival curve for HIV infected patients on ART by CD4 count at ART initiation (cells/mm3) in Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13) Table 2 shows the results with the Cox proportional hazard model for the predictors of mortality among HIV patients on ART. The findings at the patient level for both the states were similar. Patients with CD4 count <101 cells/mm3 at ART initiation had a higher hazard for mortality than patients with CD4 count >250 cells/mm3 (HR 3.36, 95 % CI 2.29–4.95 in APT and HR 3.71, 95 % CI 2.47–5.58 in RAJ). Patients with a history of alcohol consumption had a significantly higher risk of dying than those who never consumed alcohol in APT (HR 1.57, 95 % CI 1.22–2.02) and in RAJ (HR 1.42, 95 % CI 1.09–1.86). Males as compared with females, and patients with TB co-infection had a higher hazard for death. The patients on a Zidovudine-based ART regimen had a lower hazard for mortality than those on the Stavudine-based ART regimen in both states. Table 2 Determinants of mortality among HIV patients on ART using Cox proportional hazard model for Andhra Pradesh and Telangana (2007-12) and Rajasthan (2008-13). CI denotes confidence interval At the facility level, facilities with a cumulative ART patient load above the average for the state facilities had lower mortality in APT (HR 0.74, 95 % CI 0.57–0.95) but had higher mortality in RAJ (HR 1.37, 95 % CI 1.01–1.87).
The facilities in APT with a proportion of LFU patients higher than the state average had significantly higher mortality (HR 1.47, 95 % CI 1.06–2.05); the trend in RAJ was similar but did not reach statistical significance. On the other hand, facilities in RAJ with a higher ART to pre-ART patient ratio had a significantly higher hazard for mortality (HR 1.62, 95 % CI 1.14–2.29). As public sector facilities provide ART to most patients in India, this sample of over 6500 adult patients on ART in two major states is fairly representative of a high and a low HIV burden state in India. This analysis of data covering 5 years reveals that the overall survival probability of HIV patients on ART at 60 months was reasonable at 76–78 %, and that the survival rates were similar in the high- and low-HIV burden states, with the former having a longer-standing publicly funded ART program in place. The survival rates in our data at 60 months are similar to those reported previously from three centres in southern India [3, 10]. Consistent with the published literature, a significant proportion of deaths occurred within the first 6 months of ART initiation [3–5, 7, 11]. Poor survival of males on ART as compared with females in our population has been documented previously from India and elsewhere [3, 6, 12–16]. Factors such as poor treatment-seeking behaviour and non-adherence to treatment, and increased risk of LFU have been reported previously as possible reasons for higher mortality among males on ART [17–20]. The median CD4 count at ART initiation was lower for males than females in both the states, and 58 % of LFU in APT and 65 % in RAJ were males in our study. This finding suggests that it would be useful for the HIV services to make males more aware of the benefits of timely initiation of ART for better survival outcome.
Both a low CD4 count at ART initiation and co-existing TB have been previously reported to be associated with poorer survival outcomes among Indian patients [3, 4, 17, 21–23]. The overall median CD4 count at ART initiation in this study had increased significantly over the 5 years in both APT (154 to 193) and RAJ (132 to 174). However, these data were not for those who had died. As a lower CD4 count is associated with delayed ART initiation and with higher attrition while on treatment [17, 22], the program could focus more on ensuring adherence and follow-up of the patients with lower CD4 count to further improve the survival outcomes. With regard to TB, NACP-IV has clearly identified HIV-TB coordination including cross-referral, detection and treatment as one of the objectives in the revised strategy that aims to further the integration between HIV and TB services [24], in particular to prevent LFU and to enable early initiation of ART [25–27]. Inclusion of data from a large number of facilities in this study allowed assessment of facility-level variables that influence survival on ART. These findings are relevant for program planning. The ART patient load was an important predictor of mortality in both states, albeit differently. In APT, facilities with a higher load had better survival outcomes possibly because of a longer-established ART program that has likely acquired more experience leading to better outcomes. However, in RAJ, facilities with a higher ART patient load had poorer survival outcomes, as did facilities with a higher ratio of ART to pre-ART patients. This higher patient load in the less experienced ART program in RAJ may be resulting in difficulty in handling patients, which indicates the need for strengthening facilities in RAJ with high or increasing ART load through monitoring of their human resources, supplies and infrastructure.
In addition, even though both states had more pre-ART than ART patients across the facilities, the average ART to pre-ART patient ratio was relatively higher in RAJ. The reasonable survival outcomes in the two states, which were not significantly different from each other without and with adjusting for mortality in the LFU patients, are encouraging for the national HIV program. Over the study period, the LFU proportion remained fairly consistent in APT, and was similar to that reported previously [17, 22]. Factors associated with poor patient retention have been documented for APT [17, 22], and more effective and robust tracking of LFU is needed to improve survival outcomes. The significantly lower proportion of LFU in RAJ was likely a result of a recent exercise carried out by the State AIDS Control Society to trace LFUs in order to bring them back to the treatment cycle. It is possible that some LFU patients may have initiated ART at another facility. However, it is not possible to track mobility of individual patients between the ART facilities in the program yet. To address this challenge, NACO is considering the use of SMART cards with biometric identification for each patient, which could facilitate not only tracking of patients but also potentially improve adherence and access to treatment [28]. Our study limitations include missing data, non-usable information on treatment adherence in the white cards, and survival status of transferred out and LFU patients as these were not readily available in the patient records. Despite these limitations, these large sample data collected from routine patient records are generalizable as all ART centres in both states were included. Data utilised for this study were obtained from paper forms/registers used in routine service conditions by the providers in the facilities, and thus are reflective of the ground reality.
In conclusion, these data have highlighted the benefits of investment in ART in India, which is associated with a reasonably good overall survival rate at 5 years, and have identified important determinants of survival on ART at the facility level in addition to patient-level factors that can inform improvement of the ART services in India. An important program-relevant message from these findings is that ART survival could potentially be improved further if facilities with higher load get specific attention in the initial phase in Indian states with a more recent ART program. Abbreviations: AIDS: Acquired immune deficiency syndrome; APT: Andhra Pradesh and Telangana; HIV: Human immunodeficiency virus; HR: Hazard ratio; LAC: Link ART centre; LFU: Lost to follow-up; NACP: National AIDS Control Programme; NACO: National AIDS Control Organisation; UI: Uncertainty interval; RAJ: Rajasthan; TB: Tuberculosis. NACP-IV components. [http://www.naco.gov.in/nacp-iv-components]. Accessed 5 Oct 2016. NACP-IV Programme Priorities and Thrust Areas. [http://www.naco.gov.in/programme-priorities-and-thrust-areas-0]. Accessed 5 Oct 2016. Allam RR, Murhekar MV, Bhatnagar T, Uthappa CK, Chava N, Rewari BB, Venkatesh S, Mehendale S. Survival probability and predictors of mortality and retention in care among patients enrolled for first-line antiretroviral therapy, Andhra Pradesh, India, 2008-2011. Trans R Soc Trop Med Hyg. 2014;108(4):198–205. Bachani D, Garg R, Rewari BB, Hegg L, Rajasekaran S, Deshpande A, Emmanuel KV, Chan P, Rao KS. Two-year treatment outcomes of patients enrolled in India's national first-line antiretroviral therapy programme. Natl Med J India. 2010;23(1):7–12. Ghate M, Deshpande S, Tripathy S, Godbole S, Nene M, Thakar M, Risbud A, Bollinger R, Mehendale S. Mortality in HIV infected individuals in Pune, India. Indian J Med Res. 2011;133:414–20. Rajeev A, Sharma A. Mortality and morbidity patterns among HIV patients with prognostic markers in a tertiary care hospital in southern India. Australas Med J. 2011;4(5):273–6.
Jeppe Bundsgaard (ORCID: orcid.org/0000-0003-0102-0321)

International large-scale assessments like the International Computer and Information Literacy Study (ICILS) (Fraillon et al. in International Association for the Evaluation of Educational Achievement (IEA), 2015) provide important empirically based knowledge, through their proficiency scales, of what characterizes tasks at different difficulty levels and what that says about students at different ability levels. In international comparisons, one of the threats to validity is country differential item functioning (DIF), also called item-by-country interaction. DIF is a measure of how much harder or easier an item is for a respondent of a given group as compared to respondents from other groups of equal ability. If students from one country find a specific item much harder or easier than students from other countries, it can impair the comparison of countries. Therefore, great efforts are directed towards analyzing for DIF and removing or changing items that show DIF. From another angle, however, this phenomenon can be seen not only as a threat to validity, but also as an insight into what distinguishes students from different countries, and possibly their education, on a content level, providing even more pedagogically useful information. Therefore, in this paper, the data from ICILS 2013 is re-analyzed to address the research question: Which kinds of tasks do Danish, Norwegian, and German students find difficult and/or easy in comparison with students of equal ability from other countries participating in ICILS 2013? The analyses show that Norwegian and Danish students find items related to computer literacy easier than their peers from other countries. On the other hand, Danish and, to a certain degree, Norwegian students find items related to information literacy more difficult.
In contrast, German students do not find computer literacy easier, but they do seem to be comparably better at designing and laying out posters, web pages, etc. This paper shows that essential results can be identified by comparing the distribution of difficulties of items in international large-scale assessments. This is a more constructive approach to the challenge of DIF, but it does not eliminate the serious threat to the validity of the comparison of countries.

International large-scale assessments like the Programme for International Student Assessment (PISA) and the International Association for the Evaluation of Educational Achievement (IEA) studies Progress in International Reading Literacy Study (PIRLS) and International Computer and Information Literacy Study (ICILS) are best known for the so-called league tables, which provide information about the relative abilities of students across countries. But for teachers, teacher educators, and developers of teaching material, they can provide much more important empirically based knowledge of what characterizes tasks at different difficulty levels, and what that says about students at different ability levels: What can they be expected to do easily, what is their present zone of proximal development, and which tasks are they not yet able to perform? This knowledge is summed up in so-called described proficiency scales, which are developed on the basis of analyses of items of similar difficulty and detailed studies of tasks at a given difficulty interval (Fraillon et al. 2015; OECD 2014). When constructing a measure, the constructor needs to ensure that it measures in the same way for the different persons being measured. This is called measurement invariance. It means that the result of a test should not depend on anything but the students' proficiency in the area the test is intended to measure.
It should not matter what background the student comes from, or which specific items are used to test that particular student. In international comparisons, a number of factors can threaten measurement invariance. Typically, in order to cover a broad sample of the construct, individual students receive only a subset of the items. If these items are not representative of the construct, the measure could be biased. By rotating the booklets or modules, test designers are able to minimize the potential consequences of this problem, but still the problem could persist and be difficult to identify if the total set of items did not cover the construct. One of the serious threats to measurement invariance is country differential item functioning (DIF), also called item-by-country interaction. DIF is a measure of how much harder or easier an item is for a respondent of a given group as compared to respondents from other groups of equal ability (Holland and Wainer 1993). If students from one country find a specific item much harder or easier than students from other countries, it can impair the comparison of countries. Therefore, in international large-scale assessments great efforts are directed towards analyzing for DIF and removing or changing items that show DIF (e.g. Fraillon et al. 2015, p. 166ff.). Nonetheless, DIF seems to be unavoidable in large-scale assessments like PISA and ICILS, and this has drawn heavy criticism, especially directed towards PISA (Kreiner and Christensen 2014). But from another angle, this phenomenon can be seen not only as a threat to validity, but also as an insight into what distinguishes students from different countries, and possibly their education, on a content level. In this paper, the data from ICILS 2013 (Fraillon et al.
2014) is re-analyzed to get a deeper understanding of what students from three North European countries, Denmark, Norway, and Germany, find difficult or easy as opposed to students from other countries.Footnote 1 Thus, the research questions are as follows:

Research question 1: Can challenging content areas be identified by grouping items with differential item functioning?

Research question 2: Which kinds of tasks do Danish, Norwegian, and German students find difficult and/or easy in comparison to students of equal ability from other countries participating in ICILS 2013?

International computer and information literacy study

ICILS measures computer and information literacy (CIL) according to the following definition: "an individual's ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in society" (Fraillon et al. 2013, p. 17). ICILS divides CIL into two strands: (1) collecting and managing information, and (2) producing and exchanging information, each consisting of 3–4 aspects: 1.1 Knowing about and understanding computer use, 1.2 Accessing and evaluating information, 1.3 Managing information, 2.1 Transforming information, 2.2 Creating information, 2.3 Sharing information, and 2.4 Using information safely and securely (Fraillon et al. 2013, p. 18). The construct is measured using an innovative computer-based test made up of four modules, each consisting of an authentic storyline where students are asked, for example, to help organize an after-school activity. The item types range from multiple choice and short text answers to the production of web pages and posters using interactive software. The data is analyzed using a uni-dimensional Rasch model.
A two-dimensional model, relating to the two strands mentioned before, was also tested, but the two dimensions showed a very high correlation (0.96), and it was therefore decided to base the analysis on the simpler Rasch model (Fraillon et al. 2014, p. 73).

Differential item functioning in large-scale assessments

The concept of DIF was developed as an alternative to item bias to avoid an implicit (negative) evaluation of the consequences of an item functioning differently for a group of test takers (Angoff 1993). DIF is a statistical concept, while item bias is a social concept. In the context of international educational surveys, DIF is also referred to as item-by-country interaction. DIF is generally seen as a problematic phenomenon, i.e. as an indicator of item bias, and the solution is therefore often to remove items that show DIF, to treat the items as not-administered for the groups where they showed DIF, or to allow for country-specific item parameters. But sometimes items are important for the construct, and differences between groups can be understandable and meaningful. For example, Hagquist and Andrich (2017) argue that stomach ache as an indicator of psychosomatic problems will have different interpretations in boys and girls, because girls can experience stomach ache in connection with their menstrual periods. They state that: "It turns out that in dealing with this DIF a critical issue is whether this potential source of the DIF should be considered relevant or irrelevant for the conceptualisation of psychosomatic problems and its applications" (Hagquist and Andrich 2017, p. 7). Therefore, they suggest not simply removing such items, but resolving them by splitting them into two items, one for each group. This way, the information remains in the study, and the groups' different relations to the item are taken into account.
This solution is also available to international educational surveys, but it would make it more difficult to explain the construct theoretically and to deduce proficiency scales from the data because they would be different in countries with different country-specific parameters. A number of studies have discussed the consequences of DIF in international large-scale assessments. According to Kreiner and Christensen (2014), the "evidence against the Rasch model is overwhelming" in their secondary analyses of PISA 2006 data, and they argue that the DIF is seriously impairing the league tables. Using an alternative statistical method based on a long-form market basket definition (Mislevy 1998), Zwitser et al. (2017) argue that they are able to take DIF into account, and at the same time provide final scores that are comparable between countries. In their analysis, model fit improves substantially if country-specific item parameters are included in the method, and the resulting league table is "nearly the same as the PISA league table that is based on an international calibration" (Zwitser et al. 2017, p. 225). They use this as evidence for the claim that "PISA methodology is quite robust against (non-)uniform DIF" (ibid.). Most of the research on DIF in international large-scale assessments is looking for sources for DIF and finds it in differences in language, curriculum, or culture (Huang et al. 2016; Oliveri et al. 2013; Sandilands et al. 2013; Wu and Ercikan 2006), while other studies investigate gender DIF (Grover and Ercikan 2017; Innabi and Dodeen 2006; Le 2009; Punter et al. 2017). Only one paper was found relating to DIF in ICILS 2013 (Punter et al. 2017). In this paper, the assessment data from ICILS 2013 was re-analyzed using a three-dimensional 2PL IRT model (the GPCM, generalized partial credit model), showing better fit than the Rasch model used in the international report (Fraillon et al. 2014). 
Correlations between these three dimensions ranged between 0.636 between dimension 1 and 3 for girls in Norway, and 0.982 between dimension 2 and 3 for girls in Slovenia. The analysis of differences in boys and girls in these three dimensions showed that girls outperformed boys in most countries on the dimension called evaluating and reflecting on information, and even more so on the dimension called sharing and communicating information, while no significant gender differences were found in the dimension called applying technical functionality (Punter et al. 2017, p. 777). The authors argue that the DIF found in relation to gender, and resolved by implementing a three-dimensional solution, is an argument in favor of analyzing the ICILS data in three dimensions instead of the uni-dimensional solution chosen in the international report. By far most of the research done in relation to DIF is concerned with improving test fairness, and has been for thousands of years (Holland and Wainer 1993, p. xiii). Therefore, the consequence of identifying DIF in items is usually to remove the item from the test or to resolve it by splitting it or marking it as not-administered for the groups showing DIF. But as already pointed out by Angoff (1993, p. 21ff.), investigation of DIF can give interesting insights into the construct and into the groups of students taking the test etc. In some sense, each item in a test is a construct in itself, which tests the specific knowledge and/or skill that it asks about. Items in a test can typically be arranged into a number of groups of similar items, relating to a sub-area (aspect) of the construct. As noted, the items in ICILS are related to two strands, but in the international analysis, it was found that these strands were highly correlated, so the test can be considered as unidimensional (Fraillon et al. 2014, p. 73). 
In PIRLS and the Trends in International Mathematics and Science Study (TIMSS), each of the three main constructs (science literacy, mathematical literacy, and reading literacy) is reported both as a uni-dimensional scale and as multiple (three or four) sub-dimensions. The fact that countries are not positioned the same way in each league table of the sub-dimensions is an indication of differential item functioning in the main scale between items from the different sub-scales, which opens up the possibility of seeing "DIF as an interesting outcome" (Zwitser et al. 2017, p. 214). Thus, when defining a construct to be measured, one has to decide how broad it should be, and how much differential item functioning is acceptable between groups of items. This decision can be called DIF by design, which is also what is used in the analysis in this paper. DIF could be an indication of the instrument measuring more than one construct, but if the constructs are closely correlated and conceptually connected, they might work adequately statistically for the majority of the groups of students. Finding DIF in items for a particular group, e.g., for students from a specific country, can therefore be seen both as an indication of multi-dimensionality of the construct and as a potentially interesting and important characteristic of this group, be it special skills or lack of knowledge.

Methods and instruments

The student responses found in the dataset from the International Computer and Information Literacy Study (ICILS) 2013 (Fraillon et al. 2014) are re-analyzed using the Rasch model (Rasch 1960). The Rasch model separates the item difficulties and the student abilities, making it possible to talk about item difficulties independently of the students taking the test. The Rasch model gives the probability of a correct response (a score of 1, rather than 0) to an item, i, with a difficulty (\(\delta_{i}\)) depending on the ability (\(\theta_{p}\)) of the respondent, p.
When a respondent has the same ability as the difficulty of an item, she has a 50 percent probability of answering correctly. In the case of items with more categories than 0 and 1, a partial credit version of the Rasch model can be used (Andersen 1977; Andrich 1978; Masters 1982). The partial credit model can be written as follows:

$$P_{pik} = \frac{e^{\sum_{j = 0}^{k} \left( \theta_{p} - \delta_{ij} \right)}}{\sum_{n = 0}^{m_{i}} e^{\sum_{j = 0}^{n} \left( \theta_{p} - \delta_{ij} \right)}},\quad k = 0, 1, \ldots, m_{i},$$

where \(P_{pik}\) is the probability of person p responding in category k of item i, and \(m_{i} + 1\) is the number of categories in item i. Under the Rasch model, DIF can be described by the formula \(P\left( X = 1 \mid \theta, G \right) \ne P\left( X = 1 \mid \theta \right)\), i.e. the probability of responding correctly to an item is different for members and non-members of the group G. This can be integrated into the Rasch model:

$$P_{pik} = \frac{e^{\sum_{j = 0}^{k} \left( \theta_{p} - \delta_{ij} + \gamma_{i} G_{p} \right)}}{\sum_{n = 0}^{m_{i}} e^{\sum_{j = 0}^{n} \left( \theta_{p} - \delta_{ij} + \gamma_{i} G_{p} \right)}},\quad k = 0, 1, \ldots, m_{i},$$

where \(G_{p}\) is 1 if person p is a member of the group G, and otherwise 0. Given that item difficulties are estimated based on empirical data, they cannot be expected to be exactly the same for different groups. Therefore, a threshold for acceptable differences has to be set. Longford et al. (1993, p. 175) reproduce a table developed by N.S. Petersen from the Educational Testing Service in 1987. In this table, Petersen differentiates between three categories of DIF based on Mantel and Haenszel's differential item functioning (MH D-DIF): A, B, and C. DIF in category A is so low that it needs no attention.
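The two models above can be sketched numerically. The following Python fragment is an illustrative sketch only (the study itself used the TAM package in R); the function names are chosen for this example.

```python
import math

def pcm_probs(theta, deltas):
    """Category probabilities under the partial credit model.

    theta  : person ability theta_p
    deltas : step difficulties [delta_i1, ..., delta_im]
             (the empty sum for category 0 is defined as 0)
    Returns P(X = k) for k = 0 .. m.
    """
    # Cumulative sums of (theta - delta_ij) over the steps.
    cum = [0.0]
    for d in deltas:
        cum.append(cum[-1] + (theta - d))
    numerators = [math.exp(c) for c in cum]
    total = sum(numerators)
    return [n / total for n in numerators]

def rasch_prob_with_dif(theta, delta, gamma, in_group):
    """P(correct) for a dichotomous item, shifting the logit by
    gamma for members of the group G (uniform DIF)."""
    logit = theta - delta + (gamma if in_group else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))
```

For a dichotomous item with ability equal to difficulty, `pcm_probs(0.0, [0.0])` gives a 50 percent probability for each score, matching the description in the text.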
In category B, the level of DIF calls for consideration, and "if there is a choice among otherwise equivalent items, select the item with the smallest absolute value of MH D-DIF" (ibid.). Items with a DIF in category C should only be included in a test if it is "essential to meet specifications", and should be documented and brought before an independent review panel. Based on the Educational Testing Service (ETS) DIF classification rules presented and expanded in Longford et al. (1993), Paek and Wilson (2011, p. 1028) calculate the threshold values as they would look in a Rasch framework:

$$\begin{aligned} & A\ \text{if}\ |\gamma| \le 0.426\ \text{or if}\ H_{0}: \gamma = 0\ \text{is not rejected at the}\ 0.05\ \text{level} \\ & B\ \text{if}\ 0.426 < |\gamma| < 0.638\ \text{and if}\ H_{0}: \gamma = 0\ \text{is rejected at the}\ 0.05\ \text{level} \\ & C\ \text{if}\ 0.638 \le |\gamma|\ \text{and if}\ H_{0}: \gamma = 0\ \text{is rejected at the}\ 0.05\ \text{level} \end{aligned}$$

where A is considered a negligible DIF, B a medium DIF, and C a large DIF. In the ICILS DIF analyses that follow, the standard errors are well below 0.025 for all items. Thus, in all cases, the null hypothesis will be rejected for γ above 0.426. In ICILS, the international report selects 0.3 logits (described as "approximately about one-third of a standard deviation" of the distribution of students (Fraillon et al. 2015, p. 164)) as the threshold for considerable DIF, which means that the difference between two groups would be 0.6 logits.

The published dataset from ICILS 2013 was used.Footnote 2 Items were re-coded and deleted or excluded from the scaling for individual countries in accordance with the decisions in the technical report (Fraillon et al. 2015, pp. 171, 264ff.).
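The A/B/C classification rules can be expressed as a small helper. This is an illustrative Python sketch; the function name is invented for the example, and the `significant` flag stands in for the test of H0: γ = 0 at the 0.05 level.

```python
def ets_dif_category(gamma, significant=True):
    """ETS A/B/C DIF classification translated to the Rasch framework,
    using the threshold values given by Paek and Wilson (2011).

    gamma       : estimated DIF effect in logits
    significant : whether H0: gamma = 0 is rejected at the 0.05 level
    """
    g = abs(gamma)
    if g <= 0.426 or not significant:
        return "A"  # negligible DIF
    if g < 0.638:
        return "B"  # medium DIF
    return "C"      # large DIF
```

Because the standard errors in the ICILS analyses are well below 0.025, `significant` is effectively always true whenever |γ| exceeds 0.426, as noted in the text.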
This study intends to understand which content knowledge can be gained from items showing DIF; therefore, the items that were removed from the dataset prior to the final international estimation were kept for the countries of interest in this study. The reason for removal in the international study might very well have been DIF, but only one of the removed items (A10C for Germany, under the MH D-DIF Level C criterion) actually showed DIF in the analyses of the present study (Additional file 1). The test analysis modules (TAM) package in R (Robitzsch et al. 2017) was used for the analyses (Footnote 3), which were carried out under conditions as close to the ICILS international study as possible, including the use of weights to sample a group of 500 students from each country (250 students from each of the two participating Canadian provinces). The model used was the partial credit model ("item + item * step"), and the estimation was done using the marginal maximum likelihood algorithm, with the mean of the item difficulties constrained to 0. To make sure that the analysis was comparable to the international ICILS analysis, item difficulties from the estimation were compared to the item difficulties reported in the ICILS technical report (Fraillon et al. 2015, p. 171). One item (A10E) showed a rather large discrepancy (around 0.5 logits) from the ICILS estimations due to different response distributions in the samples. In the ICILS international study, only countries that met the IEA sampling requirements were included in the estimation of the item difficulties. Because Denmark is one of the countries of interest in this study, it was included in the following analyses. In order to ensure comparability and soundness of the analyses, an analysis of only the countries that met the sampling requirements was compared with an analysis including 500 students from Denmark alongside the rest of the countries.
Only minor differences were noted in the item difficulties in these two analyses. The analyses of DIF were carried out individually for each country using the R formula \(\sim item*step + country*item\), which is equivalent to the ConQuest (Wu et al. 2007) parametrization \(\sim item + step + item*step + item + country + country*item\). In the context of marginal maximum likelihood estimation, the analysis can take group differences in ability into account when estimating item parameters. This is done by allowing each group (in this case country and gender) to have its own population parameters (Footnote 4). The standard settings of the TAM function tam.mml.mfr were used, except for fac.oldxsi, which was set to 0.4 to ensure convergence. For comparison, an analysis of the same dataset was carried out without the country interaction. The results of these analyses are presented in Table 1. Table 1 Key parameters from the Rasch analyses As can be seen from the deviances, the analyses taking DIF into account do describe the data better for all countries in this study. In order to test for homogeneity of the DIF, all expected score curves were plotted so the curves for the country under investigation could be compared visually to the curves for the remaining countries. These comparisons supported the hypothesis that the DIF could be considered uniform (Hanson 1998). The last three columns in Table 1 report the number of items that showed DIF according to the ICILS criterion (DIF larger than half of 0.6 logits), the MH D-DIF Level B criterion (DIF larger than half of 0.426 logits), and the Level C criterion (DIF larger than half of 0.638 logits). The number of items showing DIF is rather high, but this observation is not of primary interest for this study. In order to get insight into the content of the DIF items, the items were collected in groups based on the ICILS study's identification of the items in relation to the strands and aspects.
As the Danish National Research Coordinator of ICILS, I had access to a mapping of items onto aspects in the ICILS working documents (IEA Data Processing Center, IEA Secretariat, and The Australian Council for Educational Research 2014). In Tables 2, 3 and 4, the description of the items is given together with the sizes of the DIF (Figs. 1, 2 and 3 show the sizes of DIF visually).

Table 2 Easier and harder items for Danish students. Items showing DIF for Danish students. Green bars show how many items there are in the aspect in total. Yellow bars show the number of items having a DIF indicating that they are easier for Danish students at level B (darker yellow), level ICILS (medium yellow), and level C (lighter yellow). Orange bars show the number of items being harder for Danish students at levels B, ICILS, and C.

Table 3 Easier and harder items for Norwegian students. Items showing DIF for Norwegian students. Green bars show how many items there are in the aspect in total. Yellow bars show the number of items having a DIF indicating that they are easier for Norwegian students at level B (darker yellow), level ICILS (medium yellow), and level C (lighter yellow). Orange bars show the number of items being harder for Norwegian students at levels B, ICILS, and C.

Table 4 Easier and harder items for German students. Items showing DIF for German students. Green bars show how many items there are in the aspect in total. Yellow bars show the number of items having a DIF indicating that they are easier for German students at level B (darker yellow), level ICILS (medium yellow), and level C (lighter yellow). Orange bars show the number of items being harder for German students at levels B, ICILS, and C.

Danish, German, and Norwegian students all find a number of items from Aspect 1.1, Knowing about and understanding computer use, easier than their peers of the same ability level from the other participating countries.
Items in this aspect concern opening a link, navigating to URLs by inserting them into the browser address bar, opening files of specific types, and switching applications from the task bar. As can be seen from the descriptions, these items are connected to basic use of computers, and therefore address the computer literacy aspect of Computer and Information Literacy measured in ICILS. The second observation is that Danish students find items from Aspect 2.1, Transforming information, easier than their peers from other countries. Some of the items from this aspect are related to computer literacy, like using software to crop an image, but most of them are more related to information literacy, like excluding irrelevant information in a poster, adapting information to an audience, and converting a description of directions into a visual route on a map. Norwegian students find a single item (adapt information for an audience) from Aspect 2.1 easier than peers of similar ability in other participating countries. German students, on the other hand, find one item (exclude irrelevant information in a poster) in Aspect 2.1 more difficult than their peers. Thirdly, German students find two items from Aspect 2.4, Using information securely and safely, easier, and three items harder than peers from other participating countries. The easy items are connected to information literacy (they test if students can identify features that make one of two passwords more secure, and recognize that usage restrictions for images are a legal issue), while two of the harder items are connected to computer literacy (identify that an email does not originate from the purported sender, and that a link's URL does not match the URL displayed in the link text). The third of the harder items is more connected to information literacy (identify information that is risky to include in a public profile).
Danish students find an item from Aspect 2.4 easier than their peers in other countries. On the other hand, they find two items from this aspect more difficult than their peers. The easy item is connected to being able to understand technical aspects of secure Internet use (identify that an email does not originate from the purported sender). One of the more difficult items is of the same kind, namely the one that tests students' ability to identify URL fraud, while the other is about identifying information that is risky to include in a public profile, and could be said to be more related to information literacy. Norwegian students also find an item in Aspect 2.4 easier than their peers. This item, explaining the potential problem if a personal email address is publicly available, is more connected to information literacy. The Norwegian students also find one item more difficult, namely the one related to identifying paid search results from among organic search results. This item is considered more related to information literacy. Two items in Aspect 2.2, Creating information, are harder for Danish students. These items are related to information literacy, more specifically to the layout of a presentation or information sheet, including laying out images and creating a balanced layout of text and images. Norwegian students also find the latter item harder. Contrary to this, German students find three items from Aspect 2.2 easier. The items German students find easy in Aspect 2.2 are all related to layout, including designing and laying out text, using colors consistently, and establishing a clear role for a photo and caption on a website. The final observation is related to Aspect 1.2, Accessing and evaluating information. Danish students find items from this aspect harder than their peers from other countries. The Danish students have trouble selecting relevant images in a presentation and presenting accurate information.
These items are related to information literacy. The same goes for the item that the Norwegian students find difficult, namely finding information on a website. Contrary to this, German students find it easier to find specific information on a website. This paper shows that essential results can be identified by comparing the distribution of difficulties of items in international large-scale assessments. This is a more constructive approach to the challenge of DIF, but it does not eliminate the serious threat to the validity of the comparison of countries. One explanation for the DIF could be that the CIL construct is in fact more than one construct, related to the two strands: collecting and managing information, and producing and exchanging information. This was partly the conclusion in the study by Punter et al. (2017) mentioned earlier, even though they split the items into three strands: evaluating and reflecting on information, communicating information, and applying technical functionality. This study underpins the hypothesis that CIL may be two things: Computer Literacy and Information Literacy. Therefore, I propose that future studies investigate the psychometric properties of a two-dimensional scale composed of these two aspects. While I believe that the content-oriented approach to DIF used in this paper provides very important knowledge, which could be used in large-scale international assessment studies to inform educators more about the content aspects of the assessment, I also want to bring up some concerns. First, even though I think I have identified a number of important insights, the DIF does show a somewhat unclear picture. One example is the items measuring computer literacy in Aspect 2.4 that Danish students found easier and harder, respectively. Second, the number of items showing DIF is rather small [even though it can be considered high when taking into account the severity of the DIF in relation to the league tables (cf.
Kreiner and Christensen 2014)]. A number of conclusions can be drawn based on these observations. First, it seems that Norwegian and Danish students find items related to computer literacy easier than their peers from other countries. This could be connected to the fact that Denmark and Norway have some of the highest ICT Development Indexes worldwide, and that computers are highly available in their classrooms (Fraillon et al. 2014, p. 96). Students in these countries are used to working with computers, probably more than their peers from the other participating countries. Second, however, Danish, and to a certain degree Norwegian, students find items related to information literacy more difficult. This is the case when it comes to the layout and design of posters, information sheets, etc., and when it comes to communicating appropriately in a specific situation. In contrast, German students seem to be comparatively good at designing and laying out posters, web pages, etc. From a Danish perspective, these results are rather surprising and alarming. Information literacy has been an integral part of the teaching and learning standards, especially in relation to teaching and learning Danish, for several years (Undervisningsministeriet 2009, 2014), and the use of computers for research has been promoted for decades (Bundsgaard et al. 2014, p. 111f.). If Danish students are struggling with accessing and evaluating, managing, and creating information, they will face problems in their future studies, as citizens, and at the workplace. Do these conclusions indicate that having easy access to technology might help develop basic computer skills, while more critical parts of computer and information literacy need more focus in teaching practices to be developed? As the title of this paper suggests, identifying country DIF in international comparative educational studies can be considered a pedagogical tool.
The analyses can give teachers and curriculum designers knowledge of which aspects of a construct students in a specific country find particularly easy or hard, and this can be used in giving these particular aspects extra focus in teaching. Based on the analyses in this paper, a recommendation for Danish (and to a certain degree Norwegian) teachers would be to put extra emphasis on teaching information literacy, while German students might gain if their teachers put more emphasis on computer literacy.

The data set used is freely available at https://www.iea.nl/repository/studies/icils-2013 after login (user creation is free). The R script used in the analyses is available as Additional file 1.

The other countries/education systems participating in ICILS 2013, and included in the analysis, were Australia, Chile, Croatia, the Czech Republic, the Republic of Korea, Lithuania, Poland, the Russian Federation, the Slovak Republic, Slovenia, Thailand, and Turkey. The two Canadian provinces Newfoundland and Labrador and Ontario were also included. The City of Buenos Aires (Argentina), Denmark, Hong Kong SAR, the Netherlands, and Switzerland did not meet the sampling requirements, and were therefore not included in the international item estimation. In this paper, however, Denmark is included.

https://www.iea.nl/our-data.

The R code is available as an appendix to this paper.

In the international estimation a group variable called "Windows" was included to take into account whether a Windows computer was used by the test taker. This variable was not available in the public dataset.
CIL: computer and information literacy
DIF: differential item functioning
ETS: Educational Testing Service
IEA: International Association for the Evaluation of Educational Achievement
ICILS: International Computer and Information Literacy Study
MH D-DIF: Mantel and Haenszel's differential item functioning
PIRLS: progress in international reading literacy study
PISA: Programme for International Student Assessment
TAM: test analysis modules
TIMSS: trends in international mathematics and science study

Andersen, E. B. (1977). Sufficient statistics and latent trait models. Psychometrika, 42(1), 69–81.
Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43(4), 561–573. https://doi.org/10.1007/BF02293814.
Angoff, W. H. (1993). Perspectives on differential item functioning methodology. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 3–24). Hillsdale, NJ: Lawrence Erlbaum.
Bundsgaard, J., Pettersson, M., & Puck, M. R. (2014). Digitale kompetencer. It i danske skoler i et internationalt perspektiv [Digital Competences. IT in Danish Schools in an International Perspective]. Aarhus: Aarhus Universitetsforlag.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age. The IEA international computer and information literacy study international report. Cham: Springer.
Fraillon, J., Schulz, W., & Ainley, J. (2013). International computer and information literacy study: Assessment framework. Retrieved from http://ifs-dortmund.de/assets/files/icils2013/ICILS_2013_Framework.pdf.
Fraillon, J., Schulz, W., Friedman, T., Ainley, J., Gebhardt, E., Ainley, J., et al. (2015). International association for the evaluation of educational achievement (IEA). ICILS 2013: Technical report.
Grover, R. K., & Ercikan, K. (2017). For which boys and which girls are reading assessment items biased against? Detection of differential item functioning in heterogeneous gender populations. Applied Measurement in Education, 30(3), 178–195.
https://doi.org/10.1080/08957347.2017.1316276.
Hagquist, C., & Andrich, D. (2017). Recent advances in analysis of differential item functioning in health research using the Rasch model. Health and Quality of Life Outcomes. https://doi.org/10.1186/s12955-017-0755-0.
Hanson, B. A. (1998). Uniform DIF and DIF defined by differences in item response functions. Journal of Educational and Behavioral Statistics, 23(3), 244–253.
Holland, P. W., & Wainer, H. (1993). Differential item functioning. Hillsdale, NJ: Lawrence Erlbaum.
Huang, X., Wilson, M., & Wang, L. (2016). Exploring plausible causes of differential item functioning in the PISA science assessment: Language, curriculum or culture. Educational Psychology, 36(2), 378–390. https://doi.org/10.1080/01443410.2014.946890.
IEA Data Processing Center, IEA Secretariat, & The Australian Council for Educational Research. (2014, February). International computer and information literacy study. Item map for CIL framework.
Innabi, H., & Dodeen, H. (2006). Content analysis of gender-related differential item functioning TIMSS items in mathematics in Jordan. School Science and Mathematics, 106(8), 328–337. https://doi.org/10.1111/j.1949-8594.2006.tb17753.x.
Kreiner, S., & Christensen, K. B. (2014). Analyses of model fit and robustness. A new look at the PISA scaling model underlying ranking of countries according to reading literacy. Psychometrika, 79(2), 210–231. https://doi.org/10.1007/s11336-013-9347-z.
Le, L. T. (2009). Investigating gender differential item functioning across countries and test languages for PISA science items. International Journal of Testing, 9(2), 122–133. https://doi.org/10.1080/15305050902880769.
Longford, N. T., Holland, P. W., & Thayer, D. T. (1993). Stability of the MH D-DIF statistics across populations. In P. W. Holland & H. Wainer (Eds.), Differential item functioning. Hillsdale: Lawrence Erlbaum.
Masters, G. N. (1982). A Rasch model for partial credit scoring. Psychometrika, 47(2), 149–174.
Mislevy, R. J. (1998). Implications of market-basket reporting for achievement-level setting. Applied Measurement in Education, 11(1), 49–63. https://doi.org/10.1207/s15324818ame1101_3.
OECD. (2014). PISA 2012—technical report. Retrieved from http://www.oecd.org/pisa/pisaproducts/PISA-2012-technical-report-final.pdf.
Oliveri, M. E., Ercikan, K., & Zumbo, B. (2013). Analysis of sources of latent class differential item functioning in international assessments. International Journal of Testing, 13(3), 272–293. https://doi.org/10.1080/15305058.2012.738266.
Paek, I., & Wilson, M. (2011). Formulating the Rasch differential item functioning model under the marginal maximum likelihood estimation context and its comparison with Mantel-Haenszel procedure in short test and small sample conditions. Educational and Psychological Measurement, 71(6), 1023–1046. https://doi.org/10.1177/0013164411400734.
Punter, R. A., Meelissen, M. R. M., & Glas, C. A. W. (2017). Gender differences in computer and information literacy: An exploration of the performances of girls and boys in ICILS 2013. European Educational Research Journal, 16(6), 762–780. https://doi.org/10.1177/1474904116672468.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danmarks pædagogiske Institut.
Robitzsch, A., Kiefer, T., & Wu, M. (2017). TAM: Test analysis modules. R package version 2.8-21. Retrieved from https://CRAN.R-project.org/package=TAM.
Sandilands, D., Oliveri, M. E., Zumbo, B. D., & Ercikan, K. (2013). Investigating sources of differential item functioning in international large-scale assessments using a confirmatory approach. International Journal of Testing, 13(2), 152–174. https://doi.org/10.1080/15305058.2012.690140.
Undervisningsministeriet. (2009). Fælles mål 2009—Dansk [Common Goals 2009—Danish].
Retrieved from http://www.uvm.dk/Service/Publikationer/Publikationer/Folkeskolen/2009/~/media/Publikationer/2009/Folke/Faelles%20Maal/Filer/Faghaefter/120326%20Faelles%20maal%202009%20dansk%2025.ashx.
Undervisningsministeriet. (2014). Forenklede Fælles Mål Dansk [Simplified Common Goals 2014 Danish]. Retrieved October 30, 2015, from EMU Danmarks læringsportal website: http://www.emu.dk/omraade/gsk-lærer/ffm/dansk.
Wu, M. L., Adams, R. J., Wilson, M. R., & Haldane, S. A. (2007). ConQuest: Generalised item response modelling software (version 2.0). Camberwell: ACER Press.
Wu, A. D., & Ercikan, K. (2006). Using multiple-variable matching to identify cultural sources of differential item functioning. International Journal of Testing, 6(3), 287–300. https://doi.org/10.1207/s15327574ijt0603_5.
Zwitser, R. J., Glaser, S. S. F., & Maris, G. (2017). Monitoring countries in a changing world: A new look at DIF in international surveys. Psychometrika, 82(1), 210–232. https://doi.org/10.1007/s11336-016-9543-8.

The author wishes to thank the editor Peter Van Rijn and the anonymous reviewers for their invaluable comments and suggestions on an earlier version of the paper. The author also would like to express his deep gratitude to the thousands of students and teachers around the globe who gave their time and effort by taking the test and answering questions. This research was not funded.

Jeppe Bundsgaard, Danish School of Education, Aarhus University, Copenhagen, Denmark. The author read and approved the final manuscript. Correspondence to Jeppe Bundsgaard. The author was National Research Coordinator for Denmark in ICILS 2013.

Additional file 1. R script used in the analyses.

Bundsgaard, J. DIF as a pedagogical tool: analysis of item characteristics in ICILS to understand what students are struggling with. Large-scale Assess Educ 7, 9 (2019). doi:10.1186/s40536-019-0077-2
The system and its model

A system is a part of the real world which interacts with its environment: it reacts to input signals from the environment with output signals. Signals are physical (biological, economic, etc.) quantities or quantity changes which convey information about the nature of the examined system or phenomenon. Modelling is an important part of the examination of systems. It is a tool of experimental research in which we replace the examined system with another one, the model, and use the model to learn more about the behaviour and characteristics of the system. In many cases it is impossible to carry out experiments on the real system, because doing so would be too expensive or would disturb the operation of the system. Other times the system does not yet exist; we only plan to create it, but first we want to analyse with a model how it will behave. The purposes of modelling are:
- analysis and better understanding of the characteristics and behaviour of systems (analysis);
- prediction of the future condition of systems;
- completion of system design tasks (synthesis);
- validation of systems.
The model describes the behaviour of the system from a specific point of view and summarises our knowledge about the system. Models can be physical or mathematical. For instance, the miniature representation of a real system (e.g. a model aircraft, model car, model ship or model train) is a physical model. Physical models make it possible to examine the main properties of real systems, though some properties may be size-dependent, so they cannot be examined on such miniature representations. Model vehicles, for example, enable the examination of driving properties but not the examination of phenomena influencing passenger comfort. Certain systems show similarities and analogies. For example, there is correspondence between the elements and signals of a mechanical and an electrical system.
So, a mechanical system can be examined by analysing a corresponding electrical system and, vice versa, the properties of an electrical system can be given on the basis of an equivalent mechanical system. Figure 1 shows an electrical system consisting of a resistance, an inductance and a capacitor, and a mechanical system consisting of a mass, a spring and a fluid damping element. The input signal of the electrical system is the u voltage, and its output signal is the i current (or the q charge). The input signal of the mechanical system is the F force, and its output signal is the h displacement. We want to determine the relationship between the input and the output signal.

Figure 1: Analogous mechanical and electrical system

Electrical phenomena are analogous with fluid flow in many respects, so they can be illustrated by examining the characteristics of fluid flow. The mathematical model describes the behaviour of the system with mathematical equations. It enables the analysis of system behaviour without the need to carry out experiments on the real system. We can use the system model to make calculations and numerically simulate the behaviour of the system. The system model is also used for designing controllers. As an example, let's see the mathematical description of the mechanical and electrical systems shown in Figure 1. The behaviour of the systems can be described with a differential equation which reveals the relationship between the input signal and its changes (derivatives) and the output signal and its changes (derivatives). The behaviour of the electrical system can be described with the following differential equation:

$u=iR+L\frac{di}{dt} +\frac{1}{C} \int _{0}^{t}i\,d\tau $

This equation means that the sum of the voltage drops on the resistance, the inductance and the capacitance gives the voltage fed into the network.
The symbol of electric charge is q, the integral of the electric current; the charge is taken here as the output signal. This way the above equation takes the following form:

$u=L\frac{d^{2} q}{dt^{2} } +R\frac{dq}{dt} +\frac{1}{C} q $

The differential equation of the mechanical system:

$F=m\frac{d^{2} h}{dt^{2} } +k\frac{dh}{dt} +ch $

which means that the resultant of the forces acting on the mass (the F tension force minus the velocity-dependent fluid damping force and the displacement-dependent spring force) produces the acceleration. It is clear from this that the behaviour of both the electrical and the mechanical system is described by a second order differential equation. Analogous corresponding quantities in the two systems:
- input signal: u voltage ↔ F tension force
- output signal: q charge ↔ h displacement
- L inductance ↔ m mass
- R resistance ↔ k fluid damping coefficient
- 1/C reciprocal of the capacitance ↔ c spring constant

Now let's mark the input signal with $u$, the output signal with $y$, and the parameter values with $a_{0} ,a_{1} ,a_{2} $. Both systems' behaviour can be described with the following differential equation:

$a_{2} \frac{d^{2} y}{dt^{2} } +a_{1} \frac{dy}{dt} +a_{0} y=u $

A possible solution of the equation can be reached by expressing the highest derivative (in the case of the mechanical system, the acceleration):

$\frac{d^{2} y}{dt^{2} } =\frac{1}{a_{2} } u-\frac{a_{1} }{a_{2} } \frac{dy}{dt} -\frac{a_{0} }{a_{2} } y $

A sudden change in the input signal first affects only the acceleration; as time evolves, the acceleration changes the velocity, and ultimately the displacement. (Velocity is obtained by integrating the acceleration signal, and displacement by integrating the velocity signal.) Figure 2 shows the block diagram, which can be the basis of a calculation algorithm.
By specifying the input signal and the initial conditions, the integrator blocks calculate the derivative of the output signal and the change of the output signal versus time. (In the figure the superscripts in brackets indicate the derivatives.)

Figure 2: Block diagram of the second order differential equation

The output signals of the integrators are important quantities for the system (displacement and velocity, or electric charge and current): they are the so-called states of the system. Their values are influenced by the earlier changes of the system, so we may say that they have a memory-like property. They are unable to react immediately to sudden input signal changes; they change gradually. With the state variables the second order differential equation can be transformed into a system of two first order differential equations. This is the so-called state equation of the system:

$\frac{dx_{1} }{dt} =x_{2}$

$\frac{dx_{2} }{dt} =-\frac{a_{0} }{a_{2} } x_{1} -\frac{a_{1} }{a_{2} } x_{2} +\frac{1}{a_{2} } u $

$y=x_{1} $

The model of the system makes it possible to examine and calculate the behaviour of the system, and its response to input signals and initial conditions. In simple cases (linear equations with well-defined, analytically described input signals) the calculations can be performed analytically. These calculations are made easier by various software tools, such as Matlab, Mathematica and SIMUL for modelling, or Prolog for logical relations.

References: Levine (1996); Szűcs (1970, 1972)
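The state equation lends itself directly to numerical simulation, exactly as the integrator block diagram suggests. The sketch below uses simple forward-Euler integration, a convenience assumption for illustration; in practice a dedicated ODE solver (e.g. in Matlab) would be used.

```python
def simulate_second_order(a0, a1, a2, u, t_end, dt=1e-4, x1_0=0.0, x2_0=0.0):
    """Forward-Euler integration of the state equation
        dx1/dt = x2
        dx2/dt = -(a0/a2)*x1 - (a1/a2)*x2 + u(t)/a2,   y = x1.
    u is the input signal as a function of time; returns y(t_end).
    """
    x1, x2 = x1_0, x2_0  # initial displacement and velocity (charge and current)
    t = 0.0
    while t < t_end:
        dx1 = x2
        dx2 = -(a0 / a2) * x1 - (a1 / a2) * x2 + u(t) / a2
        x1 += dt * dx1  # integrator: velocity -> displacement
        x2 += dt * dx2  # integrator: acceleration -> velocity
        t += dt
    return x1

# Unit step input on a mass-spring-damper with m = a2 = 1, k = a1 = 2, c = a0 = 1
# (critically damped). The steady-state displacement approaches u/a0 = 1.
y = simulate_second_order(a0=1.0, a1=2.0, a2=1.0, u=lambda t: 1.0, t_end=20.0)
print(round(y, 2))  # approaches 1.0
```

Setting the input and solving a2 y'' + a1 y' + a0 y = u at steady state (all derivatives zero) gives y = u/a0, which the simulation reproduces: the states change gradually, not instantaneously, illustrating their memory-like property.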
Tier 1 Capital Ratio: Definition and Formula for Calculation

By Adam Hayes; reviewed by Margaret James; fact checked by Diane Costagliola

What Is the Tier 1 Capital Ratio?

The tier 1 capital ratio is the ratio of a bank's core tier 1 capital—that is, its equity capital and disclosed reserves—to its total risk-weighted assets. It is a key measure of a bank's financial strength that has been adopted as part of the Basel III Accord on bank regulation. The tier 1 capital ratio measures a bank's core equity capital against its total risk-weighted assets—which include all the assets the bank holds that are systematically weighted for credit risk. For example, a bank's cash on hand and government securities would receive a weighting of 0%, while its mortgage loans would be assigned a 50% weighting. Tier 1 capital is core capital and is comprised of a bank's common stock, retained earnings, accumulated other comprehensive income (AOCI), noncumulative perpetual preferred stock and any regulatory adjustments to those accounts. The tier 1 capital ratio is the ratio of a bank's core tier 1 capital—that is, its equity capital and disclosed reserves—to its total risk-weighted assets.
The Formula for the Tier 1 Capital Ratio Is:

$$\text{Tier 1 Capital Ratio} = \frac{\text{Tier 1 Capital}}{\text{Total Risk-Weighted Assets}}$$

What Does the Tier 1 Capital Ratio Tell You? The tier 1 capital ratio is the basis for the Basel III international capital and liquidity standards devised after the financial crisis, in 2010. The crisis showed that many banks had too little capital to absorb losses or remain liquid, and were funded with too much debt and not enough equity. To force banks to increase capital buffers, and ensure they can withstand financial distress before they become insolvent, Basel III rules would tighten both tier 1 capital and risk-weighted assets (RWAs). The equity component of tier 1 capital has to be at least 4.5% of RWAs. The tier 1 capital ratio has to be at least 6%. Basel III also introduced a minimum leverage ratio—with tier 1 capital, it must be at least 3% of the total assets—and more for global systemically important banks that are too big to fail. The Basel III rules have yet to be finalized due to an impasse between the U.S. and Europe. A firm's risk-weighted assets include all assets that the firm holds that are systematically weighted for credit risk. Central banks typically develop the weighting scale for different asset classes; cash and government securities carry zero risk, while a mortgage loan or car loan would carry more risk. The risk-weighted assets would be assigned an increasing weight according to their credit risk.
Cash would have a weight of 0%, while loans of increasing credit risk would carry weights of 20%, 50% or 100%. The tier 1 capital ratio differs slightly from the tier 1 common capital ratio. Tier 1 capital includes the sum of a bank's equity capital, its disclosed reserves, and non-redeemable, non-cumulative preferred stock. Tier 1 common capital, however, excludes all types of preferred stock as well as non-controlling interests. Tier 1 common capital includes the firm's common stock, retained earnings and other comprehensive income. Example of the Tier 1 Capital Ratio For example, assume that bank ABC has shareholders' equity of $3 million and retained earnings of $2 million, so its tier 1 capital is $5 million. Bank ABC has risk-weighted assets of $50 million. Consequently, its tier 1 capital ratio is 10% ($5 million/$50 million), and it is considered to be well-capitalized compared to the minimum requirement. On the other hand, bank DEF has retained earnings of $600,000 and stockholders' equity of $400,000. Thus, its tier 1 capital is $1 million. Bank DEF has risk-weighted assets of $25 million. Therefore, bank DEF's tier 1 capital ratio is 4% ($1 million/$25 million), which is undercapitalized because it is below the minimum tier 1 capital ratio under Basel III. Bank GHI has tier 1 capital of $5 million and risk-weighted assets of $83.33 million. Consequently, bank GHI's tier 1 capital ratio is 6% ($5 million/$83.33 million), which is considered to be adequately capitalized because it is equal to the minimum tier 1 capital ratio. The Difference Between the Tier 1 Capital Ratio and the Tier 1 Leverage Ratio The tier 1 leverage ratio is the relationship between a banking organization's core capital and its total assets. The tier 1 leverage ratio is calculated by dividing tier 1 capital by a bank's average total consolidated assets and certain off-balance sheet exposures. 
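The worked examples above reduce to a single division; as a sketch (using the text's illustrative bank figures):

```python
def tier1_capital_ratio(tier1_capital, risk_weighted_assets):
    """Tier 1 capital ratio = tier 1 capital / total risk-weighted assets."""
    return tier1_capital / risk_weighted_assets

# Bank ABC: $3M shareholders' equity + $2M retained earnings vs $50M RWAs.
abc = tier1_capital_ratio(3_000_000 + 2_000_000, 50_000_000)    # 0.10: well-capitalized
# Bank DEF: $400k equity + $600k retained earnings vs $25M RWAs.
def_ratio = tier1_capital_ratio(400_000 + 600_000, 25_000_000)  # 0.04: undercapitalized
# Bank GHI: $5M tier 1 capital vs $83.33M RWAs.
ghi = tier1_capital_ratio(5_000_000, 83_330_000)                # ~0.06: adequately capitalized
```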
Similarly to the tier 1 capital ratio, the tier 1 leverage ratio is used as a tool by central monetary authorities to ensure the capital adequacy of banks and to place constraints on the degree to which a financial company can leverage its capital base, but it does not use risk-weighted assets in the denominator.
Matched steering vector searching based direction-of-arrival estimation using acoustic vector sensor array
Yu Ao1, Ling Wang1, Jianwei Wan1 & Ke Xu1
The acoustic vector sensor (AVS) array is a powerful tool for estimating an underwater target's direction of arrival (DOA) without any bearing ambiguities. However, traditional DOA estimation algorithms generally suffer from low signal-to-noise ratio (SNR) as well as snapshot deficiency. By exploiting the properties of the minimum variance distortionless response (MVDR) beamformer, a new DOA estimation method based on matched steering vector searching is proposed in this article. Firstly, attain a rough estimate of the desired DOA using a traditional algorithm. Secondly, set a small angular interval around the crudely estimated DOA. Thirdly, let the view direction vary over the view interval, and for each view direction, calculate the beam amplitude response of the MVDR beamformer and find the minimum of the amplitude response. Finally, the pseudo-spatial spectrum is obtained, and the accurate estimate of the desired DOA can be obtained through peak searching. Computer simulations verify that the proposed method is efficient for DOA estimation, especially in low-SNR and insufficient-snapshot scenarios. An acoustic vector sensor (AVS) consists of an omnidirectional acoustic pressure receiver and a dipole-like directional particle velocity receiver [1]. An AVS measures the three Cartesian components of the particle velocity as well as the scalar acoustic pressure at a single point in the sound field, synchronously and independently [2]. Compared with standard acoustic pressure sensors, this intrinsic directivity gives an AVS two advantages. One is that the directly measured directional information permits arrays made up of acoustic vector sensors to improve the accuracy of target detection and source localization without increasing the array aperture.
The other is that the left/right ambiguity problem, from which an acoustic pressure sensor array always suffers, never arises. Even a single AVS is capable of localizing a source in the whole space [3], which is of great practical significance. Due to its considerable performance and the huge potential demand in underwater applications, the AVS has developed rapidly in theory and been widely used in many engineering fields during the last two decades, especially in passive DOA estimation. Since Nehorai and Paldi first introduced the AVS array measurement model to the signal processing research community [4], diverse types of DOA estimation algorithms have been proposed [5,6,7,8,9,10,11,12,13]. Hawkes and Nehorai adapt the MVDR (also known as Capon) approach to the AVS array [5]. Wong and Zoltowski link the subspace-based methods, which include the estimation of signal parameters via rotational invariance technique (ESPRIT) [6], root multiple signal classification (MUSIC) [7], and self-initiating MUSIC [8], to the AVS array. The wideband source localization and wideband beamforming issues are discussed in [9, 10] respectively. A 2-D DOA estimation algorithm using the propagator method (PM) is proposed in [11]. Liu et al. introduce a 2-D DOA estimation method for coherent sources with a sparse AVS array [12]. Han and Nehorai put forward a new class of nested vector-sensor arrays which is capable of significantly increasing the degrees of freedom [13]. In [14], a modified particle filtering algorithm for DOA tracking based on a single AVS is proposed. With the help of an L-shaped sparsely-distributed vector sensor array, Si et al. present a novel 2-D DOA and polarization estimation method to handle the scenario where uncorrelated and coherent sources coexist [15].
Recently, several novel techniques such as the parallel profiles with linear dependencies (PARALIND) model [16], compressed sensing [17], and the partial angular sparse representation method [18] have been investigated for DOA estimation using the AVS array. In the practical ocean environment, the signal-to-noise ratio (SNR) is usually quite low and the snapshot data is usually insufficient. These disadvantages may lead to serious performance degradation for DOA estimation when the traditional techniques are applied. To overcome these problems, a number of new algorithms have appeared in the literature [19, 20,21,22,23,24,25,26]. Ichige et al. put forward a modified MUSIC algorithm by using both the amplitude and phase information of the noise subspace [19]. A new method for DOA estimation is proposed in [20] through iterative subspace decomposition. In [21], by means of signal covariance matrix reconstruction, the noise subspace is precisely estimated and the DOA estimation performance is improved. With the help of optimization methods, [22] presented a noise subspace-based iterative algorithm for direction finding. Recently, a few new techniques were combined with DOA estimation, such as the sparse recovery algorithm [23], the sparse decomposition technique [24], the compressive sensing theory [25], and the multiple invariance ESPRIT [26]. In this paper, we investigate the features of the Capon approach in depth. The design principle of the MVDR beamformer can be described as minimizing the variance of interference and noise at the output of the beamformer, while ensuring the distortionless response of the beamformer towards a selected view direction, which is ideally the direction of the desired source.
However, in the case that the view direction does not point to the desired source precisely, even a very slight mismatch will lead to the phenomenon known as signal cancellation [27], i.e., the beamformer will misinterpret the desired signal as an interference and place a null in the direction of the desired signal. Generally speaking, signal cancellation has an unfavorable effect on beamforming and DOA estimation, and several studies have been carried out on suppressing such effects [28,29,30]. However, in this paper, we find that the signal cancellation phenomenon can be utilized to attain better DOA estimation performance by searching for the matched steering vector. What is more, differing from all of the methods mentioned in [19, 20,21,22,23,24,25,26], our study is based on the AVS array; hence, the bearing ambiguity is removed. The rest of this paper is structured as follows. In Section 2, we state the mathematical model for the measurements of an AVS array. In Section 3, we propose our DOA estimation algorithm, give its steps, and analyze the relation between the presented algorithm and the MVDR algorithm. In Section 4, we show some computer simulation experiments and discuss the results. Finally, we conclude this paper in Section 5. Measurement model We consider a horizontal linear array which consists of M acoustic vector sensors, with a uniform element spacing d. Let K mutually uncorrelated narrowband point sources with common center frequency ω be located at azimuths φk and elevations θk (k = 1, 2, …, K) with respect to the first sensor of the array. In addition, φk ∈ [−π, π), θk ∈ [0, π]. In this paper, we consider only azimuth estimation. Figure 1 exhibits the first AVS of the array and the wave vector of one of the impinging signals, which is represented as k, in the Cartesian coordinate system. The density of the water medium ρ and the sound speed in the medium c are assumed to be constant and known a priori.
The AVS array is assumed to be in the far field with respect to all sources, ensuring that the wave fronts at the array are planar. (Figure 1: The first AVS of the array and the wave vector of the impinging signal in the Cartesian coordinate system.) The acoustic pressure component of the kth source signal at the first sensor of the array is defined as [31] $$ {\mathrm{s}}_k(t)={p}_k(t)\exp \left( i\omega t\right) $$ where pk(t) is a zero-mean complex Gaussian process, which denotes the slowly varying random pressure envelope of the kth source signal. Its variance \( {\sigma}_k^2=E\left[{\left|{p}_k(t)\right|}^2\right] \) denotes the power of sk(t). Let a(φk) represent the M-by-1 steering vector, which is the array's response to a unit amplitude plane wave from the horizontal direction φk, of an equivalent pressure sensor array, i.e., an array with all of the vector sensors hypothetically replaced by pressure sensors. Thus, we have $$ \mathbf{a}\left({\varphi}_k\right)={\left[1,{e}^{-i2\pi d\cos {\varphi}_k/\lambda },\dots, {e}^{-i\left(M-1\right)2\pi d\cos {\varphi}_k/\lambda}\right]}^{\mathrm{T}} $$ where λ stands for the wavelength. Besides, let uk represent the 4-by-1 response vector of a single AVS to the kth source, which is defined as $$ {\mathbf{u}}_k={\left[1,\cos {\varphi}_k\sin {\theta}_k,\sin {\varphi}_k\sin {\theta}_k,\cos {\theta}_k\right]}^{\mathrm{T}} $$ The output of the mth sensor at time t is a 4-by-1 vector, which is expressed as $$ {\mathbf{x}}_m(t)=\sum \limits_{k=1}^K{a}_m\left({\varphi}_k\right){\mathbf{u}}_k{s}_k(t)+{\mathbf{n}}_m(t) $$ where am(φk) denotes the mth element of a(φk), and $$ {\mathbf{n}}_m(t)=\left[\begin{array}{c}{n}_p(t)\\ {}{\mathbf{n}}_v(t)\end{array}\right] $$ In Eq. (5), np(t) and nv(t) represent the noise of the acoustic pressure receiver and the particle velocity receiver respectively. Note that nv(t) is a 3-by-1 vector.
The output of the AVS array is a 4M-by-1 vector obtained by stacking the M 4-by-1 measurement vectors of the individual sensors. It can be written as $$ {\displaystyle \begin{array}{l}\mathbf{X}(t)={\left[{\mathbf{x}}_1^{\mathrm{T}}(t),\dots, {\mathbf{x}}_M^{\mathrm{T}}(t)\right]}^{\mathrm{T}}\\ {}\kern1.75em =\left[\mathbf{a}\left({\varphi}_1\right)\otimes {\mathbf{u}}_1,\dots, \mathbf{a}\left({\varphi}_K\right)\otimes {\mathbf{u}}_K\right]\mathbf{S}(t)+\mathbf{N}(t)\end{array}} $$ where $$ \mathbf{S}(t)={\left[{\mathrm{s}}_1(t),{\mathrm{s}}_2(t),\dots, {\mathrm{s}}_K(t)\right]}^{\mathrm{T}} $$ contains the K source signals, and $$ \mathbf{N}(t)={\left[{\mathbf{n}}_1^{\mathrm{T}}(t),{\mathbf{n}}_2^{\mathrm{T}}(t),\dots, {\mathbf{n}}_M^{\mathrm{T}}(t)\right]}^{\mathrm{T}} $$ Both the signal vector S(t) and the noise vector N(t) are assumed to be independent identically distributed (i.i.d.), zero-mean, complex Gaussian processes. Moreover, we assume that S(t) and N(t) are independent of each other. They can be completely characterized by their covariance matrices $$ {\mathbf{R}}_s=E\left\{\mathbf{S}(t){\mathbf{S}}^{\mathrm{H}}(t)\right\}=\operatorname{diag}\left({\sigma}_k^2\right) $$ $$ {\mathbf{R}}_n=E\left\{\mathbf{N}(t){\mathbf{N}}^{\mathrm{H}}(t)\right\}={I}_M\otimes \left[\begin{array}{cc}{\sigma}_p^2& 0\\ {}0& {\sigma}_v^2{I}_3\end{array}\right] $$ where \( {\sigma}_p^2 \) and \( {\sigma}_v^2 \) represent the variances of the noise of the acoustic pressure receiver and particle velocity receiver respectively, and IM denotes the Mth-order identity matrix. We define the steering vector of the AVS array, which is represented by ψ(φk), as the Kronecker product of a(φk) and uk. That is to say $$ \boldsymbol{\uppsi} \left({\varphi}_k\right)=\mathbf{a}\left({\varphi}_k\right)\otimes {\mathbf{u}}_k $$ Thus, Eq.
(6) can be rewritten as $$ {\displaystyle \begin{array}{l}\mathbf{X}(t)=\left[\boldsymbol{\uppsi} \left({\varphi}_1\right),\dots, \boldsymbol{\uppsi} \left({\varphi}_K\right)\right]\mathbf{S}(t)+\mathbf{N}(t)\\ {}\kern1.75em =\boldsymbol{\Psi} \mathbf{S}(t)+\mathbf{N}(t)\end{array}} $$ The covariance matrix of the output data X(t) is $$ \mathbf{R}=E\left\{\mathbf{X}(t){\mathbf{X}}^{\mathrm{H}}(t)\right\}={\boldsymbol{\Psi} \mathbf{R}}_s{\boldsymbol{\Psi}}^{\mathrm{H}}+{\mathbf{R}}_n $$ Signal cancellation of MVDR beamformer Without loss of generality, we assume that among the K source signals, one of them is the desired signal, and the others are interference. Let \( \tilde{\varphi} \) represent the desired direction, which is unknown and to be estimated. With regard to the MVDR beamforming method, the problem of solving the optimal weight vector w can be expressed as $$ \underset{\mathbf{w}}{\min }{\mathbf{w}}^{\mathrm{H}}{\mathbf{R}}_n\mathbf{w},\kern1.25em \mathrm{s}.\mathrm{t}.\kern0.5em {\mathbf{w}}^{\mathrm{H}}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)=1 $$ where \( \overline{\varphi} \) denotes the view direction, and \( \boldsymbol{\uppsi} \left(\overline{\varphi}\right) \) represents the corresponding view steering vector. Equation (14) implies that signal from the view direction \( \overline{\varphi} \) will pass the beamformer without distortion; meanwhile, signals from any other direction will be suppressed. 
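The steering vector of Eq. (11), on which both the constraint above and the spectra below depend, can be sketched numerically as follows; half-wavelength spacing and the 30° azimuth are illustrative assumptions.

```python
import numpy as np

def steering_vector(phi, theta=np.pi / 2, M=8, d_over_lambda=0.5):
    """4M-by-1 AVS array steering vector psi(phi) = a(phi) kron u, Eq. (11)."""
    m = np.arange(M)
    # Equivalent pressure-array steering vector, Eq. (2).
    a = np.exp(-1j * 2 * np.pi * d_over_lambda * m * np.cos(phi))
    # Single-AVS response vector, Eq. (3).
    u = np.array([1.0,
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(theta)])
    return np.kron(a, u)

psi = steering_vector(np.deg2rad(30.0))   # 32-element vector for M = 8
```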
With the help of the Lagrange multiplier approach, w can be solved as $$ \mathbf{w}=\frac{{\mathbf{R}}_n^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\mathbf{R}}_n^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)} $$ In practice, the noise covariance matrix Rn can hardly be estimated; therefore, we replace Rn by the estimate of the data covariance matrix, which is $$ \hat{\mathbf{R}}=\frac{1}{N}\sum \limits_{n=1}^N\mathbf{X}(n){\mathbf{X}}^{\mathrm{H}}(n) $$ where N denotes the number of snapshots. Given the weight vector w, the beam response of a beamformer is defined as $$ H\left(\varphi \right)={\mathbf{w}}^{\mathrm{H}}\boldsymbol{\uppsi} \left(\varphi \right) $$ Plug Eq. (15) into Eq. (17), and we obtain the beam amplitude response of the MVDR beamformer, which is expressed as $$ \left|H\left(\varphi \right)\right|=\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}\right| $$ Consider a desired direction-centered angular interval $$ \Phi =\left[\tilde{\varphi}-\Delta \varphi, \tilde{\varphi}+\Delta \varphi \right] $$ as the view interval. Δφ is a small angle and stands for the radius of Φ. The value of the view direction \( \overline{\varphi} \) varies within the range of Φ. If \( \overline{\varphi}\ne \tilde{\varphi} \), the MVDR beamformer would treat the desired signal as an interference signal and suppress it; thus, in the beam pattern of ∣H(φ)∣, there will exist a steep null at the desired direction. This phenomenon is the so-called signal cancellation. Conversely, if \( \overline{\varphi}=\tilde{\varphi} \), according to the constraint in Eq. (14), ∣H(φ)∣ will approximately equal one within the range of Φ.
Here, we demonstrate the signal cancellation phenomenon of the MVDR beamformer using a simple computer simulation. Assume that \( \tilde{\varphi}={30}^{\circ } \), Δφ = 5∘, and Φ = [25∘, 35∘]. Let \( \overline{\varphi} \) be 25∘, 27.5∘, 30∘, 32.5∘, and 35∘, respectively. For each value of \( \overline{\varphi} \), the beam pattern of ∣H(φ)∣ within the whole horizontal interval [−180∘, 180∘] is plotted in Fig. 2a, where the text "φview" stands for \( \overline{\varphi} \). The same beam patterns within the range of Φ are plotted in Fig. 2b. (Figure 2: Beam patterns of the beam amplitude responses with different view directions: (a) in the angular interval [−180∘, 180∘]; (b) in the angular interval [25∘, 35∘].) It is evident in Fig. 2b that when \( \overline{\varphi}=\tilde{\varphi} \), i.e., 30∘, we have $$ \mid H\left(\varphi \right)\mid \approx 1,\kern0.75em \varphi \in \Phi $$ However, when \( \overline{\varphi}\ne \tilde{\varphi} \), there are obvious nulls around 30∘ in the beam patterns. This characteristic of the MVDR beamformer can be exploited in finding the desired direction. In the next subsection, the principles of a new DOA estimation algorithm are presented.
DOA estimation In the case of \( \overline{\varphi}\ne \tilde{\varphi} \), define the minimum of the beam amplitude response ∣H(φ)∣ within Φ as \( {\overline{H}}_{\mathrm{min}} \), which is expressed as $$ {\overline{H}}_{\mathrm{min}}=\underset{\varphi \in \Phi}{\min}\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}\right|,\kern0.5em \overline{\varphi}\ne \tilde{\varphi} $$ According to the previous analysis, since there exists a null within Φ, we have $$ {\overline{H}}_{\mathrm{min}}\approx 0 $$ If \( \overline{\varphi}=\tilde{\varphi} \), define the minimum of ∣H(φ)∣ within the interval Φ as \( {\tilde{H}}_{\mathrm{min}} \), which is expressed as $$ {\tilde{H}}_{\mathrm{min}}=\underset{\varphi \in \Phi}{\min}\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\tilde{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\tilde{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\tilde{\varphi}\right)}\right| $$ According to the previous analysis, we have $$ {\tilde{H}}_{\mathrm{min}}\approx 1 $$ It can be concluded from Eqs. (22) and (24) that $$ {\tilde{H}}_{\mathrm{min}}\gg {\overline{H}}_{\mathrm{min}} $$ Equation (25) indicates that within Φ, if and only if \( \overline{\varphi}=\tilde{\varphi} \), the minimum of the amplitude response reaches the maximum.
Since ∣H(φ)∣ is determined by the view steering vector, i.e., \( \boldsymbol{\uppsi} \left(\overline{\varphi}\right) \), the above necessary and sufficient condition is equivalent to the statement that the view steering vector matches the desired steering vector: $$ \boldsymbol{\uppsi} \left(\overline{\varphi}\right)=\boldsymbol{\uppsi} \left(\tilde{\varphi}\right) $$ We can formulate the following worst-case performance optimization problem: $$ \underset{\overline{\varphi}}{\max}\underset{\varphi \in \Phi}{\min}\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}\right|,\kern1.25em \mathrm{s}.\mathrm{t}.\overline{\varphi}\in \Phi $$ In Eq. (27), once the maximum is found, the desired direction is found as well. We name this method the matched steering vector searching (MSVS) based DOA estimation algorithm. Equation (27) can be extended to problems involving multiple desired sources. Assume that there are J desired sources among all the K source signals. For the jth source signal, the desired DOA is \( {\tilde{\varphi}}_j \), and the view interval is \( {\Phi}_j=\left[{\tilde{\varphi}}_j-\Delta \varphi, {\tilde{\varphi}}_j+\Delta \varphi \right] \), where j = 1, 2, …, J.
Therefore, the DOA estimation problem for the jth desired signal can be described as $$ \underset{\overline{\varphi}}{\max}\underset{\varphi \in {\Phi}_j}{\min}\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}\right|,\kern1.25em \mathrm{s}.\mathrm{t}.\overline{\varphi}\in {\Phi}_j $$ Furthermore, the maximum finding problem in Equation (28) can be regarded as a spectrum peak searching problem. We can define the pseudo-spatial power spectrum as $$ {P}_{\mathrm{MSVS}}\left(\overline{\varphi}\right)=\underset{\varphi \in {\Phi}_j}{\min}\left|\frac{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\overline{\varphi}\right)}\right|,\kern1.25em \overline{\varphi}\in {\Phi}_j $$ Then, the angles corresponding to the peaks of the spectra are the estimates of the desired directions. Algorithm implementation In practice, to make the view intervals certain, first of all, we shall get the rough estimates of the desired directions using traditional algorithms such as MUSIC or MVDR. After that, we can establish the view intervals based on the rough estimates. For the jth view interval Φj, we sample it uniformly at L points, and each sample point represents a view direction. The larger L is, the higher both the computational load and the estimation accuracy are. Then, calculate the pseudo-spatial power spectrum according to Eq. (29), and search for the peak to acquire the accurate estimate of the desired direction. The steps of the MSVS algorithm are listed as follows.
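A minimal numerical sketch of these steps is given below. For determinism, it uses the exact covariance matrix of a single unit-power source at 30° in white noise in place of the sample estimate of Eq. (16); the noise level and the 0.1° grid step are illustrative assumptions.

```python
import numpy as np

M = 8

def steer(phi_deg):
    """AVS steering vector for elevation 90 deg and half-wavelength spacing."""
    phi = np.deg2rad(phi_deg)
    a = np.exp(-1j * np.pi * np.arange(M) * np.cos(phi))
    u = np.array([1.0, np.cos(phi), np.sin(phi), 0.0])
    return np.kron(a, u)

# Exact covariance of a unit-power source at 30 deg in white noise
# (the actual algorithm would use the sample estimate R-hat of Eq. (16)).
p = steer(30.0)
R = np.outer(p, p.conj()) + 1e-3 * np.eye(4 * M)
R_inv = np.linalg.inv(R)

def msvs_spectrum(grid):
    """Pseudo-spatial spectrum of Eq. (29): for each view direction on the
    grid, the minimum MVDR beam amplitude response over the view interval."""
    probes = np.column_stack([steer(g) for g in grid])
    spec = []
    for view in grid:
        v = steer(view)
        denom = np.abs(v.conj() @ R_inv @ v)
        spec.append(np.min(np.abs(v.conj() @ R_inv @ probes)) / denom)
    return np.array(spec)

grid = np.linspace(25.0, 35.0, 101)        # view interval, 0.1 deg step
spectrum = msvs_spectrum(grid)
estimate = grid[int(np.argmax(spectrum))]  # peak of the pseudo-spectrum
```

Only the matched view direction keeps the whole interval near unit response; every mismatched view is dragged down by a null somewhere in the interval, so the peak lands on the desired DOA.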
Relation between MSVS and MVDR Algorithm Given the weight vector w(φ) of a beamformer and the covariance matrix of the output data R, the output power of the beamformer is $$ P\left(\varphi \right)={\mathbf{w}}^{\mathrm{H}}\left(\varphi \right)\mathbf{Rw}\left(\varphi \right) $$ Plug Eq. (15) into Eq. (30), and we can obtain the beam scanning spatial spectrum of the MVDR beamformer: $$ {P}_{\mathrm{MVDR}}\left(\varphi \right)=\frac{1}{{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\varphi \right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)} $$ In Eq. (31), R has been replaced by its estimate \( \hat{\mathbf{R}} \), which is defined by Eq. (16). Plug Eq. (31) into Eq. (29), and the pseudo-spatial spectrum of the MSVS algorithm can be restated as $$ {P}_{\mathrm{MSVS}}\left(\overline{\varphi}\right)={P}_{\mathrm{MVDR}}\left(\overline{\varphi}\right)\cdot \underset{\varphi \in {\Phi}_j}{\min}\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)\right|,\overline{\varphi}\in {\Phi}_j $$ Define a window function as $$ {W}_j\left(\overline{\varphi}\right)=\underset{\varphi \in {\Phi}_j}{\min}\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left(\varphi \right)\right|,\kern1.25em \overline{\varphi}\in {\Phi}_j $$ Then, Eq. (32) can be rewritten as $$ {P}_{\mathrm{MSVS}}\left(\overline{\varphi}\right)={P}_{\mathrm{MVDR}}\left(\overline{\varphi}\right)\cdot {W}_j\left(\overline{\varphi}\right),\kern1.25em \overline{\varphi}\in {\Phi}_j $$ Equation (34) indicates that the MSVS pseudo-spatial spectrum can be seen as a windowed MVDR spatial spectrum. In particular, if \( {W}_j\left(\overline{\varphi}\right)\equiv 1 \), the MSVS algorithm reduces to the MVDR algorithm.
In order to further analyze the performance of the MSVS algorithm, we shall investigate the characteristics of the window function \( {W}_j\left(\overline{\varphi}\right) \). For the jth desired signal, if \( \overline{\varphi}\ne {\tilde{\varphi}}_j \), according to the preceding analysis, the amplitude response will have a null in the direction of \( {\tilde{\varphi}}_j \). Thus, in this case, $$ {W}_j\left(\overline{\varphi}\right)=\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right)\right|,\kern0.5em \overline{\varphi}\in {\Phi}_j,\kern0.5em \overline{\varphi}\ne {\tilde{\varphi}}_j $$ If \( \overline{\varphi}={\tilde{\varphi}}_j \), the main lobe of the amplitude response will lie in the view interval Φj. In addition, as Φj is a relatively narrow interval, the amplitude response can be regarded as approximately constant within the range of Φj. Hence, the window function can be approximately expressed as $$ {W}_j\left(\overline{\varphi}\right)\approx \left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left({\tilde{\varphi}}_j\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right)\right|,\kern0.5em \overline{\varphi}={\tilde{\varphi}}_j $$ By combining Eqs. (35) and (36), Eq. (33) can be rewritten as $$ {W}_j\left(\overline{\varphi}\right)=\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right)\right|,\kern0.5em \overline{\varphi}\in {\Phi}_j $$ Equation (37) implies that \( {W}_j\left(\overline{\varphi}\right) \) can be seen as the modulus of the weighted inner product of the view steering vector \( \boldsymbol{\uppsi} \left(\overline{\varphi}\right) \) and the desired steering vector \( \boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right) \). Here, we present Theorem 1, the proof of which is postponed to the Appendix.
Theorem 1 Assume that N denotes the number of snapshots, M denotes the number of sensors, and N ≫ M. \( {W}_j\left(\overline{\varphi}\right)=\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\hat{\mathbf{R}}}^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right)\right|,\overline{\varphi}\in {\Phi}_j \). Then, if and only if \( \overline{\varphi}={\tilde{\varphi}}_j \), the window function \( {W}_j\left(\overline{\varphi}\right) \) reaches the maximum. Therefore, the window function \( {W}_j\left(\overline{\varphi}\right) \) always reaches its maximum in the desired direction. Since the MSVS pseudo-spatial spectrum is a windowed MVDR spatial spectrum, the peak of the MSVS pseudo-spatial spectrum is sharper, and the MSVS algorithm achieves higher estimation accuracy. In the next section, we will validate the advantages of the MSVS approach by simulation experiments. Here, we state some common assumptions. The array is an 8-element uniform linear AVS array. The element spacing d is half a wavelength. There are two source signals impinging on the AVS array, and their azimuths are 30∘ and 60∘ respectively. We treat the former signal as the desired signal and the latter as interference. Both signals have equal power. We set the view interval as [25∘, 35∘]. Since we consider only azimuth estimation, to simplify the problem, assume that for all of the sources the elevations are 90∘ and known a priori, so that the array and the sources lie in the same horizontal plane. The angular searching step is 0.1∘.
Cramer-Rao bound In the case of a single source, the Cramer-Rao bound (CRB) on the DOA parameters with an AVS array is given in [5]: $$ \mathrm{CRB}\left(\varphi, \theta \right)=\frac{1}{2N}\frac{1}{M{\beta \beta}_I}\left(1+\frac{1}{M{\beta \beta}_I}\right){\left(\boldsymbol{\Gamma} +\boldsymbol{\Pi} \right)}^{-1} $$ where \( \beta ={\sigma}_s^2/{\sigma}_p^2 \) is the SNR at each pressure receiver, βI = (1 + 1/η) is the effective increase in SNR, and \( \eta ={\sigma}_v^2/{\sigma}_p^2 \) is the ratio of noise powers for the particle velocity receiver and the pressure receiver. If all the noise is internal receiver noise, then η is a direct reflection of the relative noise floors of the two types of receiver, and the technology is available to make them approximately equal [32], i.e., η = 1. If ambient noise is present, then η < 1 since the particle velocity receivers filter out some of the unwanted noise; for example, η = 1/3 for spherically isotropic noise [33]. In order to simulate the underwater environment, we assume η = 1/3 consistently in the following simulations. When the origin of the coordinate system is the array centroid, Γ and Π in Eq. (38) are given by $$ \boldsymbol{\Gamma} =\frac{4{\pi}^2}{M}\left[\begin{array}{cc}{\sin}^2\theta \sum \limits_m{\left({\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\varphi}\right)}^2& \sin \theta \sum \limits_m{\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\varphi }{\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\theta}\\ {}\sin \theta \sum \limits_m{\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\varphi }{\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\theta }& \sum \limits_m{\left({\mathbf{r}}_m^{\mathrm{H}}{\mathbf{v}}_{\theta}\right)}^2\end{array}\right] $$ $$ \boldsymbol{\Pi} =\frac{1}{1+\eta}\left[\begin{array}{cc}{\sin}^2\theta & 0\\ {}0& 1\end{array}\right] $$ where rm is the position vector of the mth sensor, in units of wavelength.
Assuming that the sensors are placed along the x-axis and the array centroid is at the origin of the coordinate system, we have $$ {\displaystyle \begin{array}{l}{\mathbf{r}}_1={\left(-\frac{7}{4},0,0\right)}^{\mathrm{T}},\kern0.75em {\mathbf{r}}_2={\left(-\frac{5}{4},0,0\right)}^{\mathrm{T}}\\ {}{\mathbf{r}}_3={\left(-\frac{3}{4},0,0\right)}^{\mathrm{T}},\kern0.75em {\mathbf{r}}_4={\left(-\frac{1}{4},0,0\right)}^{\mathrm{T}}\\ {}{\mathbf{r}}_5={\left(\frac{1}{4},0,0\right)}^{\mathrm{T}},\kern1.25em {\mathbf{r}}_6={\left(\frac{3}{4},0,0\right)}^{\mathrm{T}}\\ {}{\mathbf{r}}_7={\left(\frac{5}{4},0,0\right)}^{\mathrm{T}},\kern1.25em {\mathbf{r}}_8={\left(\frac{7}{4},0,0\right)}^{\mathrm{T}}\end{array}} $$ In addition, in Eq. (39), $$ {\mathbf{v}}_{\varphi }=\left(\partial \mathbf{h}/\partial \varphi \right)/\sin \theta $$ $$ {\mathbf{v}}_{\theta }=\partial \mathbf{h}/\partial \theta $$ where h denotes the direction vector of the source. $$ \mathbf{h}={\left[\cos \varphi \sin \theta, \sin \varphi \sin \theta, \cos \theta \right]}^{\mathrm{T}} $$ Combining Eqs. (39)–(44) under the assumptions of M = 8 and θ = 90∘, Γ and Π can be calculated as $$ \boldsymbol{\Gamma} =\frac{\pi^2}{2}\left[\begin{array}{cc}\frac{21}{2}{\sin}^2\varphi & 0\\ {}0& 0\end{array}\right] $$ $$ \boldsymbol{\Pi} =\frac{1}{1+\eta}\left[\begin{array}{cc}1& 0\\ {}0& 1\end{array}\right] $$ Plugging Eqs. (45) and (46) into Eq. (38), we obtain the CRB on azimuth estimation in this context: $$ \mathrm{CRB}\left(\varphi \right)=\frac{1}{2N}\frac{1}{8{\beta \beta}_I}\left(1+\frac{1}{8{\beta \beta}_I}\right){\left(\frac{21{\pi}^2{\sin}^2\varphi }{4}+\frac{1}{1+\eta}\right)}^{-1} $$ Simulation experiments Firstly, we compare the spatial spectra of the proposed MSVS algorithm and some conventional DOA estimation approaches, including MVDR, PM, and MUSIC. Figure 3 displays the spatial spectra with SNR = 15 dB and N = 200. In Fig.
3, we can find that for all of the four algorithms, there are clear spectrum peaks around the desired direction, and among them, the proposed one has the sharpest spectrum peak. Spatial spectra comparison, SNR = 15 dB, N = 200 The spatial spectra under deteriorated conditions, i.e., SNR = − 15 dB and N = 50, are presented in Fig. 4, from which we can find that the spectrum peak of the PM algorithm deviates seriously from the desired DOA. Besides, the spatial spectra of MVDR and MUSIC are nearly flat. Unlike these methods, the spatial spectrum of the MSVS algorithm still displays a quite clear peak around 30∘. The 3 dB bandwidth of the MSVS algorithm is also much narrower than that of the others. This simulation experiment illustrates that the proposed algorithm works effectively even with low SNR and short snapshots. This is due to its sensitivity to the degree of matching between the steering vectors. Specifically speaking, when \( \overline{\varphi} \) deviates from \( \tilde{\varphi} \), the view steering vector mismatches the desired steering vector, and the MSVS spectrum corresponds to the null of the amplitude response within the view interval, which is a very small value. However, when \( \overline{\varphi} \) equals \( \tilde{\varphi} \), the steering vectors are matched. In this case, the amplitude response within the view interval remains approximately equal to a large value, so the MSVS spectrum forms a sharp peak in the desired direction. Spatial spectra comparison, SNR = − 15 dB, N = 50 Next, we use 100 Monte Carlo trials to assess the DOA estimation performance of the abovementioned algorithms. In addition, ESPRIT based on the AVS array is included in the comparison. Define the root mean square error (RMSE) as $$ \mathrm{RMSE}=\sqrt{\frac{1}{100}\sum \limits_{m=1}^{100}{\left({\overset{\frown }{\varphi}}_m-\tilde{\varphi}\right)}^2} $$ where \( {\overset{\frown }{\varphi}}_m \) denotes the estimate of the desired DOA in the mth Monte Carlo trial.
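The RMSE of Eq. (48) can be sketched as follows; the synthetic estimates standing in for Monte Carlo outputs are an assumption for illustration.

```python
import numpy as np

def rmse(estimates_deg, true_deg):
    # Eq. (48): root mean square error over Monte Carlo trials
    e = np.asarray(estimates_deg, dtype=float)
    return np.sqrt(np.mean((e - true_deg) ** 2))

rng = np.random.default_rng(1)
est = 30.0 + 0.2 * rng.standard_normal(100)   # 100 synthetic trial estimates
print(rmse(est, 30.0))                        # close to the 0.2-deg error level
```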
Figure 5 shows the DOA estimation performance comparison of the proposed algorithm, the ESPRIT, MVDR, PM, and MUSIC approaches, and the CRB under different SNRs, with the number of snapshots N equal to 100. Figure 6 depicts the same comparison with different N, with the SNR fixed at − 25 dB. RMSE versus SNR, N = 100 RMSE versus number of snapshots, SNR = − 25 dB Figures 5 and 6 illustrate that the performances of all the algorithms degrade as the SNR gets lower or N gets smaller. However, it is clearly indicated in both figures that the MSVS algorithm performs better than the others under every simulation condition. It can be seen in Fig. 5 that even when the SNR is as low as − 30 dB, the RMSE of the proposed algorithm is less than 1∘. The other algorithms cannot achieve such a performance unless the SNR increases to at least about − 5 dB. Figure 6 gives similar results. In the next two experiments, we investigate the anti-interference capability of the MSVS algorithm. In the previous simulations, we assume that the power of the interference signal equals the power of the desired signal, i.e., the interference-to-signal ratio (ISR) is 0 dB. Now, we increase the ISR to 10 dB, 20 dB, and 30 dB successively, with the other simulation conditions remaining unchanged. The results are displayed in Fig. 7. RMSE with different ISR, N = 100 Then, we increase the number of the interference signals. In Fig. 8, "1 interference" corresponds to the initial assumptions. "2 interferences" means that we add a source signal from the azimuth of 120∘; "3 interferences" adds a further source signal from −30∘, and finally "4 interferences" adds another source signal from −120∘. All ISRs are kept at 0 dB. RMSE with different interference number, N = 100 Figures 7 and 8 illustrate that the performance of the MSVS algorithm is basically independent of the number and intensity of the interference. This phenomenon is easy to explain.
From the analysis of Section 3, we can see that the MSVS algorithm is essentially an MVDR-based method. Therefore, the MSVS algorithm inherits the MVDR beamformer's strong ability to suppress interference. A new DOA estimation algorithm based on matched steering vector searching has been presented in this paper. The paper has described the measurement model of an AVS array. After studying the signal cancellation of the MVDR beamformer, we presented our algorithm, introducing its principles and implementation steps. We have also investigated the relation between the proposed algorithm and the MVDR method. Then, we conducted simulation experiments. It is verified that, compared with the conventional DOA estimation algorithms, the proposed algorithm has the sharpest spectrum peak and obtains the best estimation accuracy, especially under conditions of low SNR and short snapshots. What is more, the proposed algorithm has a strong anti-interference capability: the power or number of the interference signals can hardly affect its performance. In the future, we shall research joint azimuth and elevation angle estimation using the proposed algorithm. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. AVS: Acoustic vector sensor DOA: Direction-of-arrival ESPRIT: Estimation of signal parameters via rotational invariance technique i.i.d.: Independent identically distributed MSVS: Matched steering vector searching MUSIC: Multiple signal classification MVDR: Minimum variance distortionless response PM: Propagator method RMSE: Root mean square error SNR: Signal-to-noise ratio Y. Ao, K. Xu, J.W. Wan, Research on source of phase difference between channels of the vector hydrophone. Proc. IEEE. ICSP., Chengdu, China, 1671–1676 (2016) P. Felisberto, P. Santos, S.M. Jesus, Tracking source azimuth using a single vector sensor. Proc.
Fourth IEEE International Conference on Sensor Technologies and Application., Venice, Italy, 416–421 (2010) A. Zhao, X. Bi, J. Hui, C. Zeng, L. Ma, A three-dimensional target depth-resolution method with a single-vector sensor. Sensors 18(4), 1182 (2018) A. Nehorai, E. Paldi, Acoustic vector-sensor array processing. IEEE Trans. Signal Process. 42(9), 2481–2491 (1994) M. Hawkes, A. Nehorai, Acoustic vector-sensor beamforming and Capon direction estimation. IEEE Trans. Signal Process. 46(9), 2291–2304 (1998) K.T. Wong, M.D. Zoltowski, Closed-form underwater acoustic direction-finding with arbitrarily spaced vector hydrophones at unknown locations. IEEE J. Ocean. Eng. 22(3), 566–575 (1997) K.T. Wong, M.D. Zoltowski, Root-MUSIC-based azimuth-elevation angle-of-arrival Estimation with uniformly spaced but arbitrarily oriented velocity hydrophones. IEEE Trans. Signal Process. 47(12), 3250–3260 (1999) K.T. Wong, M.D. Zoltowski, Self-initiating MUSIC-based direction finding in underwater acoustic particle velocity-field beamspace. IEEE J. Ocean. Eng. 25(2), 262–273 (2000) M. Hawkes, A. Nehorai, Wideband source localization using a distributed acoustic vector-sensor array. IEEE Trans. Signal Process. 57(6), 1479–1491 (2003) H. Chen, J. Zhao, Wideband MVDR beamforming for acoustic vector sensor linear array. IEE Proc. Radar Sonar Navig. 151(3), 158–162 (2004) J. He, Z. Liu, Two-dimensional direction finding of acoustic sources by a vector sensor array using the propagator method. Signal Process. 88(10), 2492–2499 (2008) Z. Liu, X. Ruan, J. He, Efficient 2-D DOA estimation for coherent sources with a sparse acoustic vector-sensor array. Multidimens. Syst. Signal Process. 24(1), 105–120 (2013) K. Han, A. Nehorai, Nested vector-sensor array processing via tensor modeling. IEEE Trans. Signal Process. 62(10), 2542–2553 (2014) X. Li, H. Sun, L. Jiang, Y. Shi, Y. Wu, Modified particle filtering algorithm for single acoustic vector sensor DOA tracking. 
Sensors 15, 26198–26211 (2015) W. Si, P. Zhao, Z. Qu, Two-dimensional DOA and polarization estimation for a mixture of uncorrelated and coherent sources with sparsely-distributed vector sensor array. Sensors 16, 789 (2016) X. Zhang, M. Zhou, J. Li, A PARALIND decomposition-based coherent two-dimensional direction of arrival estimation algorithm for acoustic vector-sensor arrays. Sensors 13, 5302–5316 (2013) J. Li, Q. Lin, C. Kang, K. Wang, X. Yang, DOA Estimation for underwater wideband weak targets based on coherent signal subspace and compressed sensing. Sensors 18, 902 (2018) J. Li, Z. Li, X. Zhang, Partial angular sparse representation based DOA estimation using sparse separate nested acoustic vector sensor array. Sensors 18, 4465 (2018) K. Ichige, K. Saito, H. Arai, High resolution DOA estimation using unwrapped phase information of MUSIC-based noise subspace. IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences 91, 1990–1999 (2008) H. Changuel, A. Changuel, A. Gharsallah, A new method for estimating the direction-of-arrival waves by an iterative subspace-based method. Applied Computational Electromagnetics Society Journal 25(5), 476–485 (2010) Q. Zhao, W.J. Liang, in Advances in Computer Science, Intelligent System and Environment. A modified MUSIC algorithm based on eigenspace, vol 104 (Springer Berlin Heidelberg, 2011), pp. 271–276 E.A. Santiago, M. Saquib, Noise subspace-based iterative technique for direction finding. IEEE Transactions on Aerospace and Electronic Systems 49(4), 2281–2295 (2013) N. Hu, Z. Ye, D. Xu, A sparse recovery algorithm for DOA estimation using weighted subspace fitting. Signal Processing 92(10), 2566–2570 (2012) Q. Xie, Y. Wang, T. Li, Application of signal sparse decomposition in the detection of partial discharge by ultrasonic array method. IEEE Transactions on Dielectrics and Electrical Insulation 22(4), 2031–2040 (2015) X. Yang, G. Li, Z. 
Zheng, DOA estimation of noncircular signal based on sparse representation. Wireless Personal Communications 82(4), 2363–2375 (2015) K.B. Cui, W.W. Wu, J.J. Huang, X. Chen, N.C. Yuan, DOA estimation of LFM signals based on STFT and multiple invariance ESPRIT. AEU-Int. J. Electron. Commun. 77, 10–17 (2017) B. Widrow, K.M. Duvall, R.P. Gooch, W.C. Newman, Signal cancellation phenomena in adaptive antennas: causes and cures. IEEE Transactions on Antennas and Propagation 30(3), 469–478 (1982) S.A. Vorobyov, Principles of minimum variance robust adaptive beamforming design. Signal Processing 93(12), 3264–3277 (2013) J. Li, P. Stoica, Z. Wang, Doubly constrained robust Capon beamformer. IEEE Transactions on Signal Processing 52(9), 2407–2423 (2004) A. Khabbazibasmenj, S.A. Vorobyov, Robust adaptive beamforming for general-rank signal model with positive semi-definite constraint via POTDC. IEEE Transactions on Signal Processing 61(23), 6103–6117 (2013) K.G. Nagananda, G.V. Anand, Subspace intersection method of high-resolution bearing estimation in shallow ocean using acoustic vector sensors. Signal Processing 90(1), 105–118 (2010) M.J. Berliner, J.F. Lindberg, O.B. Wilson, Acoustic particle velocity sensors: design, performance and applications. J. Acoust. Soc. America. 100(6), 3478–3479 (1996) G.Q. Sun, D.S. Yang, S.G. Shi, Spatial correlation coefficients of acoustic pressure and particle velocity based on vector hydrophone. Acta. Acustica 28(6), 509–513 (2003) The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. The authors are grateful to the National Science Foundation of China and the National University of Defense Technology for their support of this research. This research was funded in part by the National Natural Science Foundation of China under grant 61601209 and the Fundamental Research Project of the National University of Defense Technology, under grant ZDYYJCYJ20140701.
College of Electronic Science and Technology, National University of Defense Technology, Changsha, China Yu Ao, Ling Wang, Jianwei Wan & Ke Xu The algorithms proposed in this paper were conceived by YA, LW, and JWW. YA and LW designed the experiments. YA and KX performed the experiments and analyzed the results. YA is the main writer of this paper. All authors read and approved the final manuscript. Correspondence to Yu Ao. The authors declare that they have no competing interests. All authors have seen the manuscript and approved its submission to the journal. We confirm that the content of the manuscript has not been published or submitted for publication elsewhere. Proof of Theorem 1 If the number of snapshots N is large enough, \( \hat{\mathbf{R}} \) approximately equals R. Thus, Eq. (37) can be rewritten as $$ {W}_j\left(\overline{\varphi}\right)=\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right){\mathbf{R}}^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_j\right)\right|,\kern0.5em \overline{\varphi}\in {\Phi}_j $$ Firstly, for simplicity, assume that there exists only one source signal, and that the desired direction is \( {\tilde{\varphi}}_1 \).
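The derivation that follows leans on two standard identities: the rank-one Woodbury (Sherman-Morrison) inversion formula used for Eqs. (51) and (62), and the Kronecker mixed-product property behind Eqs. (55)-(58). The sketch below checks both numerically; the test values, and the ULA/AVS steering forms assumed for Eqs. (2) and (3), are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1) Rank-one Woodbury (Sherman-Morrison) update, the form used in Eq. (51):
#    (Rn + s2 psi psi^H)^-1
#      = Rn^-1 - s2 Rn^-1 psi psi^H Rn^-1 / (1 + s2 psi^H Rn^-1 psi)
n, s2 = 6, 0.7
Rn = np.diag(rng.uniform(0.5, 2.0, n))                # diagonal noise covariance
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
R1 = s2 * np.outer(psi, psi.conj()) + Rn
Rn_inv = np.linalg.inv(Rn)
quad = (psi.conj() @ Rn_inv @ psi).real
R1_inv = Rn_inv - s2 * (Rn_inv @ np.outer(psi, psi.conj()) @ Rn_inv) / (1 + s2 * quad)
print(np.allclose(R1_inv, np.linalg.inv(R1)))         # True

# 2) Mixed-product property behind Eqs. (55)-(58):
#    (a kron u)^H (I kron Ru) (a kron u) = (a^H a)(u^H Ru u) = M (1/sp2 + 1/sv2)
M, sp2, sv2 = 8, 1.0, 3.0
phi = np.deg2rad(30.0)
a = np.exp(1j * np.pi * np.arange(M) * np.cos(phi))   # assumed ULA phase vector
u = np.array([1.0, np.cos(phi), np.sin(phi), 0.0])    # assumed in-plane AVS response
Ru = np.diag([1 / sp2, 1 / sv2, 1 / sv2, 1 / sv2])
lhs = (np.kron(a, u).conj() @ np.kron(np.eye(M), Ru) @ np.kron(a, u)).real
print(lhs, M * (1 / sp2 + 1 / sv2))                   # both equal 32/3
```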
In this case, the covariance matrix of the output data R1 is expressed as $$ {\mathbf{R}}_1={\sigma}_1^2\boldsymbol{\uppsi} \left({\tilde{\varphi}}_1\right){\boldsymbol{\uppsi}}^{\mathrm{H}}\left({\tilde{\varphi}}_1\right)+{\mathbf{R}}_n $$ According to Woodbury's inversion formula, the inverse of R1 can be expressed as $$ {\mathbf{R}}_1^{-1}={\mathbf{R}}_n^{-1}-\frac{\sigma_1^2{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}}{1+{\sigma}_1^2{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1} $$ where \( {\tilde{\boldsymbol{\uppsi}}}_1 \) is abbreviated for \( \boldsymbol{\uppsi} \left({\tilde{\varphi}}_1\right) \), and \( {\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}} \) is abbreviated for \( {\boldsymbol{\uppsi}}^{\mathrm{H}}\left({\tilde{\varphi}}_1\right) \). Plug Eq. (51) into Eq. (49), and we have $$ {\displaystyle \begin{array}{l}{W}_1\left(\overline{\varphi}\right)=\left|{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1-{\sigma}_1^2\frac{{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1}{1+{\sigma}_1^2{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1}\right|\\ {}\kern2.25em =\left|{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1\left(1-{\sigma}_1^2\frac{{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1}{1+{\sigma}_1^2{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1}\right)\right|\end{array}} $$ where \( {\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}} \) is abbreviated for \( {\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right) \). Rn is defined by Eq. 
(10); thus, its inverse is easy to obtain: $$ {\mathbf{R}}_n^{-1}={I}_M\otimes \left[\begin{array}{cc}\frac{1}{\sigma_p^2}& 0\\ {}0& \frac{1}{\sigma_v^2}{I}_3\end{array}\right] $$ We abbreviate \( {\mathbf{R}}_n^{-1} \) as $$ {\mathbf{R}}_n^{-1}={I}_M\otimes {\mathbf{R}}_u $$ According to Eqs. (11) and (54), we have $$ {\displaystyle \begin{array}{l}{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1={\left({\tilde{\mathbf{a}}}_1\otimes {\tilde{\mathbf{u}}}_1\right)}^{\mathrm{H}}\left({I}_M\otimes {\mathbf{R}}_u\right)\left({\tilde{\mathbf{a}}}_1\otimes {\tilde{\mathbf{u}}}_1\right)\\ {}\kern3.5em =\left[{\tilde{\mathbf{a}}}_1^{\mathrm{H}}\otimes \left({\tilde{\mathbf{u}}}_1^{\mathrm{H}}{\mathbf{R}}_u\right)\right]\left({\tilde{\mathbf{a}}}_1\otimes {\tilde{\mathbf{u}}}_1\right)\\ {}\kern3.5em =\left({\tilde{\mathbf{a}}}_1^{\mathrm{H}}{\tilde{\mathbf{a}}}_1\right)\otimes \left({\tilde{\mathbf{u}}}_1^{\mathrm{H}}{\mathbf{R}}_u{\tilde{\mathbf{u}}}_1\right)\end{array}} $$ According to Eqs. (2), (3), and (53), we have $$ {\tilde{\mathbf{a}}}_1^{\mathrm{H}}{\tilde{\mathbf{a}}}_1=M $$ $$ {\tilde{\mathbf{u}}}_1^{\mathrm{H}}{\mathbf{R}}_u{\tilde{\mathbf{u}}}_1=\frac{1}{\sigma_p^2}+\frac{1}{\sigma_v^2} $$ $$ {\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1=M\left(\frac{1}{\sigma_p^2}+\frac{1}{\sigma_v^2}\right) $$ In Eq. (52), we represent the constant factor by a capitalized C: $$ C=1-{\sigma}_1^2\frac{{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1}{1+{\sigma}_1^2{\tilde{\boldsymbol{\uppsi}}}_1^{\mathrm{H}}{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1} $$ and then Eq. 
(52) can be rewritten as $$ {W}_1\left(\overline{\varphi}\right)=\left|{\boldsymbol{\uppsi}}^{\mathrm{H}}\left(\overline{\varphi}\right)C{\mathbf{R}}_n^{-1}\boldsymbol{\uppsi} \left({\tilde{\varphi}}_1\right)\right| $$ Since \( C{\mathbf{R}}_n^{-1} \) is a diagonal matrix with all of the non-zero elements being constant, \( {W}_1\left(\overline{\varphi}\right) \) reaches the maximum if and only if \( \boldsymbol{\uppsi} \left(\overline{\varphi}\right) \) matches \( \boldsymbol{\uppsi} \left({\tilde{\varphi}}_1\right) \), i.e., \( \overline{\varphi} \) equals \( {\tilde{\varphi}}_1 \). Secondly, assume that there exist two source signals. One is the desired signal, with the azimuth angle \( {\tilde{\varphi}}_1 \), and the other is an interference signal with the azimuth angle φ2. In this case, the covariance matrix of the output data R2 is expressed as $$ {\mathbf{R}}_2={\mathbf{R}}_1+{\sigma}_2^2{\boldsymbol{\uppsi}}_2{\boldsymbol{\uppsi}}_2^{\mathrm{H}} $$ By using Woodbury's inversion formula, the inverse of R2 can be expressed as $$ {\mathbf{R}}_2^{-1}={\mathbf{R}}_1^{-1}-\frac{\sigma_2^2{\mathbf{R}}_1^{-1}{\boldsymbol{\uppsi}}_2{\boldsymbol{\uppsi}}_2^{\mathrm{H}}{\mathbf{R}}_1^{-1}}{1+{\sigma}_2^2{\boldsymbol{\uppsi}}_2^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\boldsymbol{\uppsi}}_2} $$ Plug Eq. (62) into Eq. 
(49), and we have $$ {W}_1\left(\overline{\varphi}\right)=\left|{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\tilde{\boldsymbol{\uppsi}}}_1-{\sigma}_2^2\frac{{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\boldsymbol{\uppsi}}_2}{1+{\sigma}_2^2{\boldsymbol{\uppsi}}_2^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\boldsymbol{\uppsi}}_2}{\boldsymbol{\uppsi}}_2^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\tilde{\boldsymbol{\uppsi}}}_1\right| $$ $$ {\boldsymbol{\uppsi}}_2^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\tilde{\boldsymbol{\uppsi}}}_1={\boldsymbol{\uppsi}}_2^{\mathrm{H}}C{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1 $$ Here, we assume that \( {\tilde{\varphi}}_1 \) and \( {\varphi}_2 \) are far apart, so that the weighted inner product of the corresponding steering vectors is a very small value, that is $$ {\boldsymbol{\uppsi}}_2^{\mathrm{H}}C{\mathbf{R}}_n^{-1}{\tilde{\boldsymbol{\uppsi}}}_1\approx 0 $$ $$ {W}_1\left(\overline{\varphi}\right)\approx \left|{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\tilde{\boldsymbol{\uppsi}}}_1\right| $$ Therefore, \( {W}_1\left(\overline{\varphi}\right) \) still reaches the maximum if and only if \( \overline{\varphi} \) equals \( {\tilde{\varphi}}_1 \). Similarly, we can deduce that if there are K source signals, and the interference signals are far apart from the desired signal in direction, the window function can always be expressed as \( \left|{\overline{\boldsymbol{\uppsi}}}^{\mathrm{H}}{\mathbf{R}}_1^{-1}{\tilde{\boldsymbol{\uppsi}}}_1\right| \). This completes the proof of Theorem 1. Ao, Y., Wang, L., Wan, J. et al. Matched steering vector searching based direction-of-arrival estimation using acoustic vector sensor array. J Wireless Com Network 2019, 214 (2019) doi:10.1186/s13638-019-1536-8 Accepted: 07 August 2019 Acoustic vector sensor (AVS) array Direction-of-arrival (DOA) estimation Minimum variance distortionless response (MVDR) Pseudo-spatial spectrum Steering vector
Associations between malaria and local and global climate variability in five regions in Papua New Guinea Chisato Imai1,2, Hae-Kwan Cheong3, Ho Kim4, Yasushi Honda5, Jin-Hee Eum3, Clara T. Kim4, Jin Seob Kim3, Yoonhee Kim2, Swadhin K. Behera6, Mohd Nasir Hassan7, Joshua Nealon7, Hyenmi Chung7,8 & Masahiro Hashizume2 Malaria is a significant public health issue in Papua New Guinea (PNG), as the burden is among the highest in Asia and the Pacific region. Though PNG's vulnerability to climate change and the sensitivity of malaria mosquitoes to weather are well-documented, few in-depth epidemiological studies have been conducted on the potential impacts of climate on malaria incidence in the country. This study explored which local weather and global climate factors impact malaria incidence, and how, in five regions of PNG. Time series methods were applied to evaluate the associations of malaria incidence with weather and climate factors, respectively. Local weather factors including precipitation and temperature and global climate phenomena such as El Niño-Southern Oscillation (ENSO), the ENSO Modoki, the Southern Annular Mode, and the Indian Ocean Dipole were considered in the analyses. The results showed that malaria incidence was associated with local weather factors in most regions, but at different lag times and in different directions. Meanwhile, associations with global climate factors showed trends related to the geographical locations of the study sites. The overall heterogeneous associations suggest the importance of location-specific approaches in PNG, not only for further investigations but also for public health interventions in response to the potential impacts arising from climate change. Papua New Guinea (PNG) is a malaria endemic country where all four human Plasmodium species (Plasmodium falciparum, Plasmodium vivax, Plasmodium malariae, and Plasmodium ovale) circulate in the population with varying distribution and degrees of endemicity [1].
In the nation, malaria is the leading cause of outpatient visits, the fourth leading cause of hospital admissions, and the third most common cause of death [2]. Despite significant reductions of malaria morbidity and mortality in many Pacific and Asian countries, the disease remains a serious public health issue in PNG; indeed, a localized increase in malaria prevalence has been reported over recent decades in the country [3]. Surveys in the 1940s and 1950s found no cases in the highland region, but malaria began to be reported there from the 1960s [4, 5]. One possible contributor to the localized increase is global warming, as changes in disease distribution and transmission intensity following warming temperatures have been observed in other highland areas around the globe [6–8]. As a coastal country lying in the tropical Pacific Ocean, PNG is regarded as highly vulnerable to the effects of climate change. In fact, rising sea levels and warming trends in both annual and seasonal mean air temperatures have already been reported for Port Moresby [9]. Given the nation's vulnerability to climate change and the high public health burden of malaria, it is important for the country to gain a proper understanding of the potential impacts of climate change on the disease in order to prepare integrated action and strategic malaria control and prevention programs. To understand the impacts of climate change, it is first critical to know the current state of the associations between malaria and climate. In PNG, however, only limited information is available, mostly from descriptive assessments, and few epidemiological studies have focused on the topic in depth. The present study was therefore developed to investigate which aspects of local weather and global climate variability are associated with malaria, and how, among different regions in PNG. The local weather factors of interest in this study include rainfall and temperature.
The dependence of malaria transmission on those weather factors is well accepted, owing to their significant roles in the population dynamics of mosquito vectors. Generally, a minimal volume of rainfall is essential to create the water pools necessary for vector breeding and larval habitats, and there are minimum ambient temperatures below which mosquito vectors, and the parasites within them, are biologically unable to develop [5, 10]. In a preceding study, temperature was described as the primary determinant of malaria incidence, as endemicity depends on altitude, on which temperatures also depend [11]. Beyond local weather factors, global climate variability was also taken into consideration in the present study. Global climate here refers to ocean and atmosphere phenomena such as El Niño-Southern Oscillation (ENSO). Because precipitation and temperature conditions are linked to ocean-atmosphere phenomena, the potential indirect impacts of ocean-atmosphere phenomena on malaria transmission in other parts of the world have been documented elsewhere [12–15]. In PNG, local weather has a very significant relationship with sea surface temperature (SST), as the average monthly air temperature and year-to-year variability in rainfall are highly impacted by ENSO [9]. One extreme example of the impact of the ocean environment on weather in PNG is the severe drought caused by the strong El Niño event in 1997 [16]. Considering those relationships between local weather and global climate variability, global climate seems likely to influence malaria transmission in PNG indirectly, through the local weather factors. For global climate variability, not only ENSO but also the ENSO Modoki, the Southern Annular Mode (SAM), and the Indian Ocean Dipole (IOD) were considered in the present study. The El Niño Modoki is a coupled ocean-atmosphere phenomenon in the tropical Pacific that is different from the canonical coupled phenomenon, El Niño.
El Niño is characterized by abnormally warmer SST in the eastern tropical Pacific than usual, whereas El Niño Modoki is characterized by anomalous warming in the central Pacific and anomalous cooling in the eastern and western Pacific [17]. In terms of climatic influence, however, both extreme El Niño and El Niño Modoki events impact PNG in a similar manner, since SST in the western Pacific becomes unusually cooler and brings low rainfall to the country during those events [9, 16]. Data describing the effects of IOD on PNG weather are scarce. However, a typical IOD event is characterized by cooler SST in the eastern part of the Indian Ocean near Indonesia and often results in a decrease of precipitation in the neighboring country, Australia [18]. The effects of SAM on Australia are similarly well-documented, while little is known for PNG [19, 20]. With a view to improving understanding of the global climatic determinants of malaria incidence in geographically distinct foci of PNG, a comprehensive assessment of these ocean-atmospheric phenomena was performed. Malaria and local weather data Data on monthly malaria cases from 1996 to 2008 were obtained from the National Health Information System of PNG. Four administrative provinces and one district were included in this study: the southern part of Western province, Eastern Highlands province, East Sepik province, Madang province, and Port Moresby. All study locations were in the coastal lowlands, except for Eastern Highlands province. Western province and Port Moresby are located on the southern coast, and East Sepik and Madang provinces are on the northern coast. Eastern Highlands province is landlocked and in a mountainous region (>1600 m above sea level). Data on local weather factors (e.g., precipitation, minimum and maximum temperature) were acquired from the PNG National Weather Service.
Ocean climate data The events of ENSO were represented by the NINO3.4 anomaly index, defined by the anomaly from the SST climatology of 1981–2010 in the NINO3.4 region (5° N–5° S, 170–120° W) of the Pacific Ocean. SAM, also known as the Antarctic Oscillation, refers to the alternating pattern of strengthening and weakening westerly winds with high and low pressure bands between the mid and high latitudes in the Southern Hemisphere [19]. Its index was defined by the monthly mean 700-hPa height anomalies at 20° S, projected onto the leading Empirical Orthogonal Function of monthly 700-hPa data for 1979–2000 [21]. The evolution of IOD is represented by the dipole mode index (DMI), defined as the difference in SST between the western (10° S–10° N, 50–70° E) and eastern (10° S–0°, 90–110° E) tropical Indian Ocean [22]. The data for those three climate indices and the measurements described above were obtained from the U.S. National Oceanic and Atmospheric Administration (NOAA). The El Niño Modoki index (EMI) was defined with the SST anomaly over the 1982–2010 base period as $$ \mathrm{E}\mathrm{M}\mathrm{I}={\mathrm{SSTA}}_{\mathrm{central}}-0.5\ \left({\mathrm{SSTA}}_{\mathrm{east}}\right)-0.5\ \left({\mathrm{SSTA}}_{\mathrm{west}}\right) $$ where SSTA indicates the sea surface temperature anomaly of the area-mean regions specified as the central (165° E–140° W, 10° S–10° N), the east (110°–70° W, 15° S–5° N), and the west (125°–145° E, 10° S–20° N) [23]. The data for calculating the index were obtained from the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The examined period of time for the respective local weather and global climate models of each study location is described in the supplemental material (Additional file 1: Table S1). A time series method was applied to evaluate the associations of malaria incidence with local weather and global climate variability, respectively, using a two-step approach.
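The EMI defined above is a simple linear combination of area-mean SST anomalies. A minimal sketch, using synthetic anomaly values since the JAMSTEC data are not reproduced here:

```python
import numpy as np

def emi(ssta_central, ssta_east, ssta_west):
    # El Nino Modoki index from area-mean SST anomalies (deg C)
    c = np.asarray(ssta_central, dtype=float)
    e = np.asarray(ssta_east, dtype=float)
    w = np.asarray(ssta_west, dtype=float)
    return c - 0.5 * e - 0.5 * w

# Synthetic example: central-Pacific warming with cooling on both flanks
print(emi([1.0], [-0.4], [-0.6]))   # about [1.5]
```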
A generalized additive model, which flexibly models nonlinearity with smoothing splines [24], was initially used to visualize the responses of malaria incidence to the exposure factors, since little is known a priori about the relationships. We then confirmed the approximately linear relationships with weather and climate variability, and generalized linear models (GLMs) with a negative binomial distribution were used to estimate the associations. The distribution selection for the GLM is described in the section on sensitivity analysis. Local weather model: $$ \log \left({Y}_t\right)={\beta}_0+{\beta}_1{\mathrm{temperature}}_{t-l}+{\beta}_2{\mathrm{rain}}_{t-l}+\mathrm{ns}(t)+ \log \left(\mathrm{population}\right) $$ Global climate model: $$ \log \left({Y}_t\right)={\beta}_0+{\beta}_1{\mathrm{EMI}}_{t-l}+{\beta}_2\mathrm{NINO}3.4{\mathrm{Anom}}_{t-l}+{\beta}_3{\mathrm{SAM}}_{t-l}+{\beta}_4{\mathrm{DMI}}_{t-l}+\mathrm{ns}(t)+ \log \left(\mathrm{population}\right) $$ The number of reported malaria cases at month t, denoted by Yt, was the outcome variable, while the local weather and global climate factors were the predictors. In the local weather model, minimum temperature instead of maximum temperature was included due to better data completeness. The ns(t) denotes the natural cubic spline function on the observational time, included to remove seasonal variations of each location in the respective local weather and global climate models. The optimal degrees of freedom for the natural cubic spline on the observational time were selected based upon the lowest Akaike's Information Criterion (AIC). The t − l for each weather and climate variable of interest denotes a lag time. Lags for the local weather models were designed as moving averages from the month of the event (0 month) to 3 months prior (0–1, 0–2, and 0–3-month averages).
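The model structure above can be sketched as follows. The snippet builds the 0-l-month moving-average lags and fits a log-link count GLM with a log-population offset by iteratively reweighted least squares. A Poisson fit stands in for the paper's negative binomial GLM (which was fitted in R), the seasonal spline ns(t) is omitted for brevity, and the simulated weather series and coefficients are illustrative assumptions.

```python
import numpy as np

def lag_avg(x, l):
    # Moving average of the current and previous l months (the "0-l" lag)
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    for t in range(l, len(x)):
        out[t] = x[t - l:t + 1].mean()
    return out

def glm_poisson_irls(X, y, offset, n_iter=50):
    # Iteratively reweighted least squares for a log-link Poisson GLM:
    # log E[y] = X beta + offset (a stand-in for the negative binomial GLM;
    # the mean structure is identical)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = eta - offset + (y - mu) / mu              # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

rng = np.random.default_rng(2)
n = 156                                               # 13 years of monthly data
tmin = 22 + rng.standard_normal(n)                    # minimum temperature (deg C)
rain = rng.gamma(2.0, 50.0, n)                        # monthly rainfall (mm)
pop = np.full(n, 1e5)

l = 2                                                 # the 0-2-month lag, say
Xd = np.column_stack([np.ones(n), lag_avg(tmin, l), lag_avg(rain, l) / 10])
keep = ~np.isnan(Xd).any(axis=1)                      # drop the first l months
true_beta = np.array([-8.0, 0.05, 0.02])
y = rng.poisson(np.exp(Xd[keep] @ true_beta + np.log(pop[keep])))
beta_hat = glm_poisson_irls(Xd[keep], y, np.log(pop[keep]))
print(beta_hat)                                       # near (-8.0, 0.05, 0.02)
```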
The global climate models in turn used moving averages from the month of the event (0 month) to 6 months prior (0–1, 0–2, 0–3, 0–4, 0–5, and 0–6-month averages). These lag lengths were first determined a priori based on biological plausibility and then assessed by cross-correlation functions, which confirmed no obvious discrepancy with the a priori choices (Additional file 1: Figures S1 and S2). The annual population of each study site was included through the offset function. The statistical analyses were performed with the statistical software package R version 2.15.3 [25]. As alternative methods for controlling for seasonality, calendar time variables of month and year, periodic harmonic functions (Fourier series formed by sums of sines and cosines), and natural cubic splines of observational time were compared by AIC. The lowest AIC was observed for the model with natural cubic splines, which was consequently chosen for seasonal control. The distributional choice for the GLMs, Poisson versus negative binomial, was also examined via the overdispersion parameter. All models fitted with a Poisson distribution proved overdispersed, and the fits improved with negative binomial GLMs.

Local weather factors
Figure 1 shows the locations of the study sites and their monthly averages of reported malaria cases, precipitation, and temperature. Temperatures show minimal seasonal variation, but precipitation and reported malaria cases show distinct seasonal patterns in each region of PNG. The seasonality of monthly precipitation seems to coincide with that of malaria cases, with the notable exception of Eastern Highlands.
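The overdispersion check used for the distribution choice above can be sketched with the Pearson dispersion statistic: under a well-specified Poisson model it should be near 1, and values well above 1 indicate overdispersion. The counts and fitted means below are hypothetical, not the study's data:

```python
def pearson_dispersion(y, mu, n_params):
    """Pearson chi-square divided by residual degrees of freedom.

    y        : observed counts
    mu       : fitted means from a Poisson GLM (variance = mean under Poisson)
    n_params : number of estimated model parameters
    """
    chi_sq = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    dof = len(y) - n_params
    return chi_sq / dof

# Hypothetical counts vs. fitted means; a ratio well above 1 flags overdispersion
print(pearson_dispersion([0, 10, 5, 9], [5, 5, 6, 8], 2))
```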
The other distinctive characteristic of Eastern Highlands is its low temperature, since the province is located at high altitude. Time series plots for malaria cases and local weather factors during the study period are also available in the supplemental material (Additional file 1: Figure S3).

Study locations and seasonal variations of malaria cases and weather factors. The locations of the five study sites in Papua New Guinea are shown with the monthly averages of reported malaria cases, precipitation (mm), and maximum and minimum temperatures (°C).

The results of the time series analysis varied across study locations (Fig. 2). Malaria incidence in Western province was significantly negatively associated with minimum temperature at the current month and 1 month before (0–1 month), yet the direction of the association shifted to positive from lags of 2 months (0–2 months) onwards. At the 2-month lag, minimum temperature was not significantly associated, but the association later became significant, with increased strength, at the 3-month lag (0–3 months). Although the exact strength of the associations differed by lag, similar transitions from negative to positive associations with minimum temperature were observed in Eastern Highlands and Madang. In Eastern Highlands, the impact was initially negative at the current month and then shifted towards positive at a lag of 1 month, with the 2-month lag showing a significant association. Madang, in turn, showed no significant relationship at any lag, yet the association shifted from negative to positive with increasing strength over time. Port Moresby was the contrast: the strength of the negative association grew over lag times, and the association was significant at a lag of 3 months. In East Sepik, no notable associations with minimum temperature were observed.

The percentage change of malaria cases with 1 °C temperature and 10 mm precipitation increments.
The graphs show the effects of temperature and precipitation at different lag times on malaria cases in the five study locations. The effects are indicated by the percentage change in the number of malaria cases per 1 °C increase in minimum temperature and per 10 mm increase in precipitation. The dots and bars are the estimates and 95 % confidence intervals of the percent change.

For precipitation, a pronounced trend in associations was displayed only in Eastern Highlands and Madang. In Eastern Highlands, significantly negative associations were found at the current month and at lags of 1 and 2 months. In contrast, for Madang, significant positive associations were observed at 1- and 2-month lag times.

Global climate factors
The correlations between the global climatic factors in the same month were weak, with a maximum Pearson's correlation (r = 0.46) between the NINO3.4 anomaly and EMI (Additional file 1: Figure S4). Figure 3 presents the time series of the global climate indices from 1997 to 2008. Compared with local weather factors, the global climate indices showed more consistent directions of association across lag times. There were also trends in association by geographical location of the study sites (Fig. 4). Table 1 presents a summary of the significant associations with their directions and lag times. EMI was negatively associated with malaria in the southern coastal locations, Western and Port Moresby, commonly at short lags of the current month (0 month) and 1 month (0–1 month). Although not significant, the association in Madang was also consistently negative. Eastern Highlands, in contrast, was positively associated with EMI. The NINO3.4 anomaly had negative associations in all study locations. The northern coastal locations of East Sepik and Madang provinces were more likely to be immediately affected (no lag) by the NINO3.4 anomaly, whereas the impacts in the southern locations of Western and Port Moresby were observed much later, after 4 or 5 months.
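The percentage changes reported in these results follow from the log link of the GLM: a coefficient β for a predictor implies a 100(e^{βΔ} − 1) % change in expected cases per increment Δ (e.g., Δ = 1 °C for temperature or 10 mm for precipitation). A minimal sketch with a hypothetical coefficient, not an estimate from the study:

```python
import math

def percent_change(beta, increment=1.0):
    """Percent change in expected cases per `increment` units of a predictor,
    given a log-link GLM coefficient `beta` (on the per-unit scale)."""
    return 100.0 * (math.exp(beta * increment) - 1.0)

# Hypothetical coefficient of 0.05 per degree C -> about a 5.1 % increase per 1 degree C
print(round(percent_change(0.05), 2))
```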
Eastern Highlands, located in the central mountainous area between the northern and southern coastal locations, had associations at both immediate and later lag times (0, 0–1, 0–4, and 0–5 months). Although the directions of the associations differed, malaria incidence in the southern coastal locations, Western and Port Moresby, was significantly associated with DMI. In Western province, malaria cases were negatively associated with DMI from the immediate month up to a 2-month lag. Malaria incidence in Port Moresby, conversely, was positively related to DMI at the current month and 1 month prior, yet shifted to a negative association at a 4-month lag (0–4 months). Of all the global indicators, SAM was the least associated with malaria cases. In Port Moresby, the trend in associations was less consistent but was significant at a 1-month lag (0–1 month). Madang was the only location that had consistently negative associations with SAM at both short and long lags.

Time series plots for NINO3.4 anomaly, EMI, SAM, and DMI from 1997 to 2008

The percentage change of malaria cases with a 1 unit increment of the global climate indicators. The graphs show the effects of EMI, NINO3.4 anomaly, SAM, and DMI at different lag times on malaria cases in the five study locations. The effects are indicated by the percentage changes in the number of cases per 1 unit increment in the global climate indicators.

Table 1 The summary of significant associations with global climate indices

The impacts of temperature and precipitation have been extensively studied as primary determinants of malaria incidence, since they exert critical biological influence on the development and life cycles of both mosquito vectors and malaria parasites. In particular, temperature is a fundamental determinant of parasite development, and the length of the extrinsic incubation period is highly sensitive to ambient temperature [5, 10].
In our results, there were negative associations with minimum temperature in three study locations (i.e., Western, Eastern Highlands, and Madang provinces) when the immediate impact is considered (lag 0). However, positive associations were present when 2- to 3-month time lags were introduced. The lead time prior to observable positive effects varied by location, but the time-dependent associations observed in this study agree with previously published studies [26, 27]. The positive association observed at 2- to 3-month lags also seems biologically plausible given the life cycles of Anopheles mosquitoes and parasites. Malaria incidence in Port Moresby and East Sepik, on the other hand, responded differently to minimum temperature, showing either null or negative associations. This may suggest that we failed to consider substantial confounders or needed a more biologically relevant variable for ambient temperature. For instance, Plasmodium development depends not only on minimum temperature but also on the diurnal temperature range [28, 29]. This is an indicator that we may consider in future analyses. Observed relationships between levels of precipitation and malaria also varied by study region. This is not a surprising finding, considering that the effect of weather variability can be very location specific. Generally, aquatic reservoirs are essential for mosquito survival and breeding. However, the impacts of different volumes and frequencies of precipitation on entomological and epidemiological parameters may vary substantially from one ecosystem to another. A number of studies support this hypothesis, reporting contradictory findings which suggest that the impacts of precipitation are strongly context dependent [30–35]. For example, rainfall may promote malaria transmission by creating ground pools and other water sources in which vectors can breed, yet heavy rains can have flushing effects, removing these habitats [5].
Drought where rainfall is normally abundant, on the contrary, may eliminate predators and create safe havens for mosquitoes, since mosquitoes are susceptible to a range of vertebrate and invertebrate predators found in wetlands [36]. Accordingly, malaria epidemics have been reported in the year following a drought in Venezuela [37]. The impacts of rainfall will therefore be location specific and require interpretation in light of the epidemiology and vector ecology of individual sites. In contrast to the results of the local weather analysis, the global climate indicators revealed more intuitive findings. Extreme positive ENSO and El Niño Modoki events are usually linked to lower volumes of precipitation in PNG. Likewise, positive IOD events bring lower precipitation to neighboring Australia [18]. Therefore, the negative associations between malaria cases and these global climate indices (ENSO, El Niño Modoki, and IOD) found in this study may reflect conditions of lower precipitation and hence lower malaria transmission, or vice versa. The reason why this is not completely consistent with the findings of the local weather analysis is uncertain. Addressing the meteorological bases for the relationships between local weather and global climate factors is beyond the scope of this study. One possible explanation, however, is that the ocean exchanges various components of atmospheric conditions, such as heat, water, and gases, and drives air circulation. Global climate indices that encompass these different aspects of local weather may better capture the conditions favoring malaria transmission and thus serve as plausible predictors of the disease, as found in this study. There are some limitations in this study. Foremost, the assessed outcome was a clinical diagnosis based on the presence and the history of fever. The accuracy of case detection in PNG has been previously described [38]; thus, there is a high possibility that our data include misclassified cases.
This could be a serious issue for proper treatment at the individual level and for precise estimates of impacts. However, clinical diagnosis can still be sufficient to capture the association trends with weather factors, as studies have shown that suspected cases have seasonal variations similar to laboratory-confirmed cases [39, 40], and the statistical focus of time series analysis is relative changes in cases. In addition, because these relative changes are examined through monthly variations, other changes on much longer time scales (e.g., yearly) do not greatly affect or confound our analysis. For instance, changes in the predominant malaria species have been reported in PNG [3], but it is very unlikely that such factors change rapidly within months. This applies to other important non-climatic determinants of malaria incidence, which include intervention programs, socio-economic development, increased population movements, agriculture, urbanization, and drug resistance. Moreover, any potential impacts of such influences were also minimized by adjusting for seasonality and long-term trends in our models. Secondly, there were some missing local weather data (i.e., maximum temperature), particularly from Eastern Highlands. This hampered exploring the sensitivity of different measures of weather variability, such as the diurnal temperature range. In addition, this may have created bias in the assessment results and reduced the power of the analysis due to the shorter observable period of time than that of the global climate indicators. The missing data, however, seemed to have occurred randomly (e.g., data from a single month or season were not consistently absent) and are thus likely free of substantial systematic bias (Additional file 1: Figure S4). In our study, the local weather and global climate factors associated with malaria incidence varied by study site.
This is not a surprising result, as substantial heterogeneity of malaria epidemiology due to environmental and cultural diversity in PNG has also been described in a previous study [11]. Certainly, further investigation is necessary for a better understanding of the topic, but more importantly, our findings suggest the significance of location-specific research and implementation of malaria interventions. Location-specific approaches seem to be one of the keys to minimizing the potential impacts of climate change and maximizing the effects of control and prevention programs in PNG.

AIC, Akaike's Information Criterion; ENSO, El Niño-Southern Oscillation; EMI, El Niño Modoki index; GLM, generalized linear model; IOD, Indian Ocean Dipole; JAMSTEC, Japan Agency for Marine-Earth Science and Technology; NOAA, National Oceanic and Atmospheric Administration; PNG, Papua New Guinea; SAM, Southern Annular Mode; SST, sea surface temperature

Cooper RD, Waterson DG, Frances SP, Beebe NW, Pluess B, Sweeney AW. Malaria vectors of Papua New Guinea. Int J Parasitol. 2009;39:1495–501. World Health Organization Western Pacific Region. Country health information profile. Geneva, Switzerland: WHO Press; 2011. p. 321–9. Mueller I, Tulloch J, Marfurt J, Hide R, Reeder JC. Malaria control in Papua New Guinea results in complex epidemiological changes. P N G Med J. 2005;48:151–7. Mueller I, Bjorge S, Poigeno G, Kundi J, Tandrapah T, Riley ID, et al. The epidemiology of malaria in the Papua New Guinea highlands: 2. Eastern Highlands Province. P N G Med J. 2003;46:166–79. Reiter P. Climate change and mosquito-borne disease. Environ Health Perspect. 2001;109 Suppl 1:141–61. Alonso D, Bouma MJ, Pascual M. Epidemic malaria and warmer temperatures in recent decades in an East African highland. Proc Biol Sci. 2011;278:1661–9. Parry M, Canziani O, Palutikof J, van der Linden P, Hanson C. Climate Change 2007: impacts, adaptation and vulnerability.
Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press; 2007. Omumbo JA, Lyon B, Waweru SM, Connor SJ, Thomson MC. Raised temperatures over the Kericho tea estates: revisiting the climate in the East African highlands malaria debate. Malar J. 2011;10:12. doi:10.1186/1475-2875-10-12. Australian Bureau of Meteorology and CSIRO. Climate Change in the Pacific: scientific assessment and new research. Volume 2: Country Reports.2011. Macdonald G. The epidemiology and control of malaria. London, New York: Oxford University Press; 1957. Müller I, Bockarie M, Alpers M, Smith T. The epidemiology of malaria in Papua New Guinea. Trends Parasitol. 2003;19:253–9. http://dx.doi.org/10.1016/S1471-4922(03)00091-6. Gagnon AS, Smoyer-Tomic KE, Bush ABG. The El Niño Southern Oscillation and malaria epidemics in South America. Int J Biometeorol. 2002;46:81–9. Ototo E, Githeko A, Wanjala C, Scott T. Surveillance of vector populations and malaria transmission during the 2009/10 El Niño event in the western Kenya highlands: opportunities for early detection of malaria hyper-transmission. Parasit Vectors. 2011;4:144. Hanf M, Adenis A, Nacher M, Carme B. The role of El Niño Southern Oscillation (ENSO) on variations of monthly Plasmodium falciparum malaria cases at the cayenne general hospital, 1996–2009, French Guiana. Malar J. 2011;10:100. Hashizume M, Terao T, Minakawa N. The Indian Ocean Dipole and malaria risk in the highlands of western Kenya. Proc Natl Acad Sci U S A. 2009;106:1857–62. doi:10.1073/pnas.0806544106. May, R.J. Policy making and implementation: studies from Papua New Guinea. Bryant J. Allen, R.M.B., Ed. ANU E Press: Canberra, 2009; pp 325–343. Ashok K, Yamagata T. Climate change: the El Niño with a difference. Nature. 2009;461:481–4. Ashok K, Guan Z, Yamagata T. Influence of the Indian Ocean Dipole on the Australian winter rainfall. Geophys Res Lett. 2003;30:CLM 6–1 - 6–4. Ho M, Kiem AS, Verdon-Kidd DC. 
The Southern Annular Mode: a comparison of indices. Hydrol Earth Syst Sci. 2012;16:967–82. doi:10.5194/hess-16-967-2012. Pui A, Sharma A, Santoso A, Westra S. Impact of the El Niño–Southern Oscillation, Indian Ocean Dipole, and Southern Annular Mode on Daily to Subdaily Rainfall Characteristics in East Australia. Mon Weather Rev. 2012;140:1665–82. Climate Prediction Center. Antarctic Oscillation (AAO). National Oceanic and Atmospheric Administration, Maryland. 2005. http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/aao/aao.shtml. Accessed 21 October 2013. Saji NH, Goswami BN, Vinayachandran PN, Yamagata T. A dipole mode in the tropical Indian Ocean. Nature. 1999;401:360–3. doi:10.1038/43854. Weng H, Ashok K, Behera S, Rao S, Yamagata T. Impacts of recent El Niño Modoki on dry/wet conditions in the Pacific rim during boreal summer. Clim Dyn. 2007;29:113–29. doi:10.1007/s00382-007-0234-0. Barnett AG, Dobson AJ. Analysing seasonal health data. Berlin: London: Springer; 2010:138–42. R Development Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2013. http://www.R-project.org/. Teklehaimanot HD, Schwartz J, Teklehaimanot A, Lipsitch M. Weather-based prediction of Plasmodium falciparum malaria in epidemic-prone regions of Ethiopia II. Weather-based prediction systems perform comparably to early detection systems in identifying times for interventions. Malar J. 2004;3:44. Akinbobola A, Omotosho JB. Predicting malaria occurrence in Southwest and North central Nigeria using meteorological parameters. Int J Biometeorol. 2013;57:721–8. Blanford JI, Blanford S, Crane RG, Mann ME, Paaijmans KP, Schreiber KV et al. Implications of temperature variation for malaria parasite development across Africa. Sci Rep. 2013;3:1300. doi:10.1038/srep01300. Paaijmans KP. From the cover: understanding the link between malaria risk and climate. Proc Natl Acad Sci U S A. 2009;106:13844–9. 
doi:10.1073/pnas.0903423106. Briët OJT, Vounatsou P, Gunawardena DM, Galappaththy GNL, Amerasinghe PH. Temporal correlation between malaria and rainfall in Sri Lanka. Malar J. 2008;7:77. doi:10.1186/1475-2875-7-77. Thomson MC, Mason SJ, Phindela T, Connor SJ. Use of rainfall and sea surface temperature monitoring for malaria early warning in Botswana. Am J Trop Med Hyg. 2005;73:214–21. Imbahale SS, Mukabana WR, Orindi B, Githeko AK, Takken W. Variation in malaria transmission dynamics in three different sites in Western Kenya. J Trop Med. 2012;2012:8. doi:10.1155/2012/912408. Bhattacharya S, Sharma C, Dhiman RC, Mitra AP. Climate change and malaria in India. Curr Sci. 2006;90:369–75. Li T, Yang Z, Wang M. Temperature, relative humidity and sunshine may be the effective predictors for occurrence of malaria in Guangzhou, Southern China, 2006–2012. Parasit Vectors. 2013;6. Jusot JF, Alto O. Short term effect of rainfall on suspected malaria episodes at Magaria, Niger: a time series study. Trans R Soc Trop Med Hyg. 2011;105:637–43. doi:10.1016/j.trstmh.2011.07.011. Lafferty KD. The ecology of climate change and infectious diseases. Ecology. 2009;90:888–900. doi:10.1890/08-0079.1. Bouma MJ, Dye C. Cycles of malaria associated with El Niño in Venezuela. JAMA. 1997;278:1772–4. Genton B, Smith T, Baea K, Narara A, Al-Yaman F, Beck HP, et al. Malaria: how useful are clinical criteria for improving the diagnosis in a highly endemic area? Trans R Soc Trop Med Hyg. 1994;88:537–41. Thiam S, Thwing J, Diallo I, Fall F, Diouf M, Perry R, et al. Scale-up of home-based management of malaria based on rapid diagnostic tests and artemisinin-based combination therapy in a resource-poor country: results in Senegal. Malar J. 2012;11:334. D'Acremont V, Kahama-Maro J, Swai N, Mtasiwa D, Genton B, Lengeler C. Reduction of anti-malarial consumption after rapid diagnostic tests implementation in Dar es Salaam: a before-after and cluster randomized controlled study. Malar J. 2011;10:107. 
This study was conducted as a part of the project of the World Health Organization (WHO) Western Pacific Regional Office. However, the findings and opinions presented in this paper do not necessarily represent those of WHO. We also thank Mr. Kasis Inape for providing weather data from PNG. CI led the analysis, interpretation of data, and development of the manuscript. HC, HK, YH, and MH developed the study conception and design and performed the critical revision. JE, CK, JK, and YK organized the collected data and performed preliminary analysis. SKB conducted the critical revision. JN assisted in the manuscript editing and led the critical revision. MNH and HC assisted the realization of the study project. All authors read and approved the final manuscript.
School of Public Health and Social Work, Queensland University of Technology, 60 Musk Avenue, Brisbane, 4064, QLD, Australia
Chisato Imai
Department of Pediatric Infectious Diseases, Institute of Tropical Medicine, Nagasaki University, 1-12-4 Sakamoto, Nagasaki, 852-8523, Japan
Yoonhee Kim & Masahiro Hashizume
Department of Social and Preventive Medicine, Sungkyunkwan University School of Medicine, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do, 440-746, Republic of Korea
Hae-Kwan Cheong, Jin-Hee Eum & Jin Seob Kim
Department of Biostatistics, Graduate School of Public Health, Seoul National University, 599 Gwanak-ro, Gwanak-gu, Seoul, Republic of Korea
Ho Kim & Clara T. Kim
Faculty of Health and Sport Sciences, The University of Tsukuba, Comprehensive Research Building D 709, 1-1-1 Tennoudai, Tsukuba, Japan
Yasushi Honda
Japan Agency for Marine-Earth Science and Technology (JAMSTEC), Yokohama Institute for Earth Science, 3173-25 Showa-machi, Kanazawa-ku, Yokohama, 236-0001, Japan
Swadhin K. Behera
World Health Organization Western Pacific Regional Office, P.O.
Box 2932, 1000, Manila, Philippines
Mohd Nasir Hassan, Joshua Nealon & Hyenmi Chung
National Institute of Environmental Research, Hwangyong-ro 42, Seogu, Incheon, Republic of Korea
Hyenmi Chung
Correspondence to Chisato Imai or Masahiro Hashizume.
Additional file 1: Table S1. The periods of time for the respective local weather and global climate models for each study location. Table S2: The crude Pearson's correlations among malaria cases and local weather factors during the study period at each study location. Figure S1: Cross-correlations for malaria cases and local weather factors. Cross-correlations identify the lagged relationships. The correlograms show the correlations between malaria cases at time t and local weather at lag time t + k (i.e., k is a lag). Figure S2: Cross-correlations for malaria cases and global climate factors. The associations between malaria cases at time t and global climate indices at time t + k (i.e., lag). Figure S3: Time series plots for malaria cases, precipitation, and minimum temperature in each region during the study period. Figure S4: Correlation, histogram, and plot matrix for EMI, NINO3.4 anomaly, DMI, and SAM during 1997–2008. (PDF 1057 kb)
Imai, C., Cheong, H., Kim, H. et al. Associations between malaria and local and global climate variability in five regions in Papua New Guinea. Trop Med Health 44, 23 (2016). doi:10.1186/s41182-016-0021-x
Most discussion I have read about using tethers and rotation to simulate gravity on spacecraft talks about simulating Earth's gravity - 1g or 9.8 m/s2. Baked into the 1g figure is the assumption that humans evolved on Earth where gravity is 1g, so it's probably healthiest for us. But is that really necessary? Have there been any studies or research into how much gravity is actually needed in order to minimize the long-term health effects? A spaceship that rotates to generate 1g of gravity would either require a (debatably) impractically long tether, or have to spin so fast that it would cause a disorienting Coriolis effect. Wikipedia states that the human factors of a Coriolis effect would be mostly negligible at 2 rpm. By my calculations, at Earth's gravity, that yields a radius of 224 meters, whereas at Mars's gravity, that is reduced to 84 meters. One could potentially imagine a spacecraft with two manned modules connected by tethers and a 168 meter long inflatable tube to allow crew and supplies to pass between them; however, bring that up to 448 meters for 1g - over a quarter of a mile - and you can see that it starts to become impractical. A spaceship that is enduring 1g of centripetal acceleration would have to be built with the same rigidity and structural properties as a similarly sized structure on Earth, meaning that it would require more materials and therefore have more mass. If we assume the entire craft is assembled on Earth and launched as a unit, perhaps this would not be such a problem - since the spacecraft would have to be built under Earth conditions and in fact endure acceleration significantly greater than 1g during launch. Even so, the ship would be in a different configuration during launch that could be designed to be more rigid, or perhaps an aerodynamic fairing could be used to provide extra rigidity during launch, or perhaps the acceleration during launch would be on a different axis than the planned rotation.
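The radii quoted in the question follow from the centripetal relation a = ω²r. A quick sketch (taking Mars surface gravity as roughly 3.7 m/s², an assumed round figure):

```python
import math

def spin_radius(accel, rpm):
    """Radius (m) giving centripetal acceleration `accel` (m/s^2) at `rpm`."""
    omega = rpm * 2 * math.pi / 60  # angular velocity in rad/s
    return accel / omega ** 2

# At 2 rpm: about 224 m for 1g (9.81 m/s^2), about 84 m for Mars gravity (~3.7 m/s^2)
print(round(spin_radius(9.81, 2)), round(spin_radius(3.7, 2)))  # -> 224 84
```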
However, assuming some amount of orbital assembly, which is in fact rather likely, there would have to be at least some additional mass overhead to design for 1g of simulated gravity. If we're going to send astronauts on multi-month or multi-year missions to Mars, or even start a colony there, they're going to be exposed to Martian gravity, which is (very approximately) one third that of Earth's. Similarly, gravity on the Moon is one sixth that of Earth's. If we think that's OK, then why bother with 1g on board the spacecraft to take them there?

orulz

The actual answer is "we don't know." The problem is too complex to model; it must be answered experimentally - and the experiment is a little too expensive for the current political climate. – SF.

Can any insights be gained from those who have spent long periods of time confined to bed (in 1g)? If we see effects on astronauts linked to bone and cardiovascular health due to the absence of forces against which the long bones and the heart have to work, isn't being in a prone position comparable? Haven't there been studies based on that exact premise? And if so, wouldn't that create the opportunity to investigate thresholds at which effects begin to appear? – Anthony X

@AnthonyX: Yes, bed rest has many effects very similar to zero gee. For bone density, there is evidence for a very low threshold: nytimes.com/2016/04/02/health/…

At least we have two practically relevant g-levels to consider: that of the Moon and that of Mars. If neither is enough for human health, then our settlement of space will be very different than if that medium gravity turns out to be healthy for us. I speculate that even slight gravity is enough to solve many problems, and gymnastics in a low gravity environment could take care of the problem during a couple of years at least.
Just rotating the Mars transfer spacecraft slightly would get rid of a lot of microbe microgravity problems as well as gym equipment problems. – LocalFluff

I wonder how hard an animal experiment would be. – ikrase

We do not know yet. The main issue is a lack of empirical data. There are only four specially trained volunteers with more than one year of exposure to microgravity. We'd need hundreds of volunteers under different gravities to measure the difference (it is, however, plausible to assume that the effects depend gradually on the level of gravity). Conclusion: for any long-term mission we should provide as much gravity (up to 1g, of course) as is reasonably possible.

choeger

Certainly it is not possible to reasonably provide 1g while on Mars (who wants to live in a giant centrifuge?), so of course this applies to transit. Assuming an approx. 2 year cycle for Mars and Earth to be in alignment for transportation between the two worlds, and given a reasonable transit time (three months?), is the benefit of a full 1g worth the extra cost? Would six extra months at Mars gravity of 1/3g make much difference? Would some intermediate value (1/2g or 2/3g) make sense as a compromise and serve to help astronauts "transition" more gradually between the two environments? – orulz

As I said: we just do not know. All we know is that the risk of exposing a well-trained person under a strict regimen for roughly a year seems to be tolerable in some cases. In the case of a Mars mission, that might still have fatal consequences for at least some of the crew. – choeger

I find it frustrating that more hasn't been done to find this out. A year at zero G might be acceptable, for some definitions of acceptable. But you don't want to get someone to Mars only to find out they can't walk when they get there.
And you don't want to build a colony on Mars only to discover that we degenerate and die under Martian gravity. Somewhere between zero and 1 gee is an amount we can live with long term, and that number has a huge impact on a Mars mission. Yet here we are, blithely designing Mars missions without knowing that. – Eric Shafto

@EricShafto I completely agree. The gravity question is the only one that matters long-term because it's the one problem that can't be solved by engineering in the foreseeable future. I certainly wouldn't want to live in 38% gravity. One would weigh 2.63 times as much on Earth. We should build large rotating space stations and experiment. Maybe slightly more than 1G increases life span, who knows? We need to find out. – nmit026

Why isn't there more of a push (at least, I've never heard of one) for an unmanned mission to study this with animals?

We don't know. We currently only have good data for how humans are doing in 9.81 m/s² or in 0 m/s² acceleration. The only case where humans were ever exposed to anything between 1g and 0g for longer than a few minutes was during the Apollo moon landings (1.62 m/s²). The longest was Apollo 17 with 75 hours. Still not nearly long enough to notice any long-term health effects. To find out how exactly different health problems scale with decreasing gravity, we would need to put some humans in different gravity environments for several months. Options could be a permanent moon base or a centrifugal space station in Earth orbit. As far as I know, neither is currently in the budget of any space agency.

Philipp

What do you mean by "The only case where humans were ever exposed to anything between 1g and 0g for longer than a few minutes was during the Apollo moon landings (1.62 m/s²). The longest was Apollo 17 with 75 hours."? To what g forces were they exposed, and for what period of time?
Landing itself certainly didn't take 75 hours. $\endgroup$ – David Cage

$\begingroup$ @DavidCage What I meant was that Apollo 17 spent 75 hours on the surface of the Moon. During that time the astronauts were permanently in a 1.62 m/s² gravity environment. This was the longest time humans were ever exposed to a constant gravity notably lower than 1g but notably higher than 0g. $\endgroup$

$\begingroup$ My mistake. I remembered that Buzz Aldrin and Neil Armstrong spent only 21 hours on the Moon, so I didn't think any landing could take so long. Why such a huge difference between the first and last missions? $\endgroup$

$\begingroup$ @DavidCage That's something you might want to ask as a new question. $\endgroup$

$\begingroup$ @DavidCage Because the first mission was to figure out how to get there: massive margins for safety, and inefficiencies due to lack of knowledge about what they would encounter. Later missions could optimize, cut corners where safe, carry along golfing equipment (both a club and a very golf-cart-like vehicle!), etc. $\endgroup$ – CuteKItty_pleaseStopBArking

I think that despite the lack of empirical data there are some basic assumptions we can agree on.

Since astronauts can adapt well enough, even to one year of living in zero g, full 1g Earth gravity likely isn't necessary for permanent living. A person with a healthy body mass index (BMI) shouldn't have a problem adapting to living with 10% lower or higher surface gravity, or even to 30% lower or higher surface gravity (SG) after some period of time. Adapting to 10-30% higher SG should, for most people, be harder than adapting to 10-30% lower SG.

Since Mars' gravity is very different from both Earth's 1g and the zero g experienced on the ISS and Mir, there is a high level of uncertainty about how seriously health will be affected by prolonged living on Mars, or even whether prolonged living on Mars is possible.
Similar health effects to those that occur on prolonged spaceflights (bone and muscle deterioration, eyesight problems, elevated blood calcium levels from the bone loss) should also be present on Mars, but we don't know to what extent, or whether astronauts can adapt to them.

Artificial centripetal gravity of magnitude 0.7-1g on a rotating spaceship should prevent most of the negative health effects mentioned above (at least bone and muscle deterioration and elevated blood calcium levels from bone loss), but could also have other negative health effects we don't yet know of, not just Coriolis-effect-induced dizziness and nausea.

David Cage

$\begingroup$ Could you please clarify what SG means? $\endgroup$ – Fred

$\begingroup$ SG is just an abbreviation for surface gravity; I spelled it out several times before using the shortened form. $\endgroup$
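As a quick sanity check of the weight-ratio figure quoted earlier in the thread (the claim that someone adapted to 38% gravity would weigh 2.63 times as much on Earth), here is a minimal sketch. The surface-gravity values are the standard published ones; the helper function name is my own:

```python
# Standard surface gravities in m/s^2.
EARTH = 9.81
MARS = 3.71   # roughly 38% of Earth's
MOON = 1.62   # roughly 16.5% of Earth's

def weight_ratio(g_here: float, g_there: float) -> float:
    """How many times heavier the same mass feels at g_here compared to g_there."""
    return g_here / g_there

# Using the exact surface gravities gives ~2.64; the thread's 2.63 figure
# comes from using the rounded 38% value (1 / 0.38 = 2.63).
print(round(weight_ratio(EARTH, MARS), 2))  # 2.64
print(round(1 / 0.38, 2))                   # 2.63
```

The small discrepancy (2.64 vs 2.63) is just rounding in the "38%" shorthand; either way, a Mars settler returning to Earth would feel well over two and a half times heavier.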